From python-checkins at python.org Fri Feb 1 00:20:15 2013 From: python-checkins at python.org (victor.stinner) Date: Fri, 1 Feb 2013 00:20:15 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_433=3A_update_the_impleme?= =?utf-8?q?ntation?= Message-ID: <3Yxy8v3V1hzRB2@mail.python.org> http://hg.python.org/peps/rev/a9b88df8cbab changeset: 4707:a9b88df8cbab user: Victor Stinner date: Fri Feb 01 00:19:01 2013 +0100 summary: PEP 433: update the implementation files: pep-0433.txt | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/pep-0433.txt b/pep-0433.txt --- a/pep-0433.txt +++ b/pep-0433.txt @@ -532,12 +532,14 @@ os.dup() -------- + * Windows: ``DuplicateHandle()`` [atomic] * ``fcntl(fd, F_DUPFD_CLOEXEC)`` [atomic] * ``dup()`` + ``os.set_cloexec(fd, True)`` [best-effort] os.dup2() --------- + * ``fcntl(fd, F_DUP2FD_CLOEXEC, fd2)`` [atomic] * ``dup3()`` with ``O_CLOEXEC`` flag [atomic] * ``dup2()`` + ``os.set_cloexec(fd, True)`` [best-effort] -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 1 00:49:33 2013 From: python-checkins at python.org (victor.stinner) Date: Fri, 1 Feb 2013 00:49:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_433=3A_subprocess_clears_?= =?utf-8?q?cloexec_flag_of_pass=5Ffds?= Message-ID: <3Yxypj1lkPzMPr@mail.python.org> http://hg.python.org/peps/rev/a9f8eb11b08b changeset: 4708:a9f8eb11b08b user: Victor Stinner date: Fri Feb 01 00:48:18 2013 +0100 summary: PEP 433: subprocess clears cloexec flag of pass_fds files: pep-0433.txt | 11 +++-------- 1 files changed, 3 insertions(+), 8 deletions(-) diff --git a/pep-0433.txt b/pep-0433.txt --- a/pep-0433.txt +++ b/pep-0433.txt @@ -174,9 +174,6 @@ should be modified to conform to this PEP. The new ``os.set_cloexec()`` function can be used for example. -XXX Should ``subprocess.Popen`` clear the close-on-exec flag on file -XXX descriptors of the constructor the ``pass_fds`` parameter? - .. 
note:: See `Close file descriptors after fork`_ for a possible solution for ``fork()`` without ``exec()``. @@ -229,6 +226,9 @@ Add a new command line option ``-e`` and an environment variable ``PYTHONCLOEXEC`` to the set close-on-exec flag by default. +``subprocess`` clears the close-on-exec flag of file descriptors of the +``pass_fds`` parameter. + All functions creating file descriptors in the standard library must respect the default *cloexec* parameter (``sys.getdefaultcloexec()``). @@ -284,11 +284,6 @@ If a file must be inherited by child processes, ``cloexec=False`` parameter can be used. -``subprocess.Popen`` constructor has an ``pass_fds`` parameter to -specify which file descriptors must be inherited. The close-on-exec -flag of these file descriptors must be changed with -``os.set_cloexec()``. - Advantages of setting close-on-exec flag by default: * There are far more programs that are bitten by FD inheritance upon -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 1 01:01:01 2013 From: python-checkins at python.org (victor.stinner) Date: Fri, 1 Feb 2013 01:01:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_433=3A_typo?= Message-ID: <3Yxz3x2bWfzRB2@mail.python.org> http://hg.python.org/peps/rev/c969d6ce3619 changeset: 4709:c969d6ce3619 user: Victor Stinner date: Fri Feb 01 00:59:47 2013 +0100 summary: PEP 433: typo files: pep-0433.txt | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/pep-0433.txt b/pep-0433.txt --- a/pep-0433.txt +++ b/pep-0433.txt @@ -230,7 +230,8 @@ ``pass_fds`` parameter. All functions creating file descriptors in the standard library must -respect the default *cloexec* parameter (``sys.getdefaultcloexec()``). +respect the default value of the *cloexec* parameter: +``sys.getdefaultcloexec()``. File descriptors 0 (stdin), 1 (stdout) and 2 (stderr) are expected to be inherited, but Python does not handle them differently. 
When -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 1 04:02:31 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 1 Feb 2013 04:02:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_=2317040=3A_document_that_?= =?utf-8?q?shelve=2Eopen=28=29_and_the_Shelf_object_can_be_used_as_context?= Message-ID: <3Yy35M5FxSzQ12@mail.python.org> http://hg.python.org/cpython/rev/935a286b8066 changeset: 81860:935a286b8066 parent: 81858:e6cc582cafce user: Ezio Melotti date: Fri Feb 01 05:01:50 2013 +0200 summary: #17040: document that shelve.open() and the Shelf object can be used as context managers. Initial patch by Berker Peksag. files: Doc/library/shelve.rst | 16 ++++++++++++---- 1 files changed, 12 insertions(+), 4 deletions(-) diff --git a/Doc/library/shelve.rst b/Doc/library/shelve.rst --- a/Doc/library/shelve.rst +++ b/Doc/library/shelve.rst @@ -44,8 +44,11 @@ .. note:: Do not rely on the shelf being closed automatically; always call - :meth:`close` explicitly when you don't need it any more, or use a - :keyword:`with` statement with :func:`contextlib.closing`. + :meth:`~Shelf.close` explicitly when you don't need it any more, or + use :func:`shelve.open` as a context manager:: + + with shelve.open('spam') as db: + db['eggs'] = 'eggs' .. warning:: @@ -118,10 +121,15 @@ The *keyencoding* parameter is the encoding used to encode keys before they are used with the underlying dict. - .. versionadded:: 3.2 - The *keyencoding* parameter; previously, keys were always encoded in + :class:`Shelf` objects can also be used as context managers. + + .. versionchanged:: 3.2 + Added the *keyencoding* parameter; previously, keys were always encoded in UTF-8. + .. versionchanged:: 3.4 + Added context manager support. + .. 
class:: BsdDbShelf(dict, protocol=None, writeback=False, keyencoding='utf-8') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 04:20:42 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 1 Feb 2013 04:20:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE2MTI4OiBjbGFy?= =?utf-8?q?ify_that_instances_of_user-defined_classes_compare_equal_with?= Message-ID: <3Yy3VL2XyVzRKB@mail.python.org> http://hg.python.org/cpython/rev/79a021beaf58 changeset: 81861:79a021beaf58 branch: 2.7 parent: 81859:8ee6d96a1019 user: Ezio Melotti date: Fri Feb 01 05:18:44 2013 +0200 summary: #16128: clarify that instances of user-defined classes compare equal with themselves. files: Doc/glossary.rst | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/Doc/glossary.rst b/Doc/glossary.rst --- a/Doc/glossary.rst +++ b/Doc/glossary.rst @@ -330,7 +330,8 @@ All of Python's immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all - compare unequal, and their hash value is their :func:`id`. + compare unequal (except with themselves), and their hash value is their + :func:`id`. IDLE An Integrated Development Environment for Python. 
IDLE is a basic editor -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 04:20:43 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 1 Feb 2013 04:20:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE2MTI4OiBjbGFy?= =?utf-8?q?ify_that_instances_of_user-defined_classes_compare_equal_with?= Message-ID: <3Yy3VM54yjzRJm@mail.python.org> http://hg.python.org/cpython/rev/e84c5cf92b6f changeset: 81862:e84c5cf92b6f branch: 3.2 parent: 81856:9c0cd608464e user: Ezio Melotti date: Fri Feb 01 05:18:44 2013 +0200 summary: #16128: clarify that instances of user-defined classes compare equal with themselves. files: Doc/glossary.rst | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/Doc/glossary.rst b/Doc/glossary.rst --- a/Doc/glossary.rst +++ b/Doc/glossary.rst @@ -320,7 +320,8 @@ All of Python's immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all - compare unequal, and their hash value is their :func:`id`. + compare unequal (except with themselves), and their hash value is their + :func:`id`. IDLE An Integrated Development Environment for Python. IDLE is a basic editor -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 04:20:45 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 1 Feb 2013 04:20:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2316128=3A_merge_with_3=2E2=2E?= Message-ID: <3Yy3VP0TqKzRKL@mail.python.org> http://hg.python.org/cpython/rev/d9255c100971 changeset: 81863:d9255c100971 branch: 3.3 parent: 81857:886f48754f7e parent: 81862:e84c5cf92b6f user: Ezio Melotti date: Fri Feb 01 05:20:06 2013 +0200 summary: #16128: merge with 3.2. 
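The glossary clarification in this changeset — instances of user-defined classes are hashable by default and compare unequal except with themselves — can be checked directly. A minimal sketch (note that `hash(x) == id(x)` is a CPython implementation detail, as discussed later in this thread, so it is deliberately not asserted):

```python
class C:
    pass

a, b = C(), C()

# Hashable by default, with identity-based equality:
assert a == a and a != b      # an instance is equal only to itself
assert hash(a) == hash(a)     # hash is stable
assert {a, b} == {a, b}       # usable as set members / dict keys
```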
files: Doc/glossary.rst | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/Doc/glossary.rst b/Doc/glossary.rst --- a/Doc/glossary.rst +++ b/Doc/glossary.rst @@ -320,7 +320,8 @@ All of Python's immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all - compare unequal, and their hash value is their :func:`id`. + compare unequal (except with themselves), and their hash value is their + :func:`id`. IDLE An Integrated Development Environment for Python. IDLE is a basic editor -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 04:20:46 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 1 Feb 2013 04:20:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE2MTI4OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3Yy3VQ355CzRJ5@mail.python.org> http://hg.python.org/cpython/rev/1890c63f6153 changeset: 81864:1890c63f6153 parent: 81860:935a286b8066 parent: 81863:d9255c100971 user: Ezio Melotti date: Fri Feb 01 05:20:20 2013 +0200 summary: #16128: merge with 3.3. files: Doc/glossary.rst | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/Doc/glossary.rst b/Doc/glossary.rst --- a/Doc/glossary.rst +++ b/Doc/glossary.rst @@ -320,7 +320,8 @@ All of Python's immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all - compare unequal, and their hash value is their :func:`id`. + compare unequal (except with themselves), and their hash value is their + :func:`id`. IDLE An Integrated Development Environment for Python. 
IDLE is a basic editor -- Repository URL: http://hg.python.org/cpython From ncoghlan at gmail.com Fri Feb 1 05:15:45 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 1 Feb 2013 14:15:45 +1000 Subject: [Python-checkins] cpython (2.7): #16128: clarify that instances of user-defined classes compare equal with In-Reply-To: <3Yy3VL2XyVzRKB@mail.python.org> References: <3Yy3VL2XyVzRKB@mail.python.org> Message-ID: On 1 Feb 2013 13:22, "ezio.melotti" wrote: > > http://hg.python.org/cpython/rev/79a021beaf58 > changeset: 81861:79a021beaf58 > branch: 2.7 > parent: 81859:8ee6d96a1019 > user: Ezio Melotti > date: Fri Feb 01 05:18:44 2013 +0200 > summary: > #16128: clarify that instances of user-defined classes compare equal with themselves. > > files: > Doc/glossary.rst | 3 ++- > 1 files changed, 2 insertions(+), 1 deletions(-) > > > diff --git a/Doc/glossary.rst b/Doc/glossary.rst > --- a/Doc/glossary.rst > +++ b/Doc/glossary.rst > @@ -330,7 +330,8 @@ > All of Python's immutable built-in objects are hashable, while no mutable > containers (such as lists or dictionaries) are. Objects which are > instances of user-defined classes are hashable by default; they all > - compare unequal, and their hash value is their :func:`id`. > + compare unequal (except with themselves), and their hash value is their > + :func:`id`. The hash(x) == id(x) behaviour is a CPython implementation detail. It shouldn't be mentioned here. > > IDLE > An Integrated Development Environment for Python. IDLE is a basic editor > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Fri Feb 1 06:00:32 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Fri, 01 Feb 2013 06:00:32 +0100 Subject: [Python-checkins] Daily reference leaks (e6cc582cafce): sum=0 Message-ID: results for e6cc582cafce on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogRH6JFY', '-x'] From python-checkins at python.org Fri Feb 1 12:17:30 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 1 Feb 2013 12:17:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3ODM6?= =?utf-8?q?_Remove_declarations_of_nonexistent_private_variables=2E?= Message-ID: <3YyG4V3z5yzNyM@mail.python.org> http://hg.python.org/cpython/rev/6074530b526f changeset: 81865:6074530b526f branch: 2.7 parent: 81861:79a021beaf58 user: Serhiy Storchaka date: Fri Feb 01 13:13:32 2013 +0200 summary: Issue #1783: Remove declarations of nonexistent private variables. files: Include/sysmodule.h | 3 --- 1 files changed, 0 insertions(+), 3 deletions(-) diff --git a/Include/sysmodule.h b/Include/sysmodule.h --- a/Include/sysmodule.h +++ b/Include/sysmodule.h @@ -19,9 +19,6 @@ PyAPI_FUNC(void) PySys_WriteStderr(const char *format, ...) 
Py_GCC_ATTRIBUTE((format(printf, 1, 2))); -PyAPI_DATA(PyObject *) _PySys_TraceFunc, *_PySys_ProfileFunc; -PyAPI_DATA(int) _PySys_CheckInterval; - PyAPI_FUNC(void) PySys_ResetWarnOptions(void); PyAPI_FUNC(void) PySys_AddWarnOption(char *); PyAPI_FUNC(int) PySys_HasWarnOptions(void); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 12:17:31 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 1 Feb 2013 12:17:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3ODM6?= =?utf-8?q?_Remove_declarations_of_nonexistent_private_variables=2E?= Message-ID: <3YyG4W6ZgczR9Y@mail.python.org> http://hg.python.org/cpython/rev/349419bb6283 changeset: 81866:349419bb6283 branch: 3.2 parent: 81862:e84c5cf92b6f user: Serhiy Storchaka date: Fri Feb 01 13:14:20 2013 +0200 summary: Issue #1783: Remove declarations of nonexistent private variables. files: Include/sysmodule.h | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diff --git a/Include/sysmodule.h b/Include/sysmodule.h --- a/Include/sysmodule.h +++ b/Include/sysmodule.h @@ -20,10 +20,6 @@ PyAPI_FUNC(void) PySys_FormatStdout(const char *format, ...); PyAPI_FUNC(void) PySys_FormatStderr(const char *format, ...); -#ifndef Py_LIMITED_API -PyAPI_DATA(PyObject *) _PySys_TraceFunc, *_PySys_ProfileFunc; -#endif - PyAPI_FUNC(void) PySys_ResetWarnOptions(void); PyAPI_FUNC(void) PySys_AddWarnOption(const wchar_t *); PyAPI_FUNC(void) PySys_AddWarnOptionUnicode(PyObject *); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 12:17:33 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 1 Feb 2013 12:17:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=231783=3A_Remove_declarations_of_nonexistent_private_va?= =?utf-8?q?riables=2E?= Message-ID: <3YyG4Y2NlTzR16@mail.python.org> http://hg.python.org/cpython/rev/9d68f705e25f 
changeset: 81867:9d68f705e25f branch: 3.3 parent: 81863:d9255c100971 parent: 81866:349419bb6283 user: Serhiy Storchaka date: Fri Feb 01 13:14:47 2013 +0200 summary: Issue #1783: Remove declarations of nonexistent private variables. files: Include/sysmodule.h | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diff --git a/Include/sysmodule.h b/Include/sysmodule.h --- a/Include/sysmodule.h +++ b/Include/sysmodule.h @@ -20,10 +20,6 @@ PyAPI_FUNC(void) PySys_FormatStdout(const char *format, ...); PyAPI_FUNC(void) PySys_FormatStderr(const char *format, ...); -#ifndef Py_LIMITED_API -PyAPI_DATA(PyObject *) _PySys_TraceFunc, *_PySys_ProfileFunc; -#endif - PyAPI_FUNC(void) PySys_ResetWarnOptions(void); PyAPI_FUNC(void) PySys_AddWarnOption(const wchar_t *); PyAPI_FUNC(void) PySys_AddWarnOptionUnicode(PyObject *); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 12:17:34 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 1 Feb 2013 12:17:34 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=231783=3A_Remove_declarations_of_nonexistent_priv?= =?utf-8?q?ate_variables=2E?= Message-ID: <3YyG4Z4vGhzRB1@mail.python.org> http://hg.python.org/cpython/rev/905b4e3cf6d0 changeset: 81868:905b4e3cf6d0 parent: 81864:1890c63f6153 parent: 81867:9d68f705e25f user: Serhiy Storchaka date: Fri Feb 01 13:15:17 2013 +0200 summary: Issue #1783: Remove declarations of nonexistent private variables. 
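The `_PySys_TraceFunc` and `_PySys_ProfileFunc` declarations removed here never had definitions; the supported way to install trace hooks is the public `sys` API (backed at the C level by `PyEval_SetTrace`). A quick sketch of that public interface:

```python
import sys

def tracer(frame, event, arg):
    # A no-op trace function; returning itself keeps local tracing active.
    return tracer

sys.settrace(tracer)              # install (C counterpart: PyEval_SetTrace)
assert sys.gettrace() is tracer
sys.settrace(None)                # uninstall
assert sys.gettrace() is None
```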
files: Include/sysmodule.h | 4 ---- 1 files changed, 0 insertions(+), 4 deletions(-) diff --git a/Include/sysmodule.h b/Include/sysmodule.h --- a/Include/sysmodule.h +++ b/Include/sysmodule.h @@ -20,10 +20,6 @@ PyAPI_FUNC(void) PySys_FormatStdout(const char *format, ...); PyAPI_FUNC(void) PySys_FormatStderr(const char *format, ...); -#ifndef Py_LIMITED_API -PyAPI_DATA(PyObject *) _PySys_TraceFunc, *_PySys_ProfileFunc; -#endif - PyAPI_FUNC(void) PySys_ResetWarnOptions(void); PyAPI_FUNC(void) PySys_AddWarnOption(const wchar_t *); PyAPI_FUNC(void) PySys_AddWarnOptionUnicode(PyObject *); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 20:07:44 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 20:07:44 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MDk4?= =?utf-8?q?=3A_Make_sure_every_module_has_=5F=5Floader=5F=5F_defined=2E?= Message-ID: <3YySW42gXszNgq@mail.python.org> http://hg.python.org/cpython/rev/05747d3bcd9c changeset: 81869:05747d3bcd9c branch: 3.3 parent: 81867:9d68f705e25f user: Brett Cannon date: Fri Feb 01 14:04:12 2013 -0500 summary: Issue #17098: Make sure every module has __loader__ defined. Thanks to Thomas Heller for the bug report. 
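The `_bootstrap.py` hunk in this changeset iterates over all of `sys.modules` instead of just `(_imp, sys)`, back-filling `__loader__` on any real module object that lacks one. The same loop can be re-created with plain module objects; `FakeLoader` is a placeholder name for `BuiltinImporter`, and the extra `is None` check is an adaptation for Pythons that pre-populate `__loader__` with `None`:

```python
import types

class FakeLoader:          # stand-in for importlib's BuiltinImporter
    pass

modules = {'m': types.ModuleType('m'), 'not_a_module': object()}

module_type = types.ModuleType
for module in modules.values():
    if isinstance(module, module_type):
        # Back-fill __loader__ on modules that pre-exist the bootstrap.
        if getattr(module, '__loader__', None) is None:
            module.__loader__ = FakeLoader

assert modules['m'].__loader__ is FakeLoader
assert not hasattr(modules['not_a_module'], '__loader__')
```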
files: Lib/importlib/_bootstrap.py | 8 +- Misc/NEWS | 3 + Modules/signalmodule.c | 3 +- Python/importlib.h | 572 ++++++++++++----------- 4 files changed, 298 insertions(+), 288 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -1703,9 +1703,11 @@ else: BYTECODE_SUFFIXES = DEBUG_BYTECODE_SUFFIXES - for module in (_imp, sys): - if not hasattr(module, '__loader__'): - module.__loader__ = BuiltinImporter + module_type = type(sys) + for module in sys.modules.values(): + if isinstance(module, module_type): + if not hasattr(module, '__loader__'): + module.__loader__ = BuiltinImporter self_module = sys.modules[__name__] for builtin_name in ('_io', '_warnings', 'builtins', 'marshal'): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #17098: All modules now have __loader__ set even if they pre-exist the + bootstrapping of importlib. + - Issue #16979: Fix error handling bugs in the unicode-escape-decode decoder. 
- Issue #13886: Fix input() to not strip out input bytes that cannot be decoded diff --git a/Modules/signalmodule.c b/Modules/signalmodule.c --- a/Modules/signalmodule.c +++ b/Modules/signalmodule.c @@ -1367,9 +1367,8 @@ void PyOS_InitInterrupts(void) { - PyObject *m = PyInit_signal(); + PyObject *m = PyImport_ImportModule("signal"); if (m) { - _PyImport_FixupBuiltin(m, "signal"); Py_DECREF(m); } } diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 20:07:45 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 20:07:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317098=3A_all_modules_should_have_=5F=5Floader?= =?utf-8?b?X18=?= Message-ID: <3YySW55YCPzSP4@mail.python.org> http://hg.python.org/cpython/rev/1f1a1b3cc416 changeset: 81870:1f1a1b3cc416 parent: 81868:905b4e3cf6d0 parent: 81869:05747d3bcd9c user: Brett Cannon date: Fri Feb 01 14:07:28 2013 -0500 summary: Issue #17098: all modules should have __loader__ files: Lib/importlib/_bootstrap.py | 8 +++++--- Misc/NEWS | 3 +++ Modules/signalmodule.c | 3 +-- 3 files changed, 9 insertions(+), 5 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -1723,9 +1723,11 @@ else: BYTECODE_SUFFIXES = DEBUG_BYTECODE_SUFFIXES - for module in (_imp, sys): - if not hasattr(module, '__loader__'): - module.__loader__ = BuiltinImporter + module_type = type(sys) + for module in sys.modules.values(): + if isinstance(module, module_type): + if not hasattr(module, '__loader__'): + module.__loader__ = BuiltinImporter self_module = sys.modules[__name__] for builtin_name in ('_io', '_warnings', 'builtins', 'marshal'): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 
@@ Core and Builtins ----------------- +- Issue #17098: All modules now have __loader__ set even if they pre-exist the + bootstrapping of importlib. + - Issue #16979: Fix error handling bugs in the unicode-escape-decode decoder. - Issue #13886: Fix input() to not strip out input bytes that cannot be decoded diff --git a/Modules/signalmodule.c b/Modules/signalmodule.c --- a/Modules/signalmodule.c +++ b/Modules/signalmodule.c @@ -1362,9 +1362,8 @@ void PyOS_InitInterrupts(void) { - PyObject *m = PyInit_signal(); + PyObject *m = PyImport_ImportModule("signal"); if (m) { - _PyImport_FixupBuiltin(m, "signal"); Py_DECREF(m); } } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 20:40:33 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 20:40:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Fixes_Issue_?= =?utf-8?q?=236972=3A_The_zipfile_module_no_longer_overwrites_files_outsid?= =?utf-8?q?e_of?= Message-ID: <3YyTDx0KCkzSP5@mail.python.org> http://hg.python.org/cpython/rev/0c5fa35c9f12 changeset: 81871:0c5fa35c9f12 branch: 3.2 parent: 81866:349419bb6283 user: Gregory P. Smith date: Fri Feb 01 11:22:43 2013 -0800 summary: Fixes Issue #6972: The zipfile module no longer overwrites files outside of its destination path when extracting malicious zip files. files: Doc/library/zipfile.rst | 17 +++- Lib/test/test_zipfile.py | 86 +++++++++++++++++++++++++-- Lib/zipfile.py | 23 ++++-- Misc/NEWS | 3 + 4 files changed, 106 insertions(+), 23 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -214,6 +214,16 @@ to extract to. *member* can be a filename or a :class:`ZipInfo` object. *pwd* is the password used for encrypted files. + .. 
note:: + + If a member filename is an absolute path, a drive/UNC sharepoint and + leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes + ``foo/bar`` on Unix, and ``?:\foo\bar`` becomes ``foo\bar`` on Windows. + And all ``".."`` components in a member filename will be removed, e.g.: + ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal + characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) + replaced by underscore (``_``). + .. method:: ZipFile.extractall(path=None, members=None, pwd=None) @@ -222,12 +232,9 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. warning:: + .. note:: - Never extract archives from untrusted sources without prior inspection. - It is possible that files are created outside of *path*, e.g. members - that have absolute filenames starting with ``"/"`` or filenames with two - dots ``".."``. + See :meth:`extract` note. .. method:: ZipFile.printdir() diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -29,7 +29,7 @@ SMALL_TEST_DATA = [('_ziptest1', '1q2w3e4r5t'), ('ziptest2dir/_ziptest2', 'qawsedrftg'), - ('/ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), + ('ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), ('ziptest2dir/ziptest3dir/ziptest4dir/_ziptest3', '6y7u8i9o0p')] @@ -409,10 +409,7 @@ writtenfile = zipfp.extract(fpath) # make sure it was written to the right place - if os.path.isabs(fpath): - correctfile = os.path.join(os.getcwd(), fpath[1:]) - else: - correctfile = os.path.join(os.getcwd(), fpath) + correctfile = os.path.join(os.getcwd(), fpath) correctfile = os.path.normpath(correctfile) self.assertEqual(writtenfile, correctfile) @@ -434,10 +431,7 @@ with zipfile.ZipFile(TESTFN2, "r") as zipfp: zipfp.extractall() for fpath, fdata in SMALL_TEST_DATA: - if os.path.isabs(fpath): - outfile = os.path.join(os.getcwd(), fpath[1:]) - else: - outfile = 
os.path.join(os.getcwd(), fpath) + outfile = os.path.join(os.getcwd(), fpath) with open(outfile, "rb") as f: self.assertEqual(fdata.encode(), f.read()) @@ -447,6 +441,80 @@ # remove the test file subdirectories shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir')) + def check_file(self, filename, content): + self.assertTrue(os.path.isfile(filename)) + with open(filename, 'rb') as f: + self.assertEqual(f.read(), content) + + def test_extract_hackers_arcnames(self): + hacknames = [ + ('../foo/bar', 'foo/bar'), + ('foo/../bar', 'foo/bar'), + ('foo/../../bar', 'foo/bar'), + ('foo/bar/..', 'foo/bar'), + ('./../foo/bar', 'foo/bar'), + ('/foo/bar', 'foo/bar'), + ('/foo/../bar', 'foo/bar'), + ('/foo/../../bar', 'foo/bar'), + ('//foo/bar', 'foo/bar'), + ('../../foo../../ba..r', 'foo../ba..r'), + ] + if os.path.sep == '\\': # Windows. + hacknames.extend([ + (r'..\foo\bar', 'foo/bar'), + (r'..\/foo\/bar', 'foo/bar'), + (r'foo/\..\/bar', 'foo/bar'), + (r'foo\/../\bar', 'foo/bar'), + (r'C:foo/bar', 'foo/bar'), + (r'C:/foo/bar', 'foo/bar'), + (r'C://foo/bar', 'foo/bar'), + (r'C:\foo\bar', 'foo/bar'), + (r'//conky/mountpoint/foo/bar', 'foo/bar'), + (r'\\conky\mountpoint\foo\bar', 'foo/bar'), + (r'///conky/mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\\conky\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//conky//mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\conky\\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//?/C:/foo/bar', 'foo/bar'), + (r'\\?\C:\foo\bar', 'foo/bar'), + (r'C:/../C:/foo/bar', 'C_/foo/bar'), + (r'a:b\ce|f"g?h*i', 'b/c_d_e_f_g_h_i'), + ]) + + for arcname, fixedname in hacknames: + content = b'foobar' + arcname.encode() + with zipfile.ZipFile(TESTFN2, 'w', zipfile.ZIP_STORED) as zipfp: + zipfp.writestr(arcname, content) + + targetpath = os.path.join('target', 'subdir', 'subsub') + correctfile = os.path.join(targetpath, *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = 
zipfp.extract(arcname, targetpath) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree('target') + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall(targetpath) + self.check_file(correctfile, content) + shutil.rmtree('target') + + correctfile = os.path.join(os.getcwd(), *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall() + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + os.remove(TESTFN2) + def test_writestr_compression(self): zipfp = zipfile.ZipFile(TESTFN2, "w") zipfp.writestr("a.txt", "hello world", compress_type=zipfile.ZIP_STORED) diff --git a/Lib/zipfile.py b/Lib/zipfile.py --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@ -1062,17 +1062,22 @@ """ # build the destination pathname, replacing # forward slashes to platform specific separators. - # Strip trailing path separator, unless it represents the root. - if (targetpath[-1:] in (os.path.sep, os.path.altsep) - and len(os.path.splitdrive(targetpath)[1]) > 1): - targetpath = targetpath[:-1] + arcname = member.filename.replace('/', os.path.sep) - # don't include leading "/" from file name if present - if member.filename[0] == '/': - targetpath = os.path.join(targetpath, member.filename[1:]) - else: - targetpath = os.path.join(targetpath, member.filename) + if os.path.altsep: + arcname = arcname.replace(os.path.altsep, os.path.sep) + # interpret absolute pathname as relative, remove drive letter or + # UNC path, redundant separators, "." and ".." components. 
+ arcname = os.path.splitdrive(arcname)[1] + arcname = os.path.sep.join(x for x in arcname.split(os.path.sep) + if x not in ('', os.path.curdir, os.path.pardir)) + # filter illegal characters on Windows + if os.path.sep == '\\': + illegal = ':<>|"?*' + table = str.maketrans(illegal, '_' * len(illegal)) + arcname = arcname.translate(table) + targetpath = os.path.join(targetpath, arcname) targetpath = os.path.normpath(targetpath) # Create all upper directories if necessary. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -216,6 +216,9 @@ Library ------- +- Issue #6972: The zipfile module no longer overwrites files outside of + its destination path when extracting malicious zip files. + - Issue #4844: ZipFile now raises BadZipFile when opens a ZIP file with an incomplete "End of Central Directory" record. Original patch by Guilherme Polo and Alan McIntyre. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 20:40:34 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 20:40:34 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fixes_Issue_=236972=3A_The_zipfile_module_no_longer_overwrites?= =?utf-8?q?_files_outside_of?= Message-ID: <3YyTDy4hrxzSS8@mail.python.org> http://hg.python.org/cpython/rev/483488a1dec5 changeset: 81872:483488a1dec5 branch: 3.3 parent: 81869:05747d3bcd9c parent: 81871:0c5fa35c9f12 user: Gregory P. Smith date: Fri Feb 01 11:31:31 2013 -0800 summary: Fixes Issue #6972: The zipfile module no longer overwrites files outside of its destination path when extracting malicious zip files. 
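The arcname sanitization added to `ZipFile` above can be condensed into a standalone sketch. This is a simplified, Unix-style approximation handling only `/`-separated names; the committed code also normalizes `os.path.altsep` and replaces Windows-illegal characters:

```python
import os

def sanitize_arcname(arcname):
    # Drop any drive/UNC prefix, then discard empty, '.' and '..'
    # components, as the updated zipfile docs describe.
    arcname = os.path.splitdrive(arcname)[1]
    parts = [p for p in arcname.split('/')
             if p not in ('', os.path.curdir, os.path.pardir)]
    return '/'.join(parts)

assert sanitize_arcname('../foo/bar') == 'foo/bar'
assert sanitize_arcname('/foo/../bar') == 'foo/bar'
assert sanitize_arcname('../../foo../../ba..r') == 'foo../ba..r'
```

The test names mirror the `hacknames` cases added in `test_zipfile.py`: traversal components are stripped rather than rejected, so a malicious archive can no longer write outside the destination path.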
files: Doc/library/zipfile.rst | 17 +++- Lib/test/test_zipfile.py | 86 +++++++++++++++++++++++++-- Lib/zipfile.py | 23 ++++-- Misc/NEWS | 3 + 4 files changed, 106 insertions(+), 23 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -242,6 +242,16 @@ to extract to. *member* can be a filename or a :class:`ZipInfo` object. *pwd* is the password used for encrypted files. + .. note:: + + If a member filename is an absolute path, a drive/UNC sharepoint and + leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes + ``foo/bar`` on Unix, and ``?:\foo\bar`` becomes ``foo\bar`` on Windows. + And all ``".."`` components in a member filename will be removed, e.g.: + ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal + characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) + replaced by underscore (``_``). + .. method:: ZipFile.extractall(path=None, members=None, pwd=None) @@ -250,12 +260,9 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. warning:: + .. note:: - Never extract archives from untrusted sources without prior inspection. - It is possible that files are created outside of *path*, e.g. members - that have absolute filenames starting with ``"/"`` or filenames with two - dots ``".."``. + See :meth:`extract` note. .. 
method:: ZipFile.printdir() diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -24,7 +24,7 @@ SMALL_TEST_DATA = [('_ziptest1', '1q2w3e4r5t'), ('ziptest2dir/_ziptest2', 'qawsedrftg'), - ('/ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), + ('ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), ('ziptest2dir/ziptest3dir/ziptest4dir/_ziptest3', '6y7u8i9o0p')] @@ -501,10 +501,7 @@ writtenfile = zipfp.extract(fpath) # make sure it was written to the right place - if os.path.isabs(fpath): - correctfile = os.path.join(os.getcwd(), fpath[1:]) - else: - correctfile = os.path.join(os.getcwd(), fpath) + correctfile = os.path.join(os.getcwd(), fpath) correctfile = os.path.normpath(correctfile) self.assertEqual(writtenfile, correctfile) @@ -526,10 +523,7 @@ with zipfile.ZipFile(TESTFN2, "r") as zipfp: zipfp.extractall() for fpath, fdata in SMALL_TEST_DATA: - if os.path.isabs(fpath): - outfile = os.path.join(os.getcwd(), fpath[1:]) - else: - outfile = os.path.join(os.getcwd(), fpath) + outfile = os.path.join(os.getcwd(), fpath) with open(outfile, "rb") as f: self.assertEqual(fdata.encode(), f.read()) @@ -539,6 +533,80 @@ # remove the test file subdirectories shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir')) + def check_file(self, filename, content): + self.assertTrue(os.path.isfile(filename)) + with open(filename, 'rb') as f: + self.assertEqual(f.read(), content) + + def test_extract_hackers_arcnames(self): + hacknames = [ + ('../foo/bar', 'foo/bar'), + ('foo/../bar', 'foo/bar'), + ('foo/../../bar', 'foo/bar'), + ('foo/bar/..', 'foo/bar'), + ('./../foo/bar', 'foo/bar'), + ('/foo/bar', 'foo/bar'), + ('/foo/../bar', 'foo/bar'), + ('/foo/../../bar', 'foo/bar'), + ('//foo/bar', 'foo/bar'), + ('../../foo../../ba..r', 'foo../ba..r'), + ] + if os.path.sep == '\\': # Windows. 
+ hacknames.extend([ + (r'..\foo\bar', 'foo/bar'), + (r'..\/foo\/bar', 'foo/bar'), + (r'foo/\..\/bar', 'foo/bar'), + (r'foo\/../\bar', 'foo/bar'), + (r'C:foo/bar', 'foo/bar'), + (r'C:/foo/bar', 'foo/bar'), + (r'C://foo/bar', 'foo/bar'), + (r'C:\foo\bar', 'foo/bar'), + (r'//conky/mountpoint/foo/bar', 'foo/bar'), + (r'\\conky\mountpoint\foo\bar', 'foo/bar'), + (r'///conky/mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\\conky\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//conky//mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\conky\\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//?/C:/foo/bar', 'foo/bar'), + (r'\\?\C:\foo\bar', 'foo/bar'), + (r'C:/../C:/foo/bar', 'C_/foo/bar'), + (r'a:b\c<d>e|f"g?h*i', 'b/c_d_e_f_g_h_i'), + ]) + + for arcname, fixedname in hacknames: + content = b'foobar' + arcname.encode() + with zipfile.ZipFile(TESTFN2, 'w', zipfile.ZIP_STORED) as zipfp: + zipfp.writestr(arcname, content) + + targetpath = os.path.join('target', 'subdir', 'subsub') + correctfile = os.path.join(targetpath, *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname, targetpath) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree('target') + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall(targetpath) + self.check_file(correctfile, content) + shutil.rmtree('target') + + correctfile = os.path.join(os.getcwd(), *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall() + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + os.remove(TESTFN2) + def test_writestr_compression_stored(self): zipfp = zipfile.ZipFile(TESTFN2, "w") zipfp.writestr("a.txt",
"hello world", compress_type=zipfile.ZIP_STORED) diff --git a/Lib/zipfile.py b/Lib/zipfile.py --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@ -1229,17 +1229,22 @@ """ # build the destination pathname, replacing # forward slashes to platform specific separators. - # Strip trailing path separator, unless it represents the root. - if (targetpath[-1:] in (os.path.sep, os.path.altsep) - and len(os.path.splitdrive(targetpath)[1]) > 1): - targetpath = targetpath[:-1] + arcname = member.filename.replace('/', os.path.sep) - # don't include leading "/" from file name if present - if member.filename[0] == '/': - targetpath = os.path.join(targetpath, member.filename[1:]) - else: - targetpath = os.path.join(targetpath, member.filename) + if os.path.altsep: + arcname = arcname.replace(os.path.altsep, os.path.sep) + # interpret absolute pathname as relative, remove drive letter or + # UNC path, redundant separators, "." and ".." components. + arcname = os.path.splitdrive(arcname)[1] + arcname = os.path.sep.join(x for x in arcname.split(os.path.sep) + if x not in ('', os.path.curdir, os.path.pardir)) + # filter illegal characters on Windows + if os.path.sep == '\\': + illegal = ':<>|"?*' + table = str.maketrans(illegal, '_' * len(illegal)) + arcname = arcname.translate(table) + targetpath = os.path.join(targetpath, arcname) targetpath = os.path.normpath(targetpath) # Create all upper directories if necessary. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -167,6 +167,9 @@ Library ------- +- Issue #6972: The zipfile module no longer overwrites files outside of + its destination path when extracting malicious zip files. + - Issue #4844: ZipFile now raises BadZipFile when opens a ZIP file with an incomplete "End of Central Directory" record. Original patch by Guilherme Polo and Alan McIntyre. 
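For readers following the zipfile change without the surrounding file context, the sanitization that the patch adds to ``ZipFile._extract_member`` can be sketched as a standalone function. This is a simplified reconstruction, not the stdlib code itself; ``sanitize_arcname`` is a name invented here, and the separators default to POSIX values so the demonstration is deterministic:

```python
import os

def sanitize_arcname(arcname, sep='/', altsep=None):
    """Reduce an archive member name to a safe relative path.

    Simplified sketch of the Issue #6972 approach: normalise separators,
    drop any drive letter or UNC root, then discard empty, '.' and '..'
    components so the result cannot climb out of the extraction target.
    """
    arcname = arcname.replace('/', sep)
    if altsep:
        arcname = arcname.replace(altsep, sep)
    # os.path.splitdrive() is a no-op on POSIX; on Windows it strips
    # 'C:' and '\\server\share' prefixes.
    arcname = os.path.splitdrive(arcname)[1]
    return sep.join(part for part in arcname.split(sep)
                    if part not in ('', '.', '..'))
```

For example, ``sanitize_arcname('/foo/../../bar')`` returns ``'foo/bar'`` and ``sanitize_arcname('../../foo../../ba..r')`` returns ``'foo../ba..r'``, matching the expectations in the new ``test_extract_hackers_arcnames`` table.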
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 20:40:36 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 20:40:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Fixes_Issue_=236972=3A_The_zipfile_module_no_longer_over?= =?utf-8?q?writes_files_outside_of?= Message-ID: <3YyTF01bfrzSP5@mail.python.org> http://hg.python.org/cpython/rev/249e0b47b686 changeset: 81873:249e0b47b686 parent: 81870:1f1a1b3cc416 parent: 81872:483488a1dec5 user: Gregory P. Smith date: Fri Feb 01 11:35:00 2013 -0800 summary: Fixes Issue #6972: The zipfile module no longer overwrites files outside of its destination path when extracting malicious zip files. files: Doc/library/zipfile.rst | 17 +++- Lib/test/test_zipfile.py | 86 +++++++++++++++++++++++++-- Lib/zipfile.py | 23 ++++-- Misc/NEWS | 3 + 4 files changed, 106 insertions(+), 23 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -242,6 +242,16 @@ to extract to. *member* can be a filename or a :class:`ZipInfo` object. *pwd* is the password used for encrypted files. + .. note:: + + If a member filename is an absolute path, a drive/UNC sharepoint and + leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes + ``foo/bar`` on Unix, and ``?:\foo\bar`` becomes ``foo\bar`` on Windows. + And all ``".."`` components in a member filename will be removed, e.g.: + ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal + characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) + replaced by underscore (``_``). + .. method:: ZipFile.extractall(path=None, members=None, pwd=None) @@ -250,12 +260,9 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. warning:: + .. note:: - Never extract archives from untrusted sources without prior inspection. 
- It is possible that files are created outside of *path*, e.g. members - that have absolute filenames starting with ``"/"`` or filenames with two - dots ``".."``. + See :meth:`extract` note. .. method:: ZipFile.printdir() diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -24,7 +24,7 @@ SMALL_TEST_DATA = [('_ziptest1', '1q2w3e4r5t'), ('ziptest2dir/_ziptest2', 'qawsedrftg'), - ('/ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), + ('ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), ('ziptest2dir/ziptest3dir/ziptest4dir/_ziptest3', '6y7u8i9o0p')] @@ -501,10 +501,7 @@ writtenfile = zipfp.extract(fpath) # make sure it was written to the right place - if os.path.isabs(fpath): - correctfile = os.path.join(os.getcwd(), fpath[1:]) - else: - correctfile = os.path.join(os.getcwd(), fpath) + correctfile = os.path.join(os.getcwd(), fpath) correctfile = os.path.normpath(correctfile) self.assertEqual(writtenfile, correctfile) @@ -526,10 +523,7 @@ with zipfile.ZipFile(TESTFN2, "r") as zipfp: zipfp.extractall() for fpath, fdata in SMALL_TEST_DATA: - if os.path.isabs(fpath): - outfile = os.path.join(os.getcwd(), fpath[1:]) - else: - outfile = os.path.join(os.getcwd(), fpath) + outfile = os.path.join(os.getcwd(), fpath) with open(outfile, "rb") as f: self.assertEqual(fdata.encode(), f.read()) @@ -539,6 +533,80 @@ # remove the test file subdirectories shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir')) + def check_file(self, filename, content): + self.assertTrue(os.path.isfile(filename)) + with open(filename, 'rb') as f: + self.assertEqual(f.read(), content) + + def test_extract_hackers_arcnames(self): + hacknames = [ + ('../foo/bar', 'foo/bar'), + ('foo/../bar', 'foo/bar'), + ('foo/../../bar', 'foo/bar'), + ('foo/bar/..', 'foo/bar'), + ('./../foo/bar', 'foo/bar'), + ('/foo/bar', 'foo/bar'), + ('/foo/../bar', 'foo/bar'), + ('/foo/../../bar', 'foo/bar'), + ('//foo/bar', 'foo/bar'), + 
('../../foo../../ba..r', 'foo../ba..r'), + ] + if os.path.sep == '\\': # Windows. + hacknames.extend([ + (r'..\foo\bar', 'foo/bar'), + (r'..\/foo\/bar', 'foo/bar'), + (r'foo/\..\/bar', 'foo/bar'), + (r'foo\/../\bar', 'foo/bar'), + (r'C:foo/bar', 'foo/bar'), + (r'C:/foo/bar', 'foo/bar'), + (r'C://foo/bar', 'foo/bar'), + (r'C:\foo\bar', 'foo/bar'), + (r'//conky/mountpoint/foo/bar', 'foo/bar'), + (r'\\conky\mountpoint\foo\bar', 'foo/bar'), + (r'///conky/mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\\conky\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//conky//mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\conky\\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//?/C:/foo/bar', 'foo/bar'), + (r'\\?\C:\foo\bar', 'foo/bar'), + (r'C:/../C:/foo/bar', 'C_/foo/bar'), + (r'a:b\c<d>e|f"g?h*i', 'b/c_d_e_f_g_h_i'), + ]) + + for arcname, fixedname in hacknames: + content = b'foobar' + arcname.encode() + with zipfile.ZipFile(TESTFN2, 'w', zipfile.ZIP_STORED) as zipfp: + zipfp.writestr(arcname, content) + + targetpath = os.path.join('target', 'subdir', 'subsub') + correctfile = os.path.join(targetpath, *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname, targetpath) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree('target') + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall(targetpath) + self.check_file(correctfile, content) + shutil.rmtree('target') + + correctfile = os.path.join(os.getcwd(), *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall() + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + os.remove(TESTFN2) + def
test_writestr_compression_stored(self): zipfp = zipfile.ZipFile(TESTFN2, "w") zipfp.writestr("a.txt", "hello world", compress_type=zipfile.ZIP_STORED) diff --git a/Lib/zipfile.py b/Lib/zipfile.py --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@ -1229,17 +1229,22 @@ """ # build the destination pathname, replacing # forward slashes to platform specific separators. - # Strip trailing path separator, unless it represents the root. - if (targetpath[-1:] in (os.path.sep, os.path.altsep) - and len(os.path.splitdrive(targetpath)[1]) > 1): - targetpath = targetpath[:-1] + arcname = member.filename.replace('/', os.path.sep) - # don't include leading "/" from file name if present - if member.filename[0] == '/': - targetpath = os.path.join(targetpath, member.filename[1:]) - else: - targetpath = os.path.join(targetpath, member.filename) + if os.path.altsep: + arcname = arcname.replace(os.path.altsep, os.path.sep) + # interpret absolute pathname as relative, remove drive letter or + # UNC path, redundant separators, "." and ".." components. + arcname = os.path.splitdrive(arcname)[1] + arcname = os.path.sep.join(x for x in arcname.split(os.path.sep) + if x not in ('', os.path.curdir, os.path.pardir)) + # filter illegal characters on Windows + if os.path.sep == '\\': + illegal = ':<>|"?*' + table = str.maketrans(illegal, '_' * len(illegal)) + arcname = arcname.translate(table) + targetpath = os.path.join(targetpath, arcname) targetpath = os.path.normpath(targetpath) # Create all upper directories if necessary. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -239,6 +239,9 @@ Library ------- +- Issue #6972: The zipfile module no longer overwrites files outside of + its destination path when extracting malicious zip files. + - Issue #4844: ZipFile now raises BadZipFile when opens a ZIP file with an incomplete "End of Central Directory" record. Original patch by Guilherme Polo and Alan McIntyre. 
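The new ``test_extract_hackers_arcnames`` exercises this behaviour end to end inside the test suite. A minimal standalone containment check, written here against the public ``zipfile`` API (the file names ``demo_hostile.zip`` and ``demo_target`` are chosen for illustration only), looks like:

```python
import os
import shutil
import zipfile

def extract_is_contained(arcname, target='demo_target'):
    """Write one hostile member, extract it, and report whether the
    written file stayed inside the target directory."""
    zippath = 'demo_hostile.zip'
    with zipfile.ZipFile(zippath, 'w', zipfile.ZIP_STORED) as zf:
        # writestr() does not sanitize names, so the hostile arcname is
        # stored verbatim in the archive.
        zf.writestr(arcname, b'payload')
    try:
        with zipfile.ZipFile(zippath) as zf:
            written = zf.extract(arcname, target)
        root = os.path.abspath(target)
        return os.path.abspath(written).startswith(root + os.sep)
    finally:
        os.remove(zippath)
        shutil.rmtree(target, ignore_errors=True)
```

With the fix applied, ``extract_is_contained('../escape.txt')`` is true: the ``..`` component is discarded during extraction and the payload lands inside the target directory.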
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 20:40:37 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 20:40:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fixes_Issue_?= =?utf-8?q?=236972=3A_The_zipfile_module_no_longer_overwrites_files_outsid?= =?utf-8?q?e_of?= Message-ID: <3YyTF15fKyzSS3@mail.python.org> http://hg.python.org/cpython/rev/4d1948689ee1 changeset: 81874:4d1948689ee1 branch: 2.7 parent: 81865:6074530b526f user: Gregory P. Smith date: Fri Feb 01 11:40:18 2013 -0800 summary: Fixes Issue #6972: The zipfile module no longer overwrites files outside of its destination path when extracting malicious zip files. files: Doc/library/zipfile.rst | 10 +++ Lib/test/test_zipfile.py | 86 +++++++++++++++++++++++++-- Lib/zipfile.py | 23 ++++-- Misc/NEWS | 3 + 4 files changed, 104 insertions(+), 18 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -242,6 +242,16 @@ .. versionadded:: 2.6 + .. note:: + + If a member filename is an absolute path, a drive/UNC sharepoint and + leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes + ``foo/bar`` on Unix, and ``?:\foo\bar`` becomes ``foo\bar`` on Windows. + And all ``".."`` components in a member filename will be removed, e.g.: + ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal + characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) + replaced by underscore (``_``). + .. 
method:: ZipFile.read(name[, pwd]) diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -26,7 +26,7 @@ SMALL_TEST_DATA = [('_ziptest1', '1q2w3e4r5t'), ('ziptest2dir/_ziptest2', 'qawsedrftg'), - ('/ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), + ('ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), ('ziptest2dir/ziptest3dir/ziptest4dir/_ziptest3', '6y7u8i9o0p')] @@ -391,10 +391,7 @@ writtenfile = zipfp.extract(fpath) # make sure it was written to the right place - if os.path.isabs(fpath): - correctfile = os.path.join(os.getcwd(), fpath[1:]) - else: - correctfile = os.path.join(os.getcwd(), fpath) + correctfile = os.path.join(os.getcwd(), fpath) correctfile = os.path.normpath(correctfile) self.assertEqual(writtenfile, correctfile) @@ -414,10 +411,7 @@ with zipfile.ZipFile(TESTFN2, "r") as zipfp: zipfp.extractall() for fpath, fdata in SMALL_TEST_DATA: - if os.path.isabs(fpath): - outfile = os.path.join(os.getcwd(), fpath[1:]) - else: - outfile = os.path.join(os.getcwd(), fpath) + outfile = os.path.join(os.getcwd(), fpath) self.assertEqual(fdata, open(outfile, "rb").read()) os.remove(outfile) @@ -425,6 +419,80 @@ # remove the test file subdirectories shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir')) + def check_file(self, filename, content): + self.assertTrue(os.path.isfile(filename)) + with open(filename, 'rb') as f: + self.assertEqual(f.read(), content) + + def test_extract_hackers_arcnames(self): + hacknames = [ + ('../foo/bar', 'foo/bar'), + ('foo/../bar', 'foo/bar'), + ('foo/../../bar', 'foo/bar'), + ('foo/bar/..', 'foo/bar'), + ('./../foo/bar', 'foo/bar'), + ('/foo/bar', 'foo/bar'), + ('/foo/../bar', 'foo/bar'), + ('/foo/../../bar', 'foo/bar'), + ('//foo/bar', 'foo/bar'), + ('../../foo../../ba..r', 'foo../ba..r'), + ] + if os.path.sep == '\\': + hacknames.extend([ + (r'..\foo\bar', 'foo/bar'), + (r'..\/foo\/bar', 'foo/bar'), + (r'foo/\..\/bar', 'foo/bar'), + (r'foo\/../\bar', 
'foo/bar'), + (r'C:foo/bar', 'foo/bar'), + (r'C:/foo/bar', 'foo/bar'), + (r'C://foo/bar', 'foo/bar'), + (r'C:\foo\bar', 'foo/bar'), + (r'//conky/mountpoint/foo/bar', 'foo/bar'), + (r'\\conky\mountpoint\foo\bar', 'foo/bar'), + (r'///conky/mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\\conky\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//conky//mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\conky\\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//?/C:/foo/bar', 'foo/bar'), + (r'\\?\C:\foo\bar', 'foo/bar'), + (r'C:/../C:/foo/bar', 'C_/foo/bar'), + (r'a:b\c<d>e|f"g?h*i', 'b/c_d_e_f_g_h_i'), + ]) + + for arcname, fixedname in hacknames: + content = b'foobar' + arcname.encode() + with zipfile.ZipFile(TESTFN2, 'w', zipfile.ZIP_STORED) as zipfp: + zipfp.writestr(arcname, content) + + targetpath = os.path.join('target', 'subdir', 'subsub') + correctfile = os.path.join(targetpath, *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname, targetpath) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree('target') + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall(targetpath) + self.check_file(correctfile, content) + shutil.rmtree('target') + + correctfile = os.path.join(os.getcwd(), *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall() + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + os.remove(TESTFN2) + def test_writestr_compression(self): zipfp = zipfile.ZipFile(TESTFN2, "w") zipfp.writestr("a.txt", "hello world", compress_type=zipfile.ZIP_STORED) diff --git a/Lib/zipfile.py b/Lib/zipfile.py --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@
-1040,17 +1040,22 @@ """ # build the destination pathname, replacing # forward slashes to platform specific separators. - # Strip trailing path separator, unless it represents the root. - if (targetpath[-1:] in (os.path.sep, os.path.altsep) - and len(os.path.splitdrive(targetpath)[1]) > 1): - targetpath = targetpath[:-1] + arcname = member.filename.replace('/', os.path.sep) - # don't include leading "/" from file name if present - if member.filename[0] == '/': - targetpath = os.path.join(targetpath, member.filename[1:]) - else: - targetpath = os.path.join(targetpath, member.filename) + if os.path.altsep: + arcname = arcname.replace(os.path.altsep, os.path.sep) + # interpret absolute pathname as relative, remove drive letter or + # UNC path, redundant separators, "." and ".." components. + arcname = os.path.splitdrive(arcname)[1] + arcname = os.path.sep.join(x for x in arcname.split(os.path.sep) + if x not in ('', os.path.curdir, os.path.pardir)) + # filter illegal characters on Windows + if os.path.sep == '\\': + illegal = ':<>|"?*' + table = str.maketrans(illegal, '_' * len(illegal)) + arcname = arcname.translate(table) + targetpath = os.path.join(targetpath, arcname) targetpath = os.path.normpath(targetpath) # Create all upper directories if necessary. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,9 @@ Library ------- +- Issue #6972: The zipfile module no longer overwrites files outside of + its destination path when extracting malicious zip files. + - Issue #17049: Localized calendar methods now return unicode if a locale includes an encoding and the result string contains month or weekday (was regression from Python 2.6). 
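The lines removed by this 2.7 backport were the vulnerable path: joining the raw member name onto ``targetpath`` and normalising the result lets ``..`` components walk out of the destination. The hazard is easy to reproduce in two lines (``posixpath`` is used explicitly so the demonstration behaves the same on any platform):

```python
import posixpath

# The pre-fix extraction effectively did join + normpath on the raw
# member name; '..' components then escape the destination directory.
target = 'target'
member = '../../etc/passwd'
escaped = posixpath.normpath(posixpath.join(target, member))
print(escaped)  # -> ../etc/passwd, outside 'target' entirely
```

This is why the fix filters the member name into safe components before joining, instead of trusting ``normpath`` to keep the result contained.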
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:12:26 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 22:12:26 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Additional_fix?= =?utf-8?q?_for_Issue_=2312268=3A_The_io_module_file_object_writelines=28?= =?utf-8?q?=29_methods?= Message-ID: <3YyWGy32sqzSP4@mail.python.org> http://hg.python.org/cpython/rev/a5e7b38caee2 changeset: 81875:a5e7b38caee2 branch: 2.7 user: Gregory P. Smith date: Fri Feb 01 13:02:59 2013 -0800 summary: Additional fix for Issue #12268: The io module file object writelines() methods no longer abort early when one of its write system calls is interrupted (EINTR). files: Misc/NEWS | 3 +++ Modules/_io/iobase.c | 5 ++++- Modules/_io/textio.c | 7 +++++-- 3 files changed, 12 insertions(+), 3 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -679,6 +679,9 @@ Extension Modules ----------------- +- Issue #12268: The io module file object writelines() methods no longer + abort early when one of its write system calls is interrupted (EINTR). + - Fix the leak of a dict in the time module when used in an embedded interpreter that is repeatedly initialized and shutdown and reinitialized. 
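This commit's ``iobase.c`` and ``textio.c`` hunks wrap each write call in a do/while loop that retries while ``_PyIO_trap_eintr()`` reports EINTR. The same pattern expressed in Python terms is shown below; ``retry_on_eintr`` and ``flaky_write`` are names invented for this sketch, and note that PEP 475 later made such loops unnecessary in Python 3.5+:

```python
import errno

def retry_on_eintr(call, *args):
    """Retry *call* while it fails with EINTR, mirroring the do/while
    loop the commit adds around the C-level write() calls."""
    while True:
        try:
            return call(*args)
        except OSError as exc:
            if exc.errno != errno.EINTR:
                raise

# Demo: a fake write that is interrupted twice before succeeding.
attempts = []
def flaky_write(data):
    attempts.append(data)
    if len(attempts) < 3:
        raise OSError(errno.EINTR, 'Interrupted system call')
    return len(data)

written = retry_on_eintr(flaky_write, b'hello')
```

Without the retry loop, the first EINTR would abort ``writelines()`` mid-stream; with it, the interrupted call is simply re-issued until it completes or fails with a real error.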
diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c --- a/Modules/_io/iobase.c +++ b/Modules/_io/iobase.c @@ -660,7 +660,10 @@ break; /* Stop Iteration */ } - res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + res = NULL; + do { + res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + } while (res == NULL && _PyIO_trap_eintr()); Py_DECREF(line); if (res == NULL) { Py_DECREF(iter); diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -1213,8 +1213,11 @@ Py_DECREF(pending); if (b == NULL) return -1; - ret = PyObject_CallMethodObjArgs(self->buffer, - _PyIO_str_write, b, NULL); + ret = NULL; + do { + ret = PyObject_CallMethodObjArgs(self->buffer, + _PyIO_str_write, b, NULL); + } while (ret == NULL && _PyIO_trap_eintr()); Py_DECREF(b); if (ret == NULL) return -1; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:12:27 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 22:12:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Additional_fix?= =?utf-8?q?_for_Issue_=2312268=3A_The_io_module_file_object_writelines=28?= =?utf-8?q?=29_methods?= Message-ID: <3YyWGz65xZzSSC@mail.python.org> http://hg.python.org/cpython/rev/2fd669aa4abc changeset: 81876:2fd669aa4abc branch: 3.2 parent: 81871:0c5fa35c9f12 user: Gregory P. Smith date: Fri Feb 01 13:03:39 2013 -0800 summary: Additional fix for Issue #12268: The io module file object writelines() methods no longer abort early when one of its write system calls is interrupted (EINTR). 
files: Misc/NEWS | 3 +++ Modules/_io/iobase.c | 5 ++++- Modules/_io/textio.c | 7 +++++-- 3 files changed, 12 insertions(+), 3 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -817,6 +817,9 @@ Extension Modules ----------------- +- Issue #12268: The io module file object writelines() methods no longer + abort early when one of its write system calls is interrupted (EINTR). + - Fix the leak of a dict in the time module when used in an embedded interpreter that is repeatedly initialized and shutdown and reinitialized. diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c --- a/Modules/_io/iobase.c +++ b/Modules/_io/iobase.c @@ -674,7 +674,10 @@ break; /* Stop Iteration */ } - res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + res = NULL; + do { + res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + } while (res == NULL && _PyIO_trap_eintr()); Py_DECREF(line); if (res == NULL) { Py_DECREF(iter); diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -1249,8 +1249,11 @@ Py_DECREF(pending); if (b == NULL) return -1; - ret = PyObject_CallMethodObjArgs(self->buffer, - _PyIO_str_write, b, NULL); + ret = NULL; + do { + ret = PyObject_CallMethodObjArgs(self->buffer, + _PyIO_str_write, b, NULL); + } while (ret == NULL && _PyIO_trap_eintr()); Py_DECREF(b); if (ret == NULL) return -1; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:12:29 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 22:12:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_better_correct?= =?utf-8?q?ed_news_entry?= Message-ID: <3YyWH11vkczST6@mail.python.org> http://hg.python.org/cpython/rev/81f7bdf7bbb6 changeset: 81877:81f7bdf7bbb6 branch: 3.2 user: Gregory P. 
Smith date: Fri Feb 01 13:06:44 2013 -0800 summary: better corrected news entry files: Misc/NEWS | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -817,8 +817,8 @@ Extension Modules ----------------- -- Issue #12268: The io module file object writelines() methods no longer - abort early when one of its write system calls is interrupted (EINTR). +- Issue #12268: The io module file object write methods no longer abort early + when one of its write system calls is interrupted (EINTR). - Fix the leak of a dict in the time module when used in an embedded interpreter that is repeatedly initialized and shutdown and reinitialized. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:12:30 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 22:12:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_better_news_en?= =?utf-8?q?try?= Message-ID: <3YyWH24gkwzSTF@mail.python.org> http://hg.python.org/cpython/rev/c8f8708d509a changeset: 81878:c8f8708d509a branch: 2.7 parent: 81875:a5e7b38caee2 user: Gregory P. Smith date: Fri Feb 01 13:07:27 2013 -0800 summary: better news entry files: Misc/NEWS | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -679,8 +679,8 @@ Extension Modules ----------------- -- Issue #12268: The io module file object writelines() methods no longer - abort early when one of its write system calls is interrupted (EINTR). +- Issue #12268: The io module file object write methods no longer abort early + when a write system calls is interrupted (EINTR). - Fix the leak of a dict in the time module when used in an embedded interpreter that is repeatedly initialized and shutdown and reinitialized. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:12:32 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 22:12:32 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Additional_fix_for_issue_=2312268=3A_The_io_module_file_object?= =?utf-8?q?_write_methods_no?= Message-ID: <3YyWH40K45zSSG@mail.python.org> http://hg.python.org/cpython/rev/30fc620e240e changeset: 81879:30fc620e240e branch: 3.3 parent: 81872:483488a1dec5 parent: 81876:2fd669aa4abc user: Gregory P. Smith date: Fri Feb 01 13:08:23 2013 -0800 summary: Additional fix for issue #12268: The io module file object write methods no longer abort early when a write system call is interrupted (EINTR). files: Misc/NEWS | 6 ++++++ Modules/_io/iobase.c | 5 ++++- Modules/_io/textio.c | 7 +++++-- 3 files changed, 15 insertions(+), 3 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -506,6 +506,12 @@ - Issue #15906: Fix a regression in `argparse` caused by the preceding change, when ``action='append'``, ``type='str'`` and ``default=[]``. +Extension Modules +----------------- + +- Issue #12268: The io module file object write methods no longer abort early + when one of its write system calls is interrupted (EINTR). 
+ Tests ----- diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c --- a/Modules/_io/iobase.c +++ b/Modules/_io/iobase.c @@ -669,7 +669,10 @@ break; /* Stop Iteration */ } - res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + res = NULL; + do { + res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + } while (res == NULL && _PyIO_trap_eintr()); Py_DECREF(line); if (res == NULL) { Py_DECREF(iter); diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -1247,8 +1247,11 @@ Py_DECREF(pending); if (b == NULL) return -1; - ret = PyObject_CallMethodObjArgs(self->buffer, - _PyIO_str_write, b, NULL); + ret = NULL; + do { + ret = PyObject_CallMethodObjArgs(self->buffer, + _PyIO_str_write, b, NULL); + } while (ret == NULL && _PyIO_trap_eintr()); Py_DECREF(b); if (ret == NULL) return -1; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:12:33 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 22:12:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_null_merge?= Message-ID: <3YyWH531HvzST2@mail.python.org> http://hg.python.org/cpython/rev/29e3aa7f2f4b changeset: 81880:29e3aa7f2f4b branch: 3.3 parent: 81879:30fc620e240e parent: 81877:81f7bdf7bbb6 user: Gregory P. 
Smith date: Fri Feb 01 13:08:51 2013 -0800 summary: null merge files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:12:34 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 1 Feb 2013 22:12:34 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Additional_fix_for_issue_=2312268=3A_The_io_module_file_?= =?utf-8?q?object_write_methods_no?= Message-ID: <3YyWH65ldVzSTN@mail.python.org> http://hg.python.org/cpython/rev/8f72519fd0e9 changeset: 81881:8f72519fd0e9 parent: 81873:249e0b47b686 parent: 81880:29e3aa7f2f4b user: Gregory P. Smith date: Fri Feb 01 13:10:33 2013 -0800 summary: Additional fix for issue #12268: The io module file object write methods no longer abort early when a write system call is interrupted (EINTR). files: Misc/NEWS | 6 ++++++ Modules/_io/iobase.c | 5 ++++- Modules/_io/textio.c | 7 +++++-- 3 files changed, 15 insertions(+), 3 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -715,6 +715,12 @@ `sha3_256`, `sha3_384` and `sha3_512`. As part of the patch some common code was moved from _hashopenssl.c to hashlib.h. +Extension Modules +----------------- + +- Issue #12268: The io module file object write methods no longer abort early + when one of its write system calls is interrupted (EINTR). 
+ Tests ----- diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c --- a/Modules/_io/iobase.c +++ b/Modules/_io/iobase.c @@ -669,7 +669,10 @@ break; /* Stop Iteration */ } - res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + res = NULL; + do { + res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + } while (res == NULL && _PyIO_trap_eintr()); Py_DECREF(line); if (res == NULL) { Py_DECREF(iter); diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -1247,8 +1247,11 @@ Py_DECREF(pending); if (b == NULL) return -1; - ret = PyObject_CallMethodObjArgs(self->buffer, - _PyIO_str_write, b, NULL); + ret = NULL; + do { + ret = PyObject_CallMethodObjArgs(self->buffer, + _PyIO_str_write, b, NULL); + } while (ret == NULL && _PyIO_trap_eintr()); Py_DECREF(b); if (ret == NULL) return -1; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:38:00 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 22:38:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Add_a_test_for?= =?utf-8?q?_fix_of_issue_=2317098?= Message-ID: <3YyWrS3SZtzSQf@mail.python.org> http://hg.python.org/cpython/rev/4a4688b865ff changeset: 81882:4a4688b865ff branch: 3.3 parent: 81869:05747d3bcd9c user: Brett Cannon date: Fri Feb 01 14:43:59 2013 -0500 summary: Add a test for fix of issue #17098 files: Lib/test/test_importlib/test_api.py | 13 ++++++++++++- 1 files changed, 12 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_importlib/test_api.py b/Lib/test/test_importlib/test_api.py --- a/Lib/test/test_importlib/test_api.py +++ b/Lib/test/test_importlib/test_api.py @@ -4,6 +4,7 @@ from importlib import machinery import sys from test import support +import types import unittest @@ -175,12 +176,22 @@ machinery.FrozenImporter)) +class StartupTests(unittest.TestCase): + + def test_everyone_has___loader__(self): + # Issue 
#17098: all modules should have __loader__ defined. + for name, module in sys.modules.items(): + if isinstance(module, types.ModuleType): + self.assertTrue(hasattr(module, '__loader__'), + '{!r} lacks a __loader__ attribute'.format(name)) + def test_main(): from test.support import run_unittest run_unittest(ImportModuleTests, FindLoaderTests, InvalidateCacheTests, - FrozenImportlibTests) + FrozenImportlibTests, + StartupTests) if __name__ == '__main__': -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:38:01 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 22:38:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_merge_with_3=2E3?= Message-ID: <3YyWrT62xszSSQ@mail.python.org> http://hg.python.org/cpython/rev/370882f297a4 changeset: 81883:370882f297a4 parent: 81870:1f1a1b3cc416 parent: 81882:4a4688b865ff user: Brett Cannon date: Fri Feb 01 14:51:43 2013 -0500 summary: merge with 3.3 files: Lib/test/test_importlib/test_api.py | 17 ++++++++++------- 1 files changed, 10 insertions(+), 7 deletions(-) diff --git a/Lib/test/test_importlib/test_api.py b/Lib/test/test_importlib/test_api.py --- a/Lib/test/test_importlib/test_api.py +++ b/Lib/test/test_importlib/test_api.py @@ -4,6 +4,7 @@ from importlib import machinery import sys from test import support +import types import unittest @@ -175,13 +176,15 @@ machinery.FrozenImporter)) -def test_main(): - from test.support import run_unittest - run_unittest(ImportModuleTests, - FindLoaderTests, - InvalidateCacheTests, - FrozenImportlibTests) +class StartupTests(unittest.TestCase): + + def test_everyone_has___loader__(self): + # Issue #17098: all modules should have __loader__ defined. 
+ for name, module in sys.modules.items(): + if isinstance(module, types.ModuleType): + self.assertTrue(hasattr(module, '__loader__'), + '{!r} lacks a __loader__ attribute'.format(name)) if __name__ == '__main__': - test_main() + unittest.main() -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:38:03 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 22:38:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MDk4?= =?utf-8?q?=3A_Be_more_stringent_of_setting_=5F=5Floader=5F=5F_on_early_im?= =?utf-8?q?ported?= Message-ID: <3YyWrW1hC2zSTB@mail.python.org> http://hg.python.org/cpython/rev/19ea454ccdf7 changeset: 81884:19ea454ccdf7 branch: 3.3 parent: 81882:4a4688b865ff user: Brett Cannon date: Fri Feb 01 15:31:49 2013 -0500 summary: Issue #17098: Be more stringent of setting __loader__ on early imported modules. Also made test more rigorous. files: Lib/importlib/_bootstrap.py | 7 +- Lib/test/test_importlib/test_api.py | 6 + Python/importlib.h | 591 ++++++++------- 3 files changed, 310 insertions(+), 294 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -1704,10 +1704,13 @@ BYTECODE_SUFFIXES = DEBUG_BYTECODE_SUFFIXES module_type = type(sys) - for module in sys.modules.values(): + for name, module in sys.modules.items(): if isinstance(module, module_type): if not hasattr(module, '__loader__'): - module.__loader__ = BuiltinImporter + if name in sys.builtin_module_names: + module.__loader__ = BuiltinImporter + elif _imp.is_frozen(name): + module.__loader__ = FrozenImporter self_module = sys.modules[__name__] for builtin_name in ('_io', '_warnings', 'builtins', 'marshal'): diff --git a/Lib/test/test_importlib/test_api.py b/Lib/test/test_importlib/test_api.py --- a/Lib/test/test_importlib/test_api.py +++ b/Lib/test/test_importlib/test_api.py @@ -184,6 +184,12 @@ if 
isinstance(module, types.ModuleType): self.assertTrue(hasattr(module, '__loader__'), '{!r} lacks a __loader__ attribute'.format(name)) + if name in sys.builtin_module_names: + self.assertEqual(importlib.machinery.BuiltinImporter, + module.__loader__) + elif imp.is_frozen(name): + self.assertEqual(importlib.machinery.FrozenImporter, + module.__loader__) def test_main(): from test.support import run_unittest diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:38:04 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 22:38:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_w/_3=2E3_more_fixes_thanks_to_issue_=2317098?= Message-ID: <3YyWrX4WnKzSTF@mail.python.org> http://hg.python.org/cpython/rev/306f066e6a33 changeset: 81885:306f066e6a33 parent: 81883:370882f297a4 parent: 81884:19ea454ccdf7 user: Brett Cannon date: Fri Feb 01 16:36:29 2013 -0500 summary: Merge w/ 3.3 more fixes thanks to issue #17098 files: Lib/importlib/_bootstrap.py | 7 +- Lib/test/test_importlib/test_api.py | 9 + Python/importlib.h | 551 ++++++++------- 3 files changed, 296 insertions(+), 271 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -1724,10 +1724,13 @@ BYTECODE_SUFFIXES = DEBUG_BYTECODE_SUFFIXES module_type = type(sys) - for module in sys.modules.values(): + for name, module in sys.modules.items(): if isinstance(module, module_type): if not hasattr(module, '__loader__'): - module.__loader__ = BuiltinImporter + if name in sys.builtin_module_names: + module.__loader__ = BuiltinImporter + elif _imp.is_frozen(name): + module.__loader__ = FrozenImporter self_module = sys.modules[__name__] for builtin_name in ('_io', '_warnings', 'builtins', 
'marshal'): diff --git a/Lib/test/test_importlib/test_api.py b/Lib/test/test_importlib/test_api.py --- a/Lib/test/test_importlib/test_api.py +++ b/Lib/test/test_importlib/test_api.py @@ -1,6 +1,7 @@ from . import util import imp import importlib +from importlib import _bootstrap from importlib import machinery import sys from test import support @@ -184,6 +185,14 @@ if isinstance(module, types.ModuleType): self.assertTrue(hasattr(module, '__loader__'), '{!r} lacks a __loader__ attribute'.format(name)) + if name in sys.builtin_module_names: + self.assertIn(module.__loader__, + (importlib.machinery.BuiltinImporter, + importlib._bootstrap.BuiltinImporter)) + elif imp.is_frozen(name): + self.assertIn(module.__loader__, + (importlib.machinery.FrozenImporter, + importlib._bootstrap.FrozenImporter)) if __name__ == '__main__': diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:38:06 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 22:38:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_default_-=3E_default?= =?utf-8?q?=29=3A_merge?= Message-ID: <3YyWrZ1S6bzST6@mail.python.org> http://hg.python.org/cpython/rev/b7a0c91b2174 changeset: 81886:b7a0c91b2174 parent: 81885:306f066e6a33 parent: 81881:8f72519fd0e9 user: Brett Cannon date: Fri Feb 01 16:36:49 2013 -0500 summary: merge files: Doc/library/zipfile.rst | 17 +++- Lib/test/test_zipfile.py | 86 +++++++++++++++++++++++++-- Lib/zipfile.py | 23 ++++-- Misc/NEWS | 9 ++ Modules/_io/iobase.c | 5 +- Modules/_io/textio.c | 7 +- 6 files changed, 121 insertions(+), 26 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -242,6 +242,16 @@ to extract to. *member* can be a filename or a :class:`ZipInfo` object. 
*pwd* is the password used for encrypted files. + .. note:: + + If a member filename is an absolute path, a drive/UNC sharepoint and + leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes + ``foo/bar`` on Unix, and ``C:\foo\bar`` becomes ``foo\bar`` on Windows. + And all ``".."`` components in a member filename will be removed, e.g.: + ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal + characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) are + replaced by underscore (``_``). + .. method:: ZipFile.extractall(path=None, members=None, pwd=None) @@ -250,12 +260,9 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. warning:: + .. note:: - Never extract archives from untrusted sources without prior inspection. - It is possible that files are created outside of *path*, e.g. members - that have absolute filenames starting with ``"/"`` or filenames with two - dots ``".."``. + See :meth:`extract` note. ..
method:: ZipFile.printdir() diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -24,7 +24,7 @@ SMALL_TEST_DATA = [('_ziptest1', '1q2w3e4r5t'), ('ziptest2dir/_ziptest2', 'qawsedrftg'), - ('/ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), + ('ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), ('ziptest2dir/ziptest3dir/ziptest4dir/_ziptest3', '6y7u8i9o0p')] @@ -501,10 +501,7 @@ writtenfile = zipfp.extract(fpath) # make sure it was written to the right place - if os.path.isabs(fpath): - correctfile = os.path.join(os.getcwd(), fpath[1:]) - else: - correctfile = os.path.join(os.getcwd(), fpath) + correctfile = os.path.join(os.getcwd(), fpath) correctfile = os.path.normpath(correctfile) self.assertEqual(writtenfile, correctfile) @@ -526,10 +523,7 @@ with zipfile.ZipFile(TESTFN2, "r") as zipfp: zipfp.extractall() for fpath, fdata in SMALL_TEST_DATA: - if os.path.isabs(fpath): - outfile = os.path.join(os.getcwd(), fpath[1:]) - else: - outfile = os.path.join(os.getcwd(), fpath) + outfile = os.path.join(os.getcwd(), fpath) with open(outfile, "rb") as f: self.assertEqual(fdata.encode(), f.read()) @@ -539,6 +533,80 @@ # remove the test file subdirectories shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir')) + def check_file(self, filename, content): + self.assertTrue(os.path.isfile(filename)) + with open(filename, 'rb') as f: + self.assertEqual(f.read(), content) + + def test_extract_hackers_arcnames(self): + hacknames = [ + ('../foo/bar', 'foo/bar'), + ('foo/../bar', 'foo/bar'), + ('foo/../../bar', 'foo/bar'), + ('foo/bar/..', 'foo/bar'), + ('./../foo/bar', 'foo/bar'), + ('/foo/bar', 'foo/bar'), + ('/foo/../bar', 'foo/bar'), + ('/foo/../../bar', 'foo/bar'), + ('//foo/bar', 'foo/bar'), + ('../../foo../../ba..r', 'foo../ba..r'), + ] + if os.path.sep == '\\': # Windows. 
+ hacknames.extend([ + (r'..\foo\bar', 'foo/bar'), + (r'..\/foo\/bar', 'foo/bar'), + (r'foo/\..\/bar', 'foo/bar'), + (r'foo\/../\bar', 'foo/bar'), + (r'C:foo/bar', 'foo/bar'), + (r'C:/foo/bar', 'foo/bar'), + (r'C://foo/bar', 'foo/bar'), + (r'C:\foo\bar', 'foo/bar'), + (r'//conky/mountpoint/foo/bar', 'foo/bar'), + (r'\\conky\mountpoint\foo\bar', 'foo/bar'), + (r'///conky/mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\\conky\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//conky//mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\conky\\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//?/C:/foo/bar', 'foo/bar'), + (r'\\?\C:\foo\bar', 'foo/bar'), + (r'C:/../C:/foo/bar', 'C_/foo/bar'), + (r'a:b\c<d>e|f"g?h*i', 'b/c_d_e_f_g_h_i'), + ]) + + for arcname, fixedname in hacknames: + content = b'foobar' + arcname.encode() + with zipfile.ZipFile(TESTFN2, 'w', zipfile.ZIP_STORED) as zipfp: + zipfp.writestr(arcname, content) + + targetpath = os.path.join('target', 'subdir', 'subsub') + correctfile = os.path.join(targetpath, *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname, targetpath) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree('target') + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall(targetpath) + self.check_file(correctfile, content) + shutil.rmtree('target') + + correctfile = os.path.join(os.getcwd(), *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall() + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + os.remove(TESTFN2) + def test_writestr_compression_stored(self): zipfp = zipfile.ZipFile(TESTFN2, "w") zipfp.writestr("a.txt",
"hello world", compress_type=zipfile.ZIP_STORED) diff --git a/Lib/zipfile.py b/Lib/zipfile.py --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@ -1229,17 +1229,22 @@ """ # build the destination pathname, replacing # forward slashes to platform specific separators. - # Strip trailing path separator, unless it represents the root. - if (targetpath[-1:] in (os.path.sep, os.path.altsep) - and len(os.path.splitdrive(targetpath)[1]) > 1): - targetpath = targetpath[:-1] + arcname = member.filename.replace('/', os.path.sep) - # don't include leading "/" from file name if present - if member.filename[0] == '/': - targetpath = os.path.join(targetpath, member.filename[1:]) - else: - targetpath = os.path.join(targetpath, member.filename) + if os.path.altsep: + arcname = arcname.replace(os.path.altsep, os.path.sep) + # interpret absolute pathname as relative, remove drive letter or + # UNC path, redundant separators, "." and ".." components. + arcname = os.path.splitdrive(arcname)[1] + arcname = os.path.sep.join(x for x in arcname.split(os.path.sep) + if x not in ('', os.path.curdir, os.path.pardir)) + # filter illegal characters on Windows + if os.path.sep == '\\': + illegal = ':<>|"?*' + table = str.maketrans(illegal, '_' * len(illegal)) + arcname = arcname.translate(table) + targetpath = os.path.join(targetpath, arcname) targetpath = os.path.normpath(targetpath) # Create all upper directories if necessary. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -239,6 +239,9 @@ Library ------- +- Issue #6972: The zipfile module no longer overwrites files outside of + its destination path when extracting malicious zip files. + - Issue #4844: ZipFile now raises BadZipFile when opens a ZIP file with an incomplete "End of Central Directory" record. Original patch by Guilherme Polo and Alan McIntyre. @@ -712,6 +715,12 @@ `sha3_256`, `sha3_384` and `sha3_512`. As part of the patch some common code was moved from _hashopenssl.c to hashlib.h. 
+Extension Modules +----------------- + +- Issue #12268: The io module file object write methods no longer abort early + when one of its write system calls is interrupted (EINTR). + Tests ----- diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c --- a/Modules/_io/iobase.c +++ b/Modules/_io/iobase.c @@ -669,7 +669,10 @@ break; /* Stop Iteration */ } - res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + res = NULL; + do { + res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + } while (res == NULL && _PyIO_trap_eintr()); Py_DECREF(line); if (res == NULL) { Py_DECREF(iter); diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -1247,8 +1247,11 @@ Py_DECREF(pending); if (b == NULL) return -1; - ret = PyObject_CallMethodObjArgs(self->buffer, - _PyIO_str_write, b, NULL); + ret = NULL; + do { + ret = PyObject_CallMethodObjArgs(self->buffer, + _PyIO_str_write, b, NULL); + } while (ret == NULL && _PyIO_trap_eintr()); Py_DECREF(b); if (ret == NULL) return -1; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:38:07 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 22:38:07 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4zIC0+IDMuMyk6?= =?utf-8?q?_merge?= Message-ID: <3YyWrb5SpdzSSD@mail.python.org> http://hg.python.org/cpython/rev/4d9bcf328e64 changeset: 81887:4d9bcf328e64 branch: 3.3 parent: 81884:19ea454ccdf7 parent: 81880:29e3aa7f2f4b user: Brett Cannon date: Fri Feb 01 16:37:07 2013 -0500 summary: merge files: Doc/library/zipfile.rst | 17 +++- Lib/test/test_zipfile.py | 86 +++++++++++++++++++++++++-- Lib/zipfile.py | 23 ++++-- Misc/NEWS | 9 ++ Modules/_io/iobase.c | 5 +- Modules/_io/textio.c | 7 +- 6 files changed, 121 insertions(+), 26 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ 
-242,6 +242,16 @@ to extract to. *member* can be a filename or a :class:`ZipInfo` object. *pwd* is the password used for encrypted files. + .. note:: + + If a member filename is an absolute path, a drive/UNC sharepoint and + leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes + ``foo/bar`` on Unix, and ``C:\foo\bar`` becomes ``foo\bar`` on Windows. + And all ``".."`` components in a member filename will be removed, e.g.: + ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal + characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) are + replaced by underscore (``_``). + .. method:: ZipFile.extractall(path=None, members=None, pwd=None) @@ -250,12 +260,9 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. warning:: + .. note:: - Never extract archives from untrusted sources without prior inspection. - It is possible that files are created outside of *path*, e.g. members - that have absolute filenames starting with ``"/"`` or filenames with two - dots ``".."``. + See :meth:`extract` note. ..
method:: ZipFile.printdir() diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -24,7 +24,7 @@ SMALL_TEST_DATA = [('_ziptest1', '1q2w3e4r5t'), ('ziptest2dir/_ziptest2', 'qawsedrftg'), - ('/ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), + ('ziptest2dir/ziptest3dir/_ziptest3', 'azsxdcfvgb'), ('ziptest2dir/ziptest3dir/ziptest4dir/_ziptest3', '6y7u8i9o0p')] @@ -501,10 +501,7 @@ writtenfile = zipfp.extract(fpath) # make sure it was written to the right place - if os.path.isabs(fpath): - correctfile = os.path.join(os.getcwd(), fpath[1:]) - else: - correctfile = os.path.join(os.getcwd(), fpath) + correctfile = os.path.join(os.getcwd(), fpath) correctfile = os.path.normpath(correctfile) self.assertEqual(writtenfile, correctfile) @@ -526,10 +523,7 @@ with zipfile.ZipFile(TESTFN2, "r") as zipfp: zipfp.extractall() for fpath, fdata in SMALL_TEST_DATA: - if os.path.isabs(fpath): - outfile = os.path.join(os.getcwd(), fpath[1:]) - else: - outfile = os.path.join(os.getcwd(), fpath) + outfile = os.path.join(os.getcwd(), fpath) with open(outfile, "rb") as f: self.assertEqual(fdata.encode(), f.read()) @@ -539,6 +533,80 @@ # remove the test file subdirectories shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir')) + def check_file(self, filename, content): + self.assertTrue(os.path.isfile(filename)) + with open(filename, 'rb') as f: + self.assertEqual(f.read(), content) + + def test_extract_hackers_arcnames(self): + hacknames = [ + ('../foo/bar', 'foo/bar'), + ('foo/../bar', 'foo/bar'), + ('foo/../../bar', 'foo/bar'), + ('foo/bar/..', 'foo/bar'), + ('./../foo/bar', 'foo/bar'), + ('/foo/bar', 'foo/bar'), + ('/foo/../bar', 'foo/bar'), + ('/foo/../../bar', 'foo/bar'), + ('//foo/bar', 'foo/bar'), + ('../../foo../../ba..r', 'foo../ba..r'), + ] + if os.path.sep == '\\': # Windows. 
+ hacknames.extend([ + (r'..\foo\bar', 'foo/bar'), + (r'..\/foo\/bar', 'foo/bar'), + (r'foo/\..\/bar', 'foo/bar'), + (r'foo\/../\bar', 'foo/bar'), + (r'C:foo/bar', 'foo/bar'), + (r'C:/foo/bar', 'foo/bar'), + (r'C://foo/bar', 'foo/bar'), + (r'C:\foo\bar', 'foo/bar'), + (r'//conky/mountpoint/foo/bar', 'foo/bar'), + (r'\\conky\mountpoint\foo\bar', 'foo/bar'), + (r'///conky/mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\\conky\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//conky//mountpoint/foo/bar', 'conky/mountpoint/foo/bar'), + (r'\\conky\\mountpoint\foo\bar', 'conky/mountpoint/foo/bar'), + (r'//?/C:/foo/bar', 'foo/bar'), + (r'\\?\C:\foo\bar', 'foo/bar'), + (r'C:/../C:/foo/bar', 'C_/foo/bar'), + (r'a:b\c<d>e|f"g?h*i', 'b/c_d_e_f_g_h_i'), + ]) + + for arcname, fixedname in hacknames: + content = b'foobar' + arcname.encode() + with zipfile.ZipFile(TESTFN2, 'w', zipfile.ZIP_STORED) as zipfp: + zipfp.writestr(arcname, content) + + targetpath = os.path.join('target', 'subdir', 'subsub') + correctfile = os.path.join(targetpath, *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname, targetpath) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree('target') + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall(targetpath) + self.check_file(correctfile, content) + shutil.rmtree('target') + + correctfile = os.path.join(os.getcwd(), *fixedname.split('/')) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + writtenfile = zipfp.extract(arcname) + self.assertEqual(writtenfile, correctfile) + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + with zipfile.ZipFile(TESTFN2, 'r') as zipfp: + zipfp.extractall() + self.check_file(correctfile, content) + shutil.rmtree(fixedname.split('/')[0]) + + os.remove(TESTFN2) + def test_writestr_compression_stored(self): zipfp = zipfile.ZipFile(TESTFN2, "w") zipfp.writestr("a.txt",
"hello world", compress_type=zipfile.ZIP_STORED) diff --git a/Lib/zipfile.py b/Lib/zipfile.py --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@ -1229,17 +1229,22 @@ """ # build the destination pathname, replacing # forward slashes to platform specific separators. - # Strip trailing path separator, unless it represents the root. - if (targetpath[-1:] in (os.path.sep, os.path.altsep) - and len(os.path.splitdrive(targetpath)[1]) > 1): - targetpath = targetpath[:-1] + arcname = member.filename.replace('/', os.path.sep) - # don't include leading "/" from file name if present - if member.filename[0] == '/': - targetpath = os.path.join(targetpath, member.filename[1:]) - else: - targetpath = os.path.join(targetpath, member.filename) + if os.path.altsep: + arcname = arcname.replace(os.path.altsep, os.path.sep) + # interpret absolute pathname as relative, remove drive letter or + # UNC path, redundant separators, "." and ".." components. + arcname = os.path.splitdrive(arcname)[1] + arcname = os.path.sep.join(x for x in arcname.split(os.path.sep) + if x not in ('', os.path.curdir, os.path.pardir)) + # filter illegal characters on Windows + if os.path.sep == '\\': + illegal = ':<>|"?*' + table = str.maketrans(illegal, '_' * len(illegal)) + arcname = arcname.translate(table) + targetpath = os.path.join(targetpath, arcname) targetpath = os.path.normpath(targetpath) # Create all upper directories if necessary. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -167,6 +167,9 @@ Library ------- +- Issue #6972: The zipfile module no longer overwrites files outside of + its destination path when extracting malicious zip files. + - Issue #4844: ZipFile now raises BadZipFile when opens a ZIP file with an incomplete "End of Central Directory" record. Original patch by Guilherme Polo and Alan McIntyre. @@ -503,6 +506,12 @@ - Issue #15906: Fix a regression in `argparse` caused by the preceding change, when ``action='append'``, ``type='str'`` and ``default=[]``. 
+Extension Modules +----------------- + +- Issue #12268: The io module file object write methods no longer abort early + when one of its write system calls is interrupted (EINTR). + Tests ----- diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c --- a/Modules/_io/iobase.c +++ b/Modules/_io/iobase.c @@ -669,7 +669,10 @@ break; /* Stop Iteration */ } - res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + res = NULL; + do { + res = PyObject_CallMethodObjArgs(self, _PyIO_str_write, line, NULL); + } while (res == NULL && _PyIO_trap_eintr()); Py_DECREF(line); if (res == NULL) { Py_DECREF(iter); diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -1247,8 +1247,11 @@ Py_DECREF(pending); if (b == NULL) return -1; - ret = PyObject_CallMethodObjArgs(self->buffer, - _PyIO_str_write, b, NULL); + ret = NULL; + do { + ret = PyObject_CallMethodObjArgs(self->buffer, + _PyIO_str_write, b, NULL); + } while (ret == NULL && _PyIO_trap_eintr()); Py_DECREF(b); if (ret == NULL) return -1; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 22:40:01 2013 From: python-checkins at python.org (brett.cannon) Date: Fri, 1 Feb 2013 22:40:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_merge_from_3=2E3?= Message-ID: <3YyWtn0qGhzSSD@mail.python.org> http://hg.python.org/cpython/rev/3029319d2d8a changeset: 81888:3029319d2d8a parent: 81886:b7a0c91b2174 parent: 81887:4d9bcf328e64 user: Brett Cannon date: Fri Feb 01 16:39:50 2013 -0500 summary: merge from 3.3 files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 23:06:51 2013 From: python-checkins at python.org (ned.deily) Date: Fri, 1 Feb 2013 23:06:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2MjU2?= =?utf-8?q?=3A_OS_X_installer_now_sets_correct_permissions_for_doc_directo?= 
=?utf-8?b?cnku?= Message-ID: <3YyXTl5HtwzST0@mail.python.org> http://hg.python.org/cpython/rev/d64e0cf5f1a7 changeset: 81889:d64e0cf5f1a7 branch: 2.7 parent: 81878:c8f8708d509a user: Ned Deily date: Fri Feb 01 13:58:00 2013 -0800 summary: Issue #16256: OS X installer now sets correct permissions for doc directory. files: Mac/BuildScript/scripts/postflight.documentation | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Mac/BuildScript/scripts/postflight.documentation b/Mac/BuildScript/scripts/postflight.documentation --- a/Mac/BuildScript/scripts/postflight.documentation +++ b/Mac/BuildScript/scripts/postflight.documentation @@ -16,7 +16,7 @@ # make share/doc link in framework for command line users if [ -d "${SHARE_DIR}" ]; then - mkdir -p "${SHARE_DOCDIR}" + mkdir -m 775 -p "${SHARE_DOCDIR}" # make relative link to html doc directory ln -fhs "${SHARE_DOCDIR_TO_FWK}/${FWK_DOCDIR_SUBPATH}" "${SHARE_DOCDIR}/html" fi diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -789,6 +789,8 @@ - Issue #14018: Fix OS X Tcl/Tk framework checking when using OS X SDKs. +- Issue #16256: OS X installer now sets correct permissions for doc directory. + - Issue #8767: Restore building with --disable-unicode. Patch by Stefano Taschini. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 23:06:53 2013 From: python-checkins at python.org (ned.deily) Date: Fri, 1 Feb 2013 23:06:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2MjU2?= =?utf-8?q?=3A_OS_X_installer_now_sets_correct_permissions_for_doc_directo?= =?utf-8?b?cnku?= Message-ID: <3YyXTn0rc0zPrk@mail.python.org> http://hg.python.org/cpython/rev/e8a1b5757067 changeset: 81890:e8a1b5757067 branch: 3.2 parent: 81877:81f7bdf7bbb6 user: Ned Deily date: Fri Feb 01 13:59:42 2013 -0800 summary: Issue #16256: OS X installer now sets correct permissions for doc directory. 
files: Mac/BuildScript/scripts/postflight.documentation | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Mac/BuildScript/scripts/postflight.documentation b/Mac/BuildScript/scripts/postflight.documentation --- a/Mac/BuildScript/scripts/postflight.documentation +++ b/Mac/BuildScript/scripts/postflight.documentation @@ -16,7 +16,7 @@ # make share/doc link in framework for command line users if [ -d "${SHARE_DIR}" ]; then - mkdir -p "${SHARE_DOCDIR}" + mkdir -m 775 -p "${SHARE_DOCDIR}" # make relative link to html doc directory ln -fhs "${SHARE_DOCDIR_TO_FWK}/${FWK_DOCDIR_SUBPATH}" "${SHARE_DOCDIR}/html" fi diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -958,6 +958,8 @@ - Issue #14018: Fix OS X Tcl/Tk framework checking when using OS X SDKs. +- Issue #16256: OS X installer now sets correct permissions for doc directory. + Tools/Demos ----------- -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 23:06:54 2013 From: python-checkins at python.org (ned.deily) Date: Fri, 1 Feb 2013 23:06:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2316256=3A_merge_from_3=2E2?= Message-ID: <3YyXTp3YLdzSTZ@mail.python.org> http://hg.python.org/cpython/rev/1db5ed6a2dc2 changeset: 81891:1db5ed6a2dc2 branch: 3.3 parent: 81887:4d9bcf328e64 parent: 81890:e8a1b5757067 user: Ned Deily date: Fri Feb 01 14:05:26 2013 -0800 summary: Issue #16256: merge from 3.2 files: Mac/BuildScript/scripts/postflight.documentation | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Mac/BuildScript/scripts/postflight.documentation b/Mac/BuildScript/scripts/postflight.documentation --- a/Mac/BuildScript/scripts/postflight.documentation +++ b/Mac/BuildScript/scripts/postflight.documentation @@ -16,7 +16,7 @@ # make share/doc link in framework for command line users if [ -d "${SHARE_DIR}" ]; then - mkdir -p 
"${SHARE_DOCDIR}" + mkdir -m 775 -p "${SHARE_DOCDIR}" # make relative link to html doc directory ln -fhs "${SHARE_DOCDIR_TO_FWK}/${FWK_DOCDIR_SUBPATH}" "${SHARE_DOCDIR}/html" fi diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1313,6 +1313,8 @@ - Issue #14018: Fix OS X Tcl/Tk framework checking when using OS X SDKs. +- Issue #16256: OS X installer now sets correct permissions for doc directory. + - Issue #15431: Add _freeze_importlib project to regenerate importlib.h on Windows. Patch by Kristján Valur Jónsson. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 1 23:06:55 2013 From: python-checkins at python.org (ned.deily) Date: Fri, 1 Feb 2013 23:06:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316256=3A_merge_from_3=2E3?= Message-ID: <3YyXTq70CFzSSF@mail.python.org> http://hg.python.org/cpython/rev/bc2c40e84b58 changeset: 81892:bc2c40e84b58 parent: 81888:3029319d2d8a parent: 81891:1db5ed6a2dc2 user: Ned Deily date: Fri Feb 01 14:06:24 2013 -0800 summary: Issue #16256: merge from 3.3 files: Mac/BuildScript/scripts/postflight.documentation | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Mac/BuildScript/scripts/postflight.documentation b/Mac/BuildScript/scripts/postflight.documentation --- a/Mac/BuildScript/scripts/postflight.documentation +++ b/Mac/BuildScript/scripts/postflight.documentation @@ -16,7 +16,7 @@ # make share/doc link in framework for command line users if [ -d "${SHARE_DIR}" ]; then - mkdir -p "${SHARE_DOCDIR}" + mkdir -m 775 -p "${SHARE_DOCDIR}" # make relative link to html doc directory ln -fhs "${SHARE_DOCDIR_TO_FWK}/${FWK_DOCDIR_SUBPATH}" "${SHARE_DOCDIR}/html" fi diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1558,6 +1558,8 @@ - Issue #14018: Fix OS X Tcl/Tk framework checking when using OS X SDKs.
+- Issue #16256: OS X installer now sets correct permissions for doc directory. + - Issue #15431: Add _freeze_importlib project to regenerate importlib.h on Windows. Patch by Kristján Valur Jónsson. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 01:15:55 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 01:15:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Silence_a_-Wfo?= =?utf-8?q?rmat-extra-argument_warning_when_compiling=2E?= Message-ID: <3YybLg59DDzM75@mail.python.org> http://hg.python.org/cpython/rev/59acde449f8d changeset: 81893:59acde449f8d branch: 2.7 parent: 81889:d64e0cf5f1a7 user: Gregory P. Smith date: Fri Feb 01 16:13:27 2013 -0800 summary: Silence a -Wformat-extra-argument warning when compiling. files: Objects/weakrefobject.c | 22 +++++++++++++++------- 1 files changed, 15 insertions(+), 7 deletions(-) diff --git a/Objects/weakrefobject.c b/Objects/weakrefobject.c --- a/Objects/weakrefobject.c +++ b/Objects/weakrefobject.c @@ -167,13 +167,21 @@ PyErr_Clear(); else if (PyString_Check(nameobj)) name = PyString_AS_STRING(nameobj); - PyOS_snprintf(buffer, sizeof(buffer), - name ?
"" - : "", - self, - Py_TYPE(PyWeakref_GET_OBJECT(self))->tp_name, - PyWeakref_GET_OBJECT(self), - name); + if (name != NULL) { + PyOS_snprintf(buffer, sizeof(buffer), + "", + self, + Py_TYPE(PyWeakref_GET_OBJECT(self))->tp_name, + PyWeakref_GET_OBJECT(self), + name); + } + else { + PyOS_snprintf(buffer, sizeof(buffer), + "", + self, + Py_TYPE(PyWeakref_GET_OBJECT(self))->tp_name, + PyWeakref_GET_OBJECT(self)); + } Py_XDECREF(nameobj); } return PyString_FromString(buffer); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 01:15:57 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 01:15:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Silence_a_-Wfo?= =?utf-8?q?rmat-extra-argument_warning_when_compiling=2E?= Message-ID: <3YybLj0lw2zM75@mail.python.org> http://hg.python.org/cpython/rev/11047da3e5f6 changeset: 81894:11047da3e5f6 branch: 3.2 parent: 81890:e8a1b5757067 user: Gregory P. Smith date: Fri Feb 01 16:14:00 2013 -0800 summary: Silence a -Wformat-extra-argument warning when compiling. files: Objects/weakrefobject.c | 22 +++++++++++++++------- 1 files changed, 15 insertions(+), 7 deletions(-) diff --git a/Objects/weakrefobject.c b/Objects/weakrefobject.c --- a/Objects/weakrefobject.c +++ b/Objects/weakrefobject.c @@ -167,13 +167,21 @@ PyErr_Clear(); else if (PyUnicode_Check(nameobj)) name = _PyUnicode_AsString(nameobj); - PyOS_snprintf(buffer, sizeof(buffer), - name ? 
"" - : "", - self, - Py_TYPE(PyWeakref_GET_OBJECT(self))->tp_name, - PyWeakref_GET_OBJECT(self), - name); + if (name != NULL) { + PyOS_snprintf(buffer, sizeof(buffer), + "", + self, + Py_TYPE(PyWeakref_GET_OBJECT(self))->tp_name, + PyWeakref_GET_OBJECT(self), + name); + } + else { + PyOS_snprintf(buffer, sizeof(buffer), + "", + self, + Py_TYPE(PyWeakref_GET_OBJECT(self))->tp_name, + PyWeakref_GET_OBJECT(self)); + } Py_XDECREF(nameobj); } return PyUnicode_FromString(buffer); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 01:15:58 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 01:15:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_null_merge_from_3=2E2_=283=2E3=27s_code_already_refactored=29?= Message-ID: <3YybLk3X9jzQ0F@mail.python.org> http://hg.python.org/cpython/rev/6a4eb999d4b2 changeset: 81895:6a4eb999d4b2 branch: 3.3 parent: 81891:1db5ed6a2dc2 parent: 81894:11047da3e5f6 user: Gregory P. Smith date: Fri Feb 01 16:15:01 2013 -0800 summary: null merge from 3.2 (3.3's code already refactored) files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 01:15:59 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 01:15:59 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_null_merge_from_3=2E3?= Message-ID: <3YybLl6BzpzQ1C@mail.python.org> http://hg.python.org/cpython/rev/90f78d138dc9 changeset: 81896:90f78d138dc9 parent: 81892:bc2c40e84b58 parent: 81895:6a4eb999d4b2 user: Gregory P. 
Smith date: Fri Feb 01 16:15:45 2013 -0800 summary: null merge from 3.3 files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 02:10:27 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 02:10:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSW4gdGhlIF9oYXNo?= =?utf-8?q?lib_module=2C_only_initialize_the_static_data_for_OpenSSL=27s?= Message-ID: <3YycYb0b18zPx0@mail.python.org> http://hg.python.org/cpython/rev/09fc7f466bd4 changeset: 81897:09fc7f466bd4 branch: 2.7 parent: 81893:59acde449f8d user: Gregory P. Smith date: Fri Feb 01 17:00:14 2013 -0800 summary: In the _hashlib module, only initialize the static data for OpenSSL's constructors once, to avoid memory leaks when finalizing and re-initializing the Python interpreter. files: Modules/_hashopenssl.c | 13 ++++++++----- 1 files changed, 8 insertions(+), 5 deletions(-) diff --git a/Modules/_hashopenssl.c b/Modules/_hashopenssl.c --- a/Modules/_hashopenssl.c +++ b/Modules/_hashopenssl.c @@ -67,7 +67,7 @@ #define DEFINE_CONSTS_FOR_NEW(Name) \ - static PyObject *CONST_ ## Name ## _name_obj; \ + static PyObject *CONST_ ## Name ## _name_obj = NULL; \ static EVP_MD_CTX CONST_new_ ## Name ## _ctx; \ static EVP_MD_CTX *CONST_new_ ## Name ## _ctx_p = NULL; @@ -525,12 +525,15 @@ " hash object; optionally initialized with a string") \ } -/* used in the init function to setup a constructor */ +/* used in the init function to setup a constructor: initialize OpenSSL + constructor constants if they haven't been initialized already. 
*/ #define INIT_CONSTRUCTOR_CONSTANTS(NAME) do { \ + if (CONST_ ## NAME ## _name_obj == NULL) { \ CONST_ ## NAME ## _name_obj = PyString_FromString(#NAME); \ - if (EVP_get_digestbyname(#NAME)) { \ - CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ - EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + if (EVP_get_digestbyname(#NAME)) { \ + CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ + EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + } \ } \ } while (0); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 02:10:28 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 02:10:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSW4gdGhlIF9oYXNo?= =?utf-8?q?lib_module=2C_only_initialize_the_static_data_for_OpenSSL=27s?= Message-ID: <3YycYc398YzRC3@mail.python.org> http://hg.python.org/cpython/rev/b6792067aafa changeset: 81898:b6792067aafa branch: 3.2 parent: 81894:11047da3e5f6 user: Gregory P. Smith date: Fri Feb 01 17:05:29 2013 -0800 summary: In the _hashlib module, only initialize the static data for OpenSSL's constructors once, to avoid memory leaks when finalizing and re-initializing the Python interpreter. 
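The fix above wraps the macro body in a `CONST_<NAME>_name_obj == NULL` check so that re-running module init reuses the objects allocated the first time. The same init-once pattern can be sketched in Python; the names below (`_digest_table`, `init`) are invented for illustration and are not part of CPython:

```python
import hashlib

# Analogue of the NULL-initialized static PyObject* in the C patch:
# the table starts out empty and is only ever built once.
_digest_table = None

def init():
    """Populate the constructor table on first call; later calls reuse it."""
    global _digest_table
    if _digest_table is None:  # the guard the patch adds around the macro body
        _digest_table = {name: getattr(hashlib, name)
                         for name in ("md5", "sha1", "sha256")}
    return _digest_table

first = init()
second = init()          # simulated re-initialization of the module
assert first is second   # same objects reused, so nothing is leaked
```

Without the guard, each interpreter finalize/re-initialize cycle allocated fresh name objects while the previous ones were never released.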
files: Modules/_hashopenssl.c | 15 +++++++++------ 1 files changed, 9 insertions(+), 6 deletions(-) diff --git a/Modules/_hashopenssl.c b/Modules/_hashopenssl.c --- a/Modules/_hashopenssl.c +++ b/Modules/_hashopenssl.c @@ -70,7 +70,7 @@ #define DEFINE_CONSTS_FOR_NEW(Name) \ - static PyObject *CONST_ ## Name ## _name_obj; \ + static PyObject *CONST_ ## Name ## _name_obj = NULL; \ static EVP_MD_CTX CONST_new_ ## Name ## _ctx; \ static EVP_MD_CTX *CONST_new_ ## Name ## _ctx_p = NULL; @@ -587,12 +587,15 @@ " hash object; optionally initialized with a string") \ } -/* used in the init function to setup a constructor */ +/* used in the init function to setup a constructor: initialize OpenSSL + constructor constants if they haven't been initialized already. */ #define INIT_CONSTRUCTOR_CONSTANTS(NAME) do { \ - CONST_ ## NAME ## _name_obj = PyUnicode_FromString(#NAME); \ - if (EVP_get_digestbyname(#NAME)) { \ - CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ - EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + if (CONST_ ## NAME ## _name_obj == NULL) { \ + CONST_ ## NAME ## _name_obj = PyUnicode_FromString(#NAME); \ + if (EVP_get_digestbyname(#NAME)) { \ + CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ + EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + } \ } \ } while (0); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 02:10:29 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 02:10:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_In_the_=5Fhashlib_module=2C_only_initialize_the_static_data_fo?= =?utf-8?q?r_OpenSSL=27s?= Message-ID: <3YycYd5xjgzRGB@mail.python.org> http://hg.python.org/cpython/rev/80499534179b changeset: 81899:80499534179b branch: 3.3 parent: 81895:6a4eb999d4b2 parent: 81898:b6792067aafa user: Gregory P. 
Smith date: Fri Feb 01 17:07:39 2013 -0800 summary: In the _hashlib module, only initialize the static data for OpenSSL's constructors once, to avoid memory leaks when finalizing and re-initializing the Python interpreter. files: Modules/_hashopenssl.c | 15 +++++++++------ 1 files changed, 9 insertions(+), 6 deletions(-) diff --git a/Modules/_hashopenssl.c b/Modules/_hashopenssl.c --- a/Modules/_hashopenssl.c +++ b/Modules/_hashopenssl.c @@ -70,7 +70,7 @@ #define DEFINE_CONSTS_FOR_NEW(Name) \ - static PyObject *CONST_ ## Name ## _name_obj; \ + static PyObject *CONST_ ## Name ## _name_obj = NULL; \ static EVP_MD_CTX CONST_new_ ## Name ## _ctx; \ static EVP_MD_CTX *CONST_new_ ## Name ## _ctx_p = NULL; @@ -585,12 +585,15 @@ " hash object; optionally initialized with a string") \ } -/* used in the init function to setup a constructor */ +/* used in the init function to setup a constructor: initialize OpenSSL + constructor constants if they haven't been initialized already. */ #define INIT_CONSTRUCTOR_CONSTANTS(NAME) do { \ - CONST_ ## NAME ## _name_obj = PyUnicode_FromString(#NAME); \ - if (EVP_get_digestbyname(#NAME)) { \ - CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ - EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + if (CONST_ ## NAME ## _name_obj == NULL) { \ + CONST_ ## NAME ## _name_obj = PyUnicode_FromString(#NAME); \ + if (EVP_get_digestbyname(#NAME)) { \ + CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ + EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + } \ } \ } while (0); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 2 02:10:31 2013 From: python-checkins at python.org (gregory.p.smith) Date: Sat, 2 Feb 2013 02:10:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_In_the_=5Fhashlib_module=2C_only_initialize_the_static_d?= =?utf-8?q?ata_for_OpenSSL=27s?= Message-ID: 
<3YycYg1PD4zRC0@mail.python.org> http://hg.python.org/cpython/rev/0cc290da01b2 changeset: 81900:0cc290da01b2 parent: 81896:90f78d138dc9 parent: 81899:80499534179b user: Gregory P. Smith date: Fri Feb 01 17:07:56 2013 -0800 summary: In the _hashlib module, only initialize the static data for OpenSSL's constructors once, to avoid memory leaks when finalizing and re-initializing the Python interpreter. files: Modules/_hashopenssl.c | 15 +++++++++------ 1 files changed, 9 insertions(+), 6 deletions(-) diff --git a/Modules/_hashopenssl.c b/Modules/_hashopenssl.c --- a/Modules/_hashopenssl.c +++ b/Modules/_hashopenssl.c @@ -48,7 +48,7 @@ #define DEFINE_CONSTS_FOR_NEW(Name) \ - static PyObject *CONST_ ## Name ## _name_obj; \ + static PyObject *CONST_ ## Name ## _name_obj = NULL; \ static EVP_MD_CTX CONST_new_ ## Name ## _ctx; \ static EVP_MD_CTX *CONST_new_ ## Name ## _ctx_p = NULL; @@ -563,12 +563,15 @@ " hash object; optionally initialized with a string") \ } -/* used in the init function to setup a constructor */ +/* used in the init function to setup a constructor: initialize OpenSSL + constructor constants if they haven't been initialized already. 
*/ #define INIT_CONSTRUCTOR_CONSTANTS(NAME) do { \ - CONST_ ## NAME ## _name_obj = PyUnicode_FromString(#NAME); \ - if (EVP_get_digestbyname(#NAME)) { \ - CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ - EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + if (CONST_ ## NAME ## _name_obj == NULL) { \ + CONST_ ## NAME ## _name_obj = PyUnicode_FromString(#NAME); \ + if (EVP_get_digestbyname(#NAME)) { \ + CONST_new_ ## NAME ## _ctx_p = &CONST_new_ ## NAME ## _ctx; \ + EVP_DigestInit(CONST_new_ ## NAME ## _ctx_p, EVP_get_digestbyname(#NAME)); \ + } \ } \ } while (0); -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Sat Feb 2 06:03:26 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Sat, 02 Feb 2013 06:03:26 +0100 Subject: [Python-checkins] Daily reference leaks (0cc290da01b2): sum=2 Message-ID: results for 0cc290da01b2 on branch "default" -------------------------------------------- test_support leaked [0, -1, 1] references, sum=0 test_support leaked [0, -1, 3] memory blocks, sum=2 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflog_CTvvs', '-x'] From python-checkins at python.org Sat Feb 2 08:18:15 2013 From: python-checkins at python.org (ned.deily) Date: Sat, 2 Feb 2013 08:18:15 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE1NTg3?= =?utf-8?q?=3A_Enable_Tk_high-resolution_text_rendering_on_Macs_with?= Message-ID: <3Yymjz0CvJzPrk@mail.python.org> http://hg.python.org/cpython/rev/2274f3196a44 changeset: 81901:2274f3196a44 branch: 2.7 parent: 81897:09fc7f466bd4 user: Ned Deily date: Fri Feb 01 23:10:56 2013 -0800 summary: Issue #15587: Enable Tk high-resolution text rendering on Macs with Retina displays. Applies to Tkinter apps, such as IDLE, on OS X framework builds linked with Cocoa Tk 8.5+. 
Suggested by Kevin Walzer files: Mac/IDLE/Info.plist.in | 2 ++ Mac/Resources/app/Info.plist.in | 2 ++ Misc/NEWS | 4 ++++ 3 files changed, 8 insertions(+), 0 deletions(-) diff --git a/Mac/IDLE/Info.plist.in b/Mac/IDLE/Info.plist.in --- a/Mac/IDLE/Info.plist.in +++ b/Mac/IDLE/Info.plist.in @@ -51,6 +51,8 @@ %VERSION% CFBundleVersion %VERSION% + NSHighResolutionCapable + 45 ", 1) + self.parser.Parse(b"12345 ", 1) self.assertEqual(self.stuff, ["", "1", "", "", "2", "", "", "3", "", "4", "", "5", ""], @@ -400,7 +411,7 @@ parser = expat.ParserCreate() parser.StartElementHandler = self.StartElementHandler try: - parser.Parse("", 1) + parser.Parse(b"", 1) self.fail() except RuntimeError as e: self.assertEqual(e.args[0], 'a', @@ -436,7 +447,7 @@ self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - xml = '\n \n \n \n' + xml = b'\n \n \n \n' self.parser.Parse(xml, 1) @@ -457,7 +468,7 @@ parser = expat.ParserCreate() parser.CharacterDataHandler = handler - self.assertRaises(Exception, parser.Parse, xml) + self.assertRaises(Exception, parser.Parse, xml.encode('iso8859')) class ChardataBufferTest(unittest.TestCase): """ @@ -480,8 +491,8 @@ self.assertRaises(ValueError, f, 0) def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' + xml1 = b"" + b'a' * 512 + xml2 = b'a'*512 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_size = 512 @@ -503,9 +514,9 @@ def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) + xml1 = b"" + b'a' * 512 + xml2 = b'b' * 1024 + xml3 = b'c' * 1024 + b''; parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -532,16 +543,11 @@ parser.Parse(xml3, 1) self.assertEqual(self.n, 12) - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - def counting_handler(self, text): self.n += 1 def 
small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) + xml = b"" + b'a' * buffer_len + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_size = 1024 @@ -552,8 +558,8 @@ return self.n def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) + xml1 = b"" + b'a' * 1024 + xml2 = b'aaa' + b'a' * 1025 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -568,8 +574,8 @@ self.assertEqual(self.n, 2) def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) + xml1 = b"a" + b'a' * 1023 + xml2 = b'aaa' + b'a' * 1025 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -585,7 +591,7 @@ class MalformedInputTest(unittest.TestCase): def test1(self): - xml = "\0\r\n" + xml = b"\0\r\n" parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -594,7 +600,8 @@ self.assertEqual(str(e), 'unclosed token: line 2, column 0') def test2(self): - xml = "\r\n" + #?\xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) + xml = b"\r\n" parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -609,7 +616,7 @@ errors.messages[errors.codes[errors.XML_ERROR_SYNTAX]]) def test_expaterror(self): - xml = '<' + xml = b'<' parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -638,7 +645,7 @@ parser.UseForeignDTD(True) parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity - parser.Parse("") + parser.Parse(b"") self.assertEqual(handler_call_args, [(None, None)]) # test UseForeignDTD() is equal to UseForeignDTD(True) @@ -648,7 +655,7 @@ parser.UseForeignDTD() parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity - parser.Parse("") + parser.Parse(b"") self.assertEqual(handler_call_args, [(None, None)]) def 
test_ignore_use_foreign_dtd(self): @@ -667,7 +674,7 @@ parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity parser.Parse( - "") + b"") self.assertEqual(handler_call_args, [("bar", "baz")]) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -212,6 +212,10 @@ Library ------- +- Issue #17089: Expat parser now correctly works with string input not only when + an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and + strings larger than 2 GiB. + - Issue #16903: Popen.communicate() on Unix now accepts strings when universal_newlines is true as on Windows. diff --git a/Modules/pyexpat.c b/Modules/pyexpat.c --- a/Modules/pyexpat.c +++ b/Modules/pyexpat.c @@ -777,17 +777,52 @@ "Parse(data[, isfinal])\n\ Parse XML data. `isfinal' should be true at end of input."); +#define MAX_CHUNK_SIZE (1 << 20) + static PyObject * xmlparse_Parse(xmlparseobject *self, PyObject *args) { - char *s; - int slen; + PyObject *data; int isFinal = 0; + const char *s; + Py_ssize_t slen; + Py_buffer view; + int rc; - if (!PyArg_ParseTuple(args, "s#|i:Parse", &s, &slen, &isFinal)) + if (!PyArg_ParseTuple(args, "O|i:Parse", &data, &isFinal)) return NULL; - return get_parse_result(self, XML_Parse(self->itself, s, slen, isFinal)); + if (PyUnicode_Check(data)) { + PyObject *bytes; + bytes = PyUnicode_AsUTF8String(data); + if (bytes == NULL) + return NULL; + view.buf = NULL; + s = PyBytes_AS_STRING(bytes); + slen = PyBytes_GET_SIZE(bytes); + /* Explicitly set UTF-8 encoding. Return code ignored. 
*/ + (void)XML_SetEncoding(self->itself, "utf-8"); + } + else { + if (PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) < 0) + return NULL; + s = view.buf; + slen = view.len; + } + + while (slen > MAX_CHUNK_SIZE) { + rc = XML_Parse(self->itself, s, MAX_CHUNK_SIZE, 0); + if (!rc) + goto done; + s += MAX_CHUNK_SIZE; + slen -= MAX_CHUNK_SIZE; + } + rc = XML_Parse(self->itself, s, slen, isFinal); + +done: + if (view.buf != NULL) + PyBuffer_Release(&view); + return get_parse_result(self, rc); } /* File reading copied from cPickle */ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 4 17:32:56 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 4 Feb 2013 17:32:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317089=3A_Expat_parser_now_correctly_works_with_string?= =?utf-8?q?_input_not_only_when?= Message-ID: <3Z0Dx43G8PzSdw@mail.python.org> http://hg.python.org/cpython/rev/6c27b0e09c43 changeset: 82008:6c27b0e09c43 branch: 3.3 parent: 82004:b414b2dfd3d3 parent: 82007:3cc2a2de36e3 user: Serhiy Storchaka date: Mon Feb 04 18:28:01 2013 +0200 summary: Issue #17089: Expat parser now correctly works with string input not only when an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and strings larger than 2 GiB. 
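As a quick illustration of the user-visible behavior this change introduces, `Parse()` on Python 3.3+ accepts both bytes (fed to expat as-is) and str (encoded to UTF-8 internally, with the parser's encoding forced to UTF-8). The XML snippets here are invented for the example:

```python
from xml.parsers.expat import ParserCreate

seen = []

def start(name, attrs):
    seen.append(name)

# bytes input: passed straight to expat
p = ParserCreate()
p.StartElementHandler = start
p.Parse(b"<root><item/></root>", True)

# str input: encoded to UTF-8 behind the scenes, so non-ASCII
# content works without an explicit encoding declaration
p2 = ParserCreate()
p2.StartElementHandler = start
p2.Parse("<root><caf\xe9/></root>", True)

assert seen == ["root", "item", "root", "caf\xe9"]
```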
files: Lib/test/test_pyexpat.py | 79 +++++++++++++++------------ Misc/NEWS | 4 + Modules/pyexpat.c | 40 ++++++++++++- 3 files changed, 83 insertions(+), 40 deletions(-) diff --git a/Lib/test/test_pyexpat.py b/Lib/test/test_pyexpat.py --- a/Lib/test/test_pyexpat.py +++ b/Lib/test/test_pyexpat.py @@ -52,6 +52,7 @@ &external_entity; &skipped_entity; +\xb5 ''' @@ -195,13 +196,13 @@ "End element: 'sub2'", "External entity ref: (None, 'entity.file', None)", ('Skipped entity', ('skipped_entity', 0)), + "Character data: '\xb5'", "End element: 'root'", ] for operation, expected_operation in zip(operations, expected_operations): self.assertEqual(operation, expected_operation) - def test_unicode(self): - # Try the parse again, this time producing Unicode output + def test_parse_bytes(self): out = self.Outputter() parser = expat.ParserCreate(namespace_separator='!') self._hookup_callbacks(parser, out) @@ -213,6 +214,16 @@ # Issue #6697. self.assertRaises(AttributeError, getattr, parser, '\uD800') + def test_parse_str(self): + out = self.Outputter() + parser = expat.ParserCreate(namespace_separator='!') + self._hookup_callbacks(parser, out) + + parser.Parse(data.decode('iso-8859-1'), 1) + + operations = out.out + self._verify_parse_output(operations) + def test_parse_file(self): # Try parsing a file out = self.Outputter() @@ -269,7 +280,7 @@ L.append(name) p.StartElementHandler = collector p.EndElementHandler = collector - p.Parse(" ", 1) + p.Parse(b" ", 1) tag = L[0] self.assertEqual(len(L), 6) for entry in L: @@ -285,7 +296,7 @@ def ExternalEntityRefHandler(self, context, base, sysId, pubId): external_parser = self.parser.ExternalEntityParserCreate("") - self.parser_result = external_parser.Parse("", 1) + self.parser_result = external_parser.Parse(b"", 1) return 1 parser = expat.ParserCreate(namespace_separator='!') @@ -336,7 +347,7 @@ def test_buffering_enabled(self): # Make sure buffering is turned on self.assertTrue(self.parser.buffer_text) - self.parser.Parse("123", 1) + 
self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ['123'], "buffered text not properly collapsed") @@ -344,39 +355,39 @@ # XXX This test exposes more detail of Expat's text chunking than we # XXX like, but it tests what we need to concisely. self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) + self.parser.Parse(b"12\n34\n5", 1) self.assertEqual(self.stuff, ["", "1", "", "2", "\n", "3", "", "4\n5"], "buffering control not reacting as expected") def test2(self): - self.parser.Parse("1<2> \n 3", 1) + self.parser.Parse(b"1<2> \n 3", 1) self.assertEqual(self.stuff, ["1<2> \n 3"], "buffered text not properly collapsed") def test3(self): self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) + self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ["", "1", "", "2", "", "3"], "buffered text not properly split") def test4(self): self.setHandlers(["StartElementHandler", "EndElementHandler"]) self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) + self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ["", "", "", "", "", ""]) def test5(self): self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) + self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ["", "1", "", "", "2", "", "", "3", ""]) def test6(self): self.setHandlers(["CommentHandler", "EndElementHandler", "StartElementHandler"]) - self.parser.Parse("12345 ", 1) + self.parser.Parse(b"12345 ", 1) self.assertEqual(self.stuff, ["", "1", "", "", "2", "", "", "345", ""], "buffered text not properly split") @@ -384,7 +395,7 @@ def test7(self): self.setHandlers(["CommentHandler", "EndElementHandler", "StartElementHandler"]) - self.parser.Parse("12345 ", 1) + self.parser.Parse(b"12345 ", 1) self.assertEqual(self.stuff, ["", "1", "", "", "2", "", "", "3", "", "4", "", "5", ""], @@ -400,7 +411,7 @@ parser = expat.ParserCreate() parser.StartElementHandler = self.StartElementHandler try: - parser.Parse("", 1) + 
parser.Parse(b"", 1) self.fail() except RuntimeError as e: self.assertEqual(e.args[0], 'a', @@ -436,7 +447,7 @@ self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - xml = '\n \n \n \n' + xml = b'\n \n \n \n' self.parser.Parse(xml, 1) @@ -457,7 +468,7 @@ parser = expat.ParserCreate() parser.CharacterDataHandler = handler - self.assertRaises(Exception, parser.Parse, xml) + self.assertRaises(Exception, parser.Parse, xml.encode('iso8859')) class ChardataBufferTest(unittest.TestCase): """ @@ -480,8 +491,8 @@ self.assertRaises(ValueError, f, 0) def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' + xml1 = b"" + b'a' * 512 + xml2 = b'a'*512 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_size = 512 @@ -503,9 +514,9 @@ def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) + xml1 = b"" + b'a' * 512 + xml2 = b'b' * 1024 + xml3 = b'c' * 1024 + b''; parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -532,16 +543,11 @@ parser.Parse(xml3, 1) self.assertEqual(self.n, 12) - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - def counting_handler(self, text): self.n += 1 def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) + xml = b"" + b'a' * buffer_len + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_size = 1024 @@ -552,8 +558,8 @@ return self.n def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) + xml1 = b"" + b'a' * 1024 + xml2 = b'aaa' + b'a' * 1025 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -568,8 +574,8 @@ self.assertEqual(self.n, 2) def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" 
% ('a' * 1025) + xml1 = b"a" + b'a' * 1023 + xml2 = b'aaa' + b'a' * 1025 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -585,7 +591,7 @@ class MalformedInputTest(unittest.TestCase): def test1(self): - xml = "\0\r\n" + xml = b"\0\r\n" parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -594,7 +600,8 @@ self.assertEqual(str(e), 'unclosed token: line 2, column 0') def test2(self): - xml = "\r\n" + #?\xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) + xml = b"\r\n" parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -609,7 +616,7 @@ errors.messages[errors.codes[errors.XML_ERROR_SYNTAX]]) def test_expaterror(self): - xml = '<' + xml = b'<' parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -638,7 +645,7 @@ parser.UseForeignDTD(True) parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity - parser.Parse("") + parser.Parse(b"") self.assertEqual(handler_call_args, [(None, None)]) # test UseForeignDTD() is equal to UseForeignDTD(True) @@ -648,7 +655,7 @@ parser.UseForeignDTD() parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity - parser.Parse("") + parser.Parse(b"") self.assertEqual(handler_call_args, [(None, None)]) def test_ignore_use_foreign_dtd(self): @@ -667,7 +674,7 @@ parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity parser.Parse( - "") + b"") self.assertEqual(handler_call_args, [("bar", "baz")]) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -163,6 +163,10 @@ Library ------- +- Issue #17089: Expat parser now correctly works with string input not only when + an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and + strings larger than 2 GiB. 
+ - Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple parses nested mutating sequence. diff --git a/Modules/pyexpat.c b/Modules/pyexpat.c --- a/Modules/pyexpat.c +++ b/Modules/pyexpat.c @@ -778,17 +778,49 @@ "Parse(data[, isfinal])\n\ Parse XML data. `isfinal' should be true at end of input."); +#define MAX_CHUNK_SIZE (1 << 20) + static PyObject * xmlparse_Parse(xmlparseobject *self, PyObject *args) { - char *s; - int slen; + PyObject *data; int isFinal = 0; + const char *s; + Py_ssize_t slen; + Py_buffer view; + int rc; - if (!PyArg_ParseTuple(args, "s#|i:Parse", &s, &slen, &isFinal)) + if (!PyArg_ParseTuple(args, "O|i:Parse", &data, &isFinal)) return NULL; - return get_parse_result(self, XML_Parse(self->itself, s, slen, isFinal)); + if (PyUnicode_Check(data)) { + view.buf = NULL; + s = PyUnicode_AsUTF8AndSize(data, &slen); + if (s == NULL) + return NULL; + /* Explicitly set UTF-8 encoding. Return code ignored. */ + (void)XML_SetEncoding(self->itself, "utf-8"); + } + else { + if (PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) < 0) + return NULL; + s = view.buf; + slen = view.len; + } + + while (slen > MAX_CHUNK_SIZE) { + rc = XML_Parse(self->itself, s, MAX_CHUNK_SIZE, 0); + if (!rc) + goto done; + s += MAX_CHUNK_SIZE; + slen -= MAX_CHUNK_SIZE; + } + rc = XML_Parse(self->itself, s, slen, isFinal); + +done: + if (view.buf != NULL) + PyBuffer_Release(&view); + return get_parse_result(self, rc); } /* File reading copied from cPickle */ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 4 17:32:58 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 4 Feb 2013 17:32:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317089=3A_Expat_parser_now_correctly_works_with_?= =?utf-8?q?string_input_not_only_when?= Message-ID: <3Z0Dx60FBqzSdr@mail.python.org> http://hg.python.org/cpython/rev/c4e6e560e6f5 changeset: 82009:c4e6e560e6f5 
parent: 82005:a80abb179ba1 parent: 82008:6c27b0e09c43 user: Serhiy Storchaka date: Mon Feb 04 18:29:47 2013 +0200 summary: Issue #17089: Expat parser now correctly works with string input not only when an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and strings larger than 2 GiB. files: Lib/test/test_pyexpat.py | 79 +++++++++++++++------------ Misc/NEWS | 4 + Modules/pyexpat.c | 40 ++++++++++++- 3 files changed, 83 insertions(+), 40 deletions(-) diff --git a/Lib/test/test_pyexpat.py b/Lib/test/test_pyexpat.py --- a/Lib/test/test_pyexpat.py +++ b/Lib/test/test_pyexpat.py @@ -52,6 +52,7 @@ &external_entity; &skipped_entity; +\xb5 ''' @@ -195,13 +196,13 @@ "End element: 'sub2'", "External entity ref: (None, 'entity.file', None)", ('Skipped entity', ('skipped_entity', 0)), + "Character data: '\xb5'", "End element: 'root'", ] for operation, expected_operation in zip(operations, expected_operations): self.assertEqual(operation, expected_operation) - def test_unicode(self): - # Try the parse again, this time producing Unicode output + def test_parse_bytes(self): out = self.Outputter() parser = expat.ParserCreate(namespace_separator='!') self._hookup_callbacks(parser, out) @@ -213,6 +214,16 @@ # Issue #6697. 
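The "larger than 2 GiB" part of this fix comes from the new loop in `xmlparse_Parse()`: `XML_Parse()` takes an `int` length, so the C code now feeds expat at most `MAX_CHUNK_SIZE` (1 MiB) bytes per call. The same incremental-feed pattern, sketched from the Python side (the helper name `parse_in_chunks` is invented here):

```python
from xml.parsers.expat import ParserCreate

CHUNK = 1 << 20  # mirrors MAX_CHUNK_SIZE in the patch

def parse_in_chunks(parser, data):
    """Feed bytes to expat one bounded chunk at a time; only the last
    call passes isfinal=True, matching the loop added in the C code."""
    for off in range(0, len(data), CHUNK):
        last = off + CHUNK >= len(data)
        parser.Parse(data[off:off + CHUNK], last)

total = 0
def char_data(text):
    global total
    total += len(text)

p = ParserCreate()
p.CharacterDataHandler = char_data
payload = b"<doc>" + b"a" * (3 * CHUNK) + b"</doc>"
parse_in_chunks(p, payload)
assert total == 3 * CHUNK  # all character data arrives across chunk splits
```

Expat is a streaming parser, so splitting the input at arbitrary byte boundaries (even mid-token) is safe; that is what makes this internal chunking invisible to callers.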
self.assertRaises(AttributeError, getattr, parser, '\uD800') + def test_parse_str(self): + out = self.Outputter() + parser = expat.ParserCreate(namespace_separator='!') + self._hookup_callbacks(parser, out) + + parser.Parse(data.decode('iso-8859-1'), 1) + + operations = out.out + self._verify_parse_output(operations) + def test_parse_file(self): # Try parsing a file out = self.Outputter() @@ -269,7 +280,7 @@ L.append(name) p.StartElementHandler = collector p.EndElementHandler = collector - p.Parse(" ", 1) + p.Parse(b" ", 1) tag = L[0] self.assertEqual(len(L), 6) for entry in L: @@ -285,7 +296,7 @@ def ExternalEntityRefHandler(self, context, base, sysId, pubId): external_parser = self.parser.ExternalEntityParserCreate("") - self.parser_result = external_parser.Parse("", 1) + self.parser_result = external_parser.Parse(b"", 1) return 1 parser = expat.ParserCreate(namespace_separator='!') @@ -336,7 +347,7 @@ def test_buffering_enabled(self): # Make sure buffering is turned on self.assertTrue(self.parser.buffer_text) - self.parser.Parse("123", 1) + self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ['123'], "buffered text not properly collapsed") @@ -344,39 +355,39 @@ # XXX This test exposes more detail of Expat's text chunking than we # XXX like, but it tests what we need to concisely. 
self.setHandlers(["StartElementHandler"]) - self.parser.Parse("12\n34\n5", 1) + self.parser.Parse(b"12\n34\n5", 1) self.assertEqual(self.stuff, ["", "1", "", "2", "\n", "3", "", "4\n5"], "buffering control not reacting as expected") def test2(self): - self.parser.Parse("1<2> \n 3", 1) + self.parser.Parse(b"1<2> \n 3", 1) self.assertEqual(self.stuff, ["1<2> \n 3"], "buffered text not properly collapsed") def test3(self): self.setHandlers(["StartElementHandler"]) - self.parser.Parse("123", 1) + self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ["", "1", "", "2", "", "3"], "buffered text not properly split") def test4(self): self.setHandlers(["StartElementHandler", "EndElementHandler"]) self.parser.CharacterDataHandler = None - self.parser.Parse("123", 1) + self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ["", "", "", "", "", ""]) def test5(self): self.setHandlers(["StartElementHandler", "EndElementHandler"]) - self.parser.Parse("123", 1) + self.parser.Parse(b"123", 1) self.assertEqual(self.stuff, ["", "1", "", "", "2", "", "", "3", ""]) def test6(self): self.setHandlers(["CommentHandler", "EndElementHandler", "StartElementHandler"]) - self.parser.Parse("12345 ", 1) + self.parser.Parse(b"12345 ", 1) self.assertEqual(self.stuff, ["", "1", "", "", "2", "", "", "345", ""], "buffered text not properly split") @@ -384,7 +395,7 @@ def test7(self): self.setHandlers(["CommentHandler", "EndElementHandler", "StartElementHandler"]) - self.parser.Parse("12345 ", 1) + self.parser.Parse(b"12345 ", 1) self.assertEqual(self.stuff, ["", "1", "", "", "2", "", "", "3", "", "4", "", "5", ""], @@ -400,7 +411,7 @@ parser = expat.ParserCreate() parser.StartElementHandler = self.StartElementHandler try: - parser.Parse("", 1) + parser.Parse(b"", 1) self.fail() except RuntimeError as e: self.assertEqual(e.args[0], 'a', @@ -436,7 +447,7 @@ self.expected_list = [('s', 0, 1, 0), ('s', 5, 2, 1), ('s', 11, 3, 2), ('e', 15, 3, 6), ('e', 17, 4, 1), ('e', 22, 5, 0)] - xml = '\n \n \n 
\n' + xml = b'\n \n \n \n' self.parser.Parse(xml, 1) @@ -457,7 +468,7 @@ parser = expat.ParserCreate() parser.CharacterDataHandler = handler - self.assertRaises(Exception, parser.Parse, xml) + self.assertRaises(Exception, parser.Parse, xml.encode('iso8859')) class ChardataBufferTest(unittest.TestCase): """ @@ -480,8 +491,8 @@ self.assertRaises(ValueError, f, 0) def test_unchanged_size(self): - xml1 = ("%s" % ('a' * 512)) - xml2 = 'a'*512 + '' + xml1 = b"" + b'a' * 512 + xml2 = b'a'*512 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_size = 512 @@ -503,9 +514,9 @@ def test_disabling_buffer(self): - xml1 = "%s" % ('a' * 512) - xml2 = ('b' * 1024) - xml3 = "%s" % ('c' * 1024) + xml1 = b"" + b'a' * 512 + xml2 = b'b' * 1024 + xml3 = b'c' * 1024 + b''; parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -532,16 +543,11 @@ parser.Parse(xml3, 1) self.assertEqual(self.n, 12) - - - def make_document(self, bytes): - return ("" + bytes * 'a' + '') - def counting_handler(self, text): self.n += 1 def small_buffer_test(self, buffer_len): - xml = "%s" % ('a' * buffer_len) + xml = b"" + b'a' * buffer_len + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_size = 1024 @@ -552,8 +558,8 @@ return self.n def test_change_size_1(self): - xml1 = "%s" % ('a' * 1024) - xml2 = "aaa%s" % ('a' * 1025) + xml1 = b"" + b'a' * 1024 + xml2 = b'aaa' + b'a' * 1025 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -568,8 +574,8 @@ self.assertEqual(self.n, 2) def test_change_size_2(self): - xml1 = "a%s" % ('a' * 1023) - xml2 = "aaa%s" % ('a' * 1025) + xml1 = b"a" + b'a' * 1023 + xml2 = b'aaa' + b'a' * 1025 + b'' parser = expat.ParserCreate() parser.CharacterDataHandler = self.counting_handler parser.buffer_text = 1 @@ -585,7 +591,7 @@ class MalformedInputTest(unittest.TestCase): def 
test1(self): - xml = "\0\r\n" + xml = b"\0\r\n" parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -594,7 +600,8 @@ self.assertEqual(str(e), 'unclosed token: line 2, column 0') def test2(self): - xml = "\r\n" + #?\xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) + xml = b"\r\n" parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -609,7 +616,7 @@ errors.messages[errors.codes[errors.XML_ERROR_SYNTAX]]) def test_expaterror(self): - xml = '<' + xml = b'<' parser = expat.ParserCreate() try: parser.Parse(xml, True) @@ -638,7 +645,7 @@ parser.UseForeignDTD(True) parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity - parser.Parse("") + parser.Parse(b"") self.assertEqual(handler_call_args, [(None, None)]) # test UseForeignDTD() is equal to UseForeignDTD(True) @@ -648,7 +655,7 @@ parser.UseForeignDTD() parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity - parser.Parse("") + parser.Parse(b"") self.assertEqual(handler_call_args, [(None, None)]) def test_ignore_use_foreign_dtd(self): @@ -667,7 +674,7 @@ parser.SetParamEntityParsing(expat.XML_PARAM_ENTITY_PARSING_ALWAYS) parser.ExternalEntityRefHandler = resolve_entity parser.Parse( - "") + b"") self.assertEqual(handler_call_args, [("bar", "baz")]) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,10 @@ Library ------- +- Issue #17089: Expat parser now correctly works with string input not only when + an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and + strings larger than 2 GiB. + - Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple parses nested mutating sequence. diff --git a/Modules/pyexpat.c b/Modules/pyexpat.c --- a/Modules/pyexpat.c +++ b/Modules/pyexpat.c @@ -778,17 +778,49 @@ "Parse(data[, isfinal])\n\ Parse XML data. 
`isfinal' should be true at end of input."); +#define MAX_CHUNK_SIZE (1 << 20) + static PyObject * xmlparse_Parse(xmlparseobject *self, PyObject *args) { - char *s; - int slen; + PyObject *data; int isFinal = 0; + const char *s; + Py_ssize_t slen; + Py_buffer view; + int rc; - if (!PyArg_ParseTuple(args, "s#|i:Parse", &s, &slen, &isFinal)) + if (!PyArg_ParseTuple(args, "O|i:Parse", &data, &isFinal)) return NULL; - return get_parse_result(self, XML_Parse(self->itself, s, slen, isFinal)); + if (PyUnicode_Check(data)) { + view.buf = NULL; + s = PyUnicode_AsUTF8AndSize(data, &slen); + if (s == NULL) + return NULL; + /* Explicitly set UTF-8 encoding. Return code ignored. */ + (void)XML_SetEncoding(self->itself, "utf-8"); + } + else { + if (PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) < 0) + return NULL; + s = view.buf; + slen = view.len; + } + + while (slen > MAX_CHUNK_SIZE) { + rc = XML_Parse(self->itself, s, MAX_CHUNK_SIZE, 0); + if (!rc) + goto done; + s += MAX_CHUNK_SIZE; + slen -= MAX_CHUNK_SIZE; + } + rc = XML_Parse(self->itself, s, slen, isFinal); + +done: + if (view.buf != NULL) + PyBuffer_Release(&view); + return get_parse_result(self, rc); } /* File reading copied from cPickle */ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 4 21:25:23 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 4 Feb 2013 21:25:23 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzE2ODExOiBGaXgg?= =?utf-8?q?folding_of_headers_with_no_value_in_provisional_policies=2E?= Message-ID: <3Z0L5H6HyBzSj3@mail.python.org> http://hg.python.org/cpython/rev/e64b74227198 changeset: 82010:e64b74227198 branch: 3.3 parent: 82008:6c27b0e09c43 user: R David Murray date: Mon Feb 04 15:22:53 2013 -0500 summary: #16811: Fix folding of headers with no value in provisional policies. 
files: Lib/email/policy.py | 2 +- Lib/test/test_email/test_inversion.py | 45 +++++++++++++++ 2 files changed, 46 insertions(+), 1 deletions(-) diff --git a/Lib/email/policy.py b/Lib/email/policy.py --- a/Lib/email/policy.py +++ b/Lib/email/policy.py @@ -173,7 +173,7 @@ lines = value.splitlines() refold = (self.refold_source == 'all' or self.refold_source == 'long' and - (len(lines[0])+len(name)+2 > maxlen or + (lines and len(lines[0])+len(name)+2 > maxlen or any(len(x) > maxlen for x in lines[1:]))) if refold or refold_binary and _has_surrogates(value): return self.header_factory(name, ''.join(lines)).fold(policy=self) diff --git a/Lib/test/test_email/test_inversion.py b/Lib/test/test_email/test_inversion.py new file mode 100644 --- /dev/null +++ b/Lib/test/test_email/test_inversion.py @@ -0,0 +1,45 @@ +"""Test the parser and generator are inverses. + +Note that this is only strictly true if we are parsing RFC valid messages and +producing RFC valid messages. +""" + +import io +import unittest +from email import policy, message_from_bytes +from email.generator import BytesGenerator +from test.test_email import TestEmailBase, parameterize + +# This is like textwrap.dedent for bytes, except that it uses \r\n for the line +# separators on the rebuilt string. +def dedent(bstr): + lines = bstr.splitlines() + if not lines[0].strip(): + raise ValueError("First line must contain text") + stripamt = len(lines[0]) - len(lines[0].lstrip()) + return b'\r\n'.join( + [x[stripamt:] if len(x)>=stripamt else b'' + for x in lines]) + + + at parameterize +class TestInversion(TestEmailBase, unittest.TestCase): + + def msg_as_input(self, msg): + m = message_from_bytes(msg, policy=policy.SMTP) + b = io.BytesIO() + g = BytesGenerator(b) + g.flatten(m) + self.assertEqual(b.getvalue(), msg) + + # XXX: spaces are not preserved correctly here yet in the general case. 
+ msg_params = { + 'header_with_one_space_body': (dedent(b"""\ + From: abc at xyz.com + X-Status:\x20 + Subject: test + + foo + """),), + + } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 4 21:25:25 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 4 Feb 2013 21:25:25 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_=2316811=3A_Fix_folding_of_headers_with_no_value_i?= =?utf-8?q?n_provisional_policies=2E?= Message-ID: <3Z0L5K1t1RzSjb@mail.python.org> http://hg.python.org/cpython/rev/fe7f3e2e49ce changeset: 82011:fe7f3e2e49ce parent: 82009:c4e6e560e6f5 parent: 82010:e64b74227198 user: R David Murray date: Mon Feb 04 15:25:06 2013 -0500 summary: Merge #16811: Fix folding of headers with no value in provisional policies. files: Lib/email/policy.py | 2 +- Lib/test/test_email/test_inversion.py | 45 +++++++++++++++ 2 files changed, 46 insertions(+), 1 deletions(-) diff --git a/Lib/email/policy.py b/Lib/email/policy.py --- a/Lib/email/policy.py +++ b/Lib/email/policy.py @@ -173,7 +173,7 @@ lines = value.splitlines() refold = (self.refold_source == 'all' or self.refold_source == 'long' and - (len(lines[0])+len(name)+2 > maxlen or + (lines and len(lines[0])+len(name)+2 > maxlen or any(len(x) > maxlen for x in lines[1:]))) if refold or refold_binary and _has_surrogates(value): return self.header_factory(name, ''.join(lines)).fold(policy=self) diff --git a/Lib/test/test_email/test_inversion.py b/Lib/test/test_email/test_inversion.py new file mode 100644 --- /dev/null +++ b/Lib/test/test_email/test_inversion.py @@ -0,0 +1,45 @@ +"""Test the parser and generator are inverses. + +Note that this is only strictly true if we are parsing RFC valid messages and +producing RFC valid messages. 
+""" + +import io +import unittest +from email import policy, message_from_bytes +from email.generator import BytesGenerator +from test.test_email import TestEmailBase, parameterize + +# This is like textwrap.dedent for bytes, except that it uses \r\n for the line +# separators on the rebuilt string. +def dedent(bstr): + lines = bstr.splitlines() + if not lines[0].strip(): + raise ValueError("First line must contain text") + stripamt = len(lines[0]) - len(lines[0].lstrip()) + return b'\r\n'.join( + [x[stripamt:] if len(x)>=stripamt else b'' + for x in lines]) + + + at parameterize +class TestInversion(TestEmailBase, unittest.TestCase): + + def msg_as_input(self, msg): + m = message_from_bytes(msg, policy=policy.SMTP) + b = io.BytesIO() + g = BytesGenerator(b) + g.flatten(m) + self.assertEqual(b.getvalue(), msg) + + # XXX: spaces are not preserved correctly here yet in the general case. + msg_params = { + 'header_with_one_space_body': (dedent(b"""\ + From: abc at xyz.com + X-Status:\x20 + Subject: test + + foo + """),), + + } -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Tue Feb 5 06:01:49 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Tue, 05 Feb 2013 06:01:49 +0100 Subject: [Python-checkins] Daily reference leaks (fe7f3e2e49ce): sum=0 Message-ID: results for fe7f3e2e49ce on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogu36eg3', '-x'] From python-checkins at python.org Tue Feb 5 07:30:56 2013 From: python-checkins at python.org (raymond.hettinger) Date: Tue, 5 Feb 2013 07:30:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Minor_variable_access_clea?= =?utf-8?q?n-ups_for_deque=2Erotate=28=29=2E?= Message-ID: <3Z0bX04w3mzSsy@mail.python.org> http://hg.python.org/cpython/rev/22cac8350d68 changeset: 82012:22cac8350d68 user: Raymond Hettinger date: Tue Feb 05 01:30:46 2013 -0500 
summary: Minor variable access clean-ups for deque.rotate(). files: Modules/_collectionsmodule.c | 26 ++++++++++++------------ 1 files changed, 13 insertions(+), 13 deletions(-) diff --git a/Modules/_collectionsmodule.c b/Modules/_collectionsmodule.c --- a/Modules/_collectionsmodule.c +++ b/Modules/_collectionsmodule.c @@ -413,7 +413,7 @@ static int _deque_rotate(dequeobject *deque, Py_ssize_t n) { - Py_ssize_t i, m, len=deque->len, halflen=len>>1; + Py_ssize_t m, len=deque->len, halflen=len>>1; block *prevblock; if (len <= 1) @@ -425,13 +425,13 @@ else if (n < -halflen) n += len; } - assert(deque->len > 1); + assert(len > 1); assert(-halflen <= n && n <= halflen); deque->state++; - for (i=0 ; i 0) { if (deque->leftindex == 0) { - block *b = newblock(NULL, deque->leftblock, deque->len); + block *b = newblock(NULL, deque->leftblock, len); if (b == NULL) return -1; assert(deque->leftblock->leftlink == NULL); @@ -441,18 +441,18 @@ } assert(deque->leftindex > 0); - m = n - i; + m = n; if (m > deque->rightindex + 1) m = deque->rightindex + 1; if (m > deque->leftindex) m = deque->leftindex; - assert (m > 0 && m <= deque->len); + assert (m > 0 && m <= len); memcpy(&deque->leftblock->data[deque->leftindex - m], - &deque->rightblock->data[deque->rightindex - m + 1], + &deque->rightblock->data[deque->rightindex + 1 - m], m * sizeof(PyObject *)); deque->rightindex -= m; deque->leftindex -= m; - i += m; + n -= m; if (deque->rightindex == -1) { assert(deque->rightblock != NULL); @@ -464,9 +464,9 @@ deque->rightindex = BLOCKLEN - 1; } } - for (i=0 ; i>n ; ) { + while (n < 0) { if (deque->rightindex == BLOCKLEN - 1) { - block *b = newblock(deque->rightblock, NULL, deque->len); + block *b = newblock(deque->rightblock, NULL, len); if (b == NULL) return -1; assert(deque->rightblock->rightlink == NULL); @@ -476,18 +476,18 @@ } assert (deque->rightindex < BLOCKLEN - 1); - m = i - n; + m = -n; if (m > BLOCKLEN - deque->leftindex) m = BLOCKLEN - deque->leftindex; if (m > BLOCKLEN - 1 - 
deque->rightindex) m = BLOCKLEN - 1 - deque->rightindex; - assert (m > 0 && m <= deque->len); + assert (m > 0 && m <= len); memcpy(&deque->rightblock->data[deque->rightindex + 1], &deque->leftblock->data[deque->leftindex], m * sizeof(PyObject *)); deque->leftindex += m; deque->rightindex += m; - i -= m; + n += m; if (deque->leftindex == BLOCKLEN) { assert(deque->leftblock != deque->rightblock); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 08:26:21 2013 From: python-checkins at python.org (hynek.schlawack) Date: Tue, 5 Feb 2013 08:26:21 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzE3MDc2OiBNYWtl?= =?utf-8?q?_copying_of_xattrs_more_permissive_of_missing_FS_support?= Message-ID: <3Z0clx0wn8zSqD@mail.python.org> http://hg.python.org/cpython/rev/47c65639390d changeset: 82013:47c65639390d branch: 3.3 parent: 82010:e64b74227198 user: Hynek Schlawack date: Tue Feb 05 08:22:44 2013 +0100 summary: #17076: Make copying of xattrs more permissive of missing FS support Patch by Thomas Wouters. files: Lib/shutil.py | 8 +++++++- Lib/test/test_shutil.py | 11 +++++++++++ Misc/NEWS | 3 +++ 3 files changed, 21 insertions(+), 1 deletions(-) diff --git a/Lib/shutil.py b/Lib/shutil.py --- a/Lib/shutil.py +++ b/Lib/shutil.py @@ -142,7 +142,13 @@ """ - for name in os.listxattr(src, follow_symlinks=follow_symlinks): + try: + names = os.listxattr(src, follow_symlinks=follow_symlinks) + except OSError as e: + if e.errno not in (errno.ENOTSUP, errno.ENODATA): + raise + return + for name in names: try: value = os.getxattr(src, name, follow_symlinks=follow_symlinks) os.setxattr(dst, name, value, follow_symlinks=follow_symlinks) diff --git a/Lib/test/test_shutil.py b/Lib/test/test_shutil.py --- a/Lib/test/test_shutil.py +++ b/Lib/test/test_shutil.py @@ -449,6 +449,17 @@ self.assertIn('user.bar', os.listxattr(dst)) finally: os.setxattr = orig_setxattr + # the source filesystem not supporting xattrs should be ok, too. 
+ def _raise_on_src(fname, *, follow_symlinks=True): + if fname == src: + raise OSError(errno.ENOTSUP, 'Operation not supported') + return orig_listxattr(fname, follow_symlinks=follow_symlinks) + try: + orig_listxattr = os.listxattr + os.listxattr = _raise_on_src + shutil._copyxattr(src, dst) + finally: + os.listxattr = orig_listxattr # test that shutil.copystat copies xattrs src = os.path.join(tmp_dir, 'the_original') diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -163,6 +163,9 @@ Library ------- +- Issue #17076: Make copying of xattrs more permissive of missing FS support. + Patch by Thomas Wouters. + - Issue #17089: Expat parser now correctly works with string input not only when an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and strings larger than 2 GiB. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 08:26:22 2013 From: python-checkins at python.org (hynek.schlawack) Date: Tue, 5 Feb 2013 08:26:22 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_=2317076=3A_Make_copying_of_xattrs_more_permissive_of_mi?= =?utf-8?q?ssing_FS_support?= Message-ID: <3Z0cly3yXQzStS@mail.python.org> http://hg.python.org/cpython/rev/7ccdbd1cd213 changeset: 82014:7ccdbd1cd213 parent: 82012:22cac8350d68 parent: 82013:47c65639390d user: Hynek Schlawack date: Tue Feb 05 08:25:24 2013 +0100 summary: #17076: Make copying of xattrs more permissive of missing FS support Patch by Thomas Wouters. 
files: Lib/shutil.py | 8 +++++++- Lib/test/test_shutil.py | 11 +++++++++++ Misc/NEWS | 3 +++ 3 files changed, 21 insertions(+), 1 deletions(-) diff --git a/Lib/shutil.py b/Lib/shutil.py --- a/Lib/shutil.py +++ b/Lib/shutil.py @@ -140,7 +140,13 @@ """ - for name in os.listxattr(src, follow_symlinks=follow_symlinks): + try: + names = os.listxattr(src, follow_symlinks=follow_symlinks) + except OSError as e: + if e.errno not in (errno.ENOTSUP, errno.ENODATA): + raise + return + for name in names: try: value = os.getxattr(src, name, follow_symlinks=follow_symlinks) os.setxattr(dst, name, value, follow_symlinks=follow_symlinks) diff --git a/Lib/test/test_shutil.py b/Lib/test/test_shutil.py --- a/Lib/test/test_shutil.py +++ b/Lib/test/test_shutil.py @@ -450,6 +450,17 @@ self.assertIn('user.bar', os.listxattr(dst)) finally: os.setxattr = orig_setxattr + # the source filesystem not supporting xattrs should be ok, too. + def _raise_on_src(fname, *, follow_symlinks=True): + if fname == src: + raise OSError(errno.ENOTSUP, 'Operation not supported') + return orig_listxattr(fname, follow_symlinks=follow_symlinks) + try: + orig_listxattr = os.listxattr + os.listxattr = _raise_on_src + shutil._copyxattr(src, dst) + finally: + os.listxattr = orig_listxattr # test that shutil.copystat copies xattrs src = os.path.join(tmp_dir, 'the_original') diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,9 @@ Library ------- +- Issue #17076: Make copying of xattrs more permissive of missing FS support. + Patch by Thomas Wouters. + - Issue #17089: Expat parser now correctly works with string input not only when an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and strings larger than 2 GiB. 
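[Editor's note] A sketch of the defensive pattern the #17076 patch above introduces: an error that merely means "this filesystem has no xattr support" (ENOTSUP) or "no data" (ENODATA) is swallowed and treated as an empty attribute list, while any other OSError still propagates. The helper name `tolerant_listxattr` and the injected `lister` callable are illustrative, not part of the patch:

```python
import errno

def tolerant_listxattr(lister, path):
    # Mirror the #17076 fix: if the source filesystem reports that
    # extended attributes are unsupported (ENOTSUP) or absent (ENODATA),
    # behave as if there were simply no attributes to copy.
    try:
        return lister(path)
    except OSError as e:
        if e.errno not in (errno.ENOTSUP, errno.ENODATA):
            raise
        return []

def fake_lister(path):
    # Stand-in for os.listxattr on a filesystem without xattr support.
    raise OSError(errno.ENOTSUP, 'Operation not supported')

print(tolerant_listxattr(fake_lister, '/tmp/example'))  # -> []
```

Injecting the lister keeps the sketch runnable on systems where `os.listxattr` does not exist; `shutil._copyxattr` applies the same try/except directly around `os.listxattr`.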
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 16:14:16 2013 From: python-checkins at python.org (benjamin.peterson) Date: Tue, 5 Feb 2013 16:14:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogdG9rZW4ubWFpbiBp?= =?utf-8?q?s_now_token=2E=5Fmain?= Message-ID: <3Z0q7r57qYzSgV@mail.python.org> http://hg.python.org/cpython/rev/c2278cb6cd44 changeset: 82015:c2278cb6cd44 branch: 3.3 parent: 82013:47c65639390d user: Benjamin Peterson date: Tue Feb 05 10:11:13 2013 -0500 summary: token.main is now token._main files: Lib/symbol.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/symbol.py b/Lib/symbol.py --- a/Lib/symbol.py +++ b/Lib/symbol.py @@ -104,7 +104,7 @@ import token if len(sys.argv) == 1: sys.argv = sys.argv + ["Include/graminit.h", "Lib/symbol.py"] - token.main() + token._main() if __name__ == "__main__": main() -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 16:14:18 2013 From: python-checkins at python.org (benjamin.peterson) Date: Tue, 5 Feb 2013 16:14:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_update_symbol?= =?utf-8?q?=2Epy_for_yield_from_grammar_changes_=28closes_=2317132=29?= Message-ID: <3Z0q7t10pDzSxQ@mail.python.org> http://hg.python.org/cpython/rev/23850c3899e8 changeset: 82016:23850c3899e8 branch: 3.3 user: Benjamin Peterson date: Tue Feb 05 10:12:14 2013 -0500 summary: update symbol.py for yield from grammar changes (closes #17132) files: Lib/symbol.py | 1 + Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 0 deletions(-) diff --git a/Lib/symbol.py b/Lib/symbol.py --- a/Lib/symbol.py +++ b/Lib/symbol.py @@ -91,6 +91,7 @@ comp_if = 334 encoding_decl = 335 yield_expr = 336 +yield_arg = 337 #--end constants-- sym_name = {} diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -163,6 +163,8 @@ Library ------- +- Issue #17132: Update symbol for "yield from" grammar 
changes. + - Issue #17076: Make copying of xattrs more permissive of missing FS support. Patch by Thomas Wouters. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 16:14:19 2013 From: python-checkins at python.org (benjamin.peterson) Date: Tue, 5 Feb 2013 16:14:19 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogbWVyZ2UgMy4zICgjMTcxMzIp?= Message-ID: <3Z0q7v3g0RzSxR@mail.python.org> http://hg.python.org/cpython/rev/0fcd975765a7 changeset: 82017:0fcd975765a7 parent: 82014:7ccdbd1cd213 parent: 82016:23850c3899e8 user: Benjamin Peterson date: Tue Feb 05 10:12:31 2013 -0500 summary: merge 3.3 (#17132) files: Lib/symbol.py | 3 ++- Misc/NEWS | 2 ++ 2 files changed, 4 insertions(+), 1 deletions(-) diff --git a/Lib/symbol.py b/Lib/symbol.py --- a/Lib/symbol.py +++ b/Lib/symbol.py @@ -91,6 +91,7 @@ comp_if = 334 encoding_decl = 335 yield_expr = 336 +yield_arg = 337 #--end constants-- sym_name = {} @@ -104,7 +105,7 @@ import token if len(sys.argv) == 1: sys.argv = sys.argv + ["Include/graminit.h", "Lib/symbol.py"] - token.main() + token._main() if __name__ == "__main__": main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,8 @@ Library ------- +- Issue #17132: Update symbol for "yield from" grammar changes. + - Issue #17076: Make copying of xattrs more permissive of missing FS support. Patch by Thomas Wouters. 
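[Editor's note] For context on the #17132 change above: `symbol.py` keeps a reverse table, `sym_name`, built by looping over its own namespace (`sym_name[_value] = _name`). A minimal sketch of that reverse lookup using only the two constants visible in the patch (values taken from the diff; the explicit `constants` dict is illustrative):

```python
# Grammar nonterminal numbers from the diff.
yield_expr = 336
yield_arg = 337

# symbol.py iterates its module namespace; an explicit mapping shows
# the same value-to-name reverse table being built.
constants = {'yield_expr': yield_expr, 'yield_arg': yield_arg}
sym_name = {value: name for name, value in constants.items()}

print(sym_name[337])  # -> yield_arg
```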
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 16:14:20 2013 From: python-checkins at python.org (benjamin.peterson) Date: Tue, 5 Feb 2013 16:14:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_remain_symbol=2Emain_to_sy?= =?utf-8?q?mbol=2E=5Fmain_mirroring_token=2Epy?= Message-ID: <3Z0q7w65jSzSxh@mail.python.org> http://hg.python.org/cpython/rev/704a38a1d048 changeset: 82018:704a38a1d048 user: Benjamin Peterson date: Tue Feb 05 10:13:22 2013 -0500 summary: remain symbol.main to symbol._main mirroring token.py files: Lib/symbol.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/symbol.py b/Lib/symbol.py --- a/Lib/symbol.py +++ b/Lib/symbol.py @@ -100,7 +100,7 @@ sym_name[_value] = _name -def main(): +def _main(): import sys import token if len(sys.argv) == 1: @@ -108,4 +108,4 @@ token._main() if __name__ == "__main__": - main() + _main() -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 16:18:58 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 5 Feb 2013 16:18:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_News_item_for_?= =?utf-8?q?issue_=2316811_fix=2E?= Message-ID: <3Z0qFG6ShRzSxH@mail.python.org> http://hg.python.org/cpython/rev/4553dfcafac7 changeset: 82019:4553dfcafac7 branch: 3.3 parent: 82016:23850c3899e8 user: R David Murray date: Tue Feb 05 10:17:09 2013 -0500 summary: News item for issue #16811 fix. files: Misc/NEWS | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -163,6 +163,9 @@ Library ------- +- Issue #16811: Fix folding of headers with no value in the provisional email + policies. + - Issue #17132: Update symbol for "yield from" grammar changes. - Issue #17076: Make copying of xattrs more permissive of missing FS support. 
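[Editor's note] The one-line change in the #16811 patch above guards against a header whose value is empty: `value.splitlines()` then returns `[]`, so the old `len(lines[0])` raised `IndexError`. A standalone sketch of the corrected condition (the function name and defaults are illustrative; the boolean expression mirrors `email/policy.py` after the fix):

```python
def needs_refold(name, value, maxlen=78, refold_source='long'):
    # Condition from email/policy.py after the fix: the added
    # "lines and" short-circuits when the header value is empty.
    lines = value.splitlines()
    return bool(refold_source == 'all' or
                refold_source == 'long' and
                (lines and len(lines[0]) + len(name) + 2 > maxlen or
                 any(len(x) > maxlen for x in lines[1:])))

print(needs_refold('Subject', 'x' * 100))  # -> True  (long first line)
print(needs_refold('X-Status', ''))        # -> False (no IndexError)
```

Without the `lines and` guard, the empty-value call would raise `IndexError` instead of returning False.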
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 16:19:00 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 5 Feb 2013 16:19:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_News_item_for_issue_=2316811_fix=2E?= Message-ID: <3Z0qFJ28PCzSvF@mail.python.org> http://hg.python.org/cpython/rev/68be406e76e1 changeset: 82020:68be406e76e1 parent: 82018:704a38a1d048 parent: 82019:4553dfcafac7 user: R David Murray date: Tue Feb 05 10:18:46 2013 -0500 summary: Merge: News item for issue #16811 fix. files: Misc/NEWS | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,9 @@ Library ------- +- Issue #16811: Fix folding of headers with no value in the provisional email + policies. + - Issue #17132: Update symbol for "yield from" grammar changes. - Issue #17076: Make copying of xattrs more permissive of missing FS support. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 17:35:03 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 5 Feb 2013 17:35:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE2OTQ4OiBGaXgg?= =?utf-8?q?quopri_encoding_of_non-latin1_character_sets=2E?= Message-ID: <3Z0rx32gX0zSXS@mail.python.org> http://hg.python.org/cpython/rev/801cb3918212 changeset: 82021:801cb3918212 branch: 3.2 parent: 82007:3cc2a2de36e3 user: R David Murray date: Tue Feb 05 10:49:49 2013 -0500 summary: #16948: Fix quopri encoding of non-latin1 character sets. 
files: Lib/email/charset.py | 13 +++++++++++++ Lib/email/test/test_email.py | 21 +++++++++++++++++++++ Misc/NEWS | 4 ++++ 3 files changed, 38 insertions(+), 0 deletions(-) diff --git a/Lib/email/charset.py b/Lib/email/charset.py --- a/Lib/email/charset.py +++ b/Lib/email/charset.py @@ -392,6 +392,19 @@ string = string.encode(self.output_charset) return email.base64mime.body_encode(string) elif self.body_encoding is QP: + # quopromime.body_encode takes a string, but operates on it as if + # it were a list of byte codes. For a (minimal) history on why + # this is so, see changeset 0cf700464177. To correctly encode a + # character set, then, we must turn it into pseudo bytes via the + # latin1 charset, which will encode any byte as a single code point + # between 0 and 255, which is what body_encode is expecting. + # + # Note that this clause doesn't handle the case of a _payload that + # is already bytes. It never did, and the semantics of _payload + # being bytes has never been nailed down, so fixing that is a + # longer term TODO. 
+ if isinstance(string, str): + string = string.encode(self.output_charset).decode('latin1') return email.quoprimime.body_encode(string) else: if isinstance(string, str): diff --git a/Lib/email/test/test_email.py b/Lib/email/test/test_email.py --- a/Lib/email/test/test_email.py +++ b/Lib/email/test/test_email.py @@ -670,6 +670,27 @@ msg = MIMEText('?', _charset='euc-jp') eq(msg['content-transfer-encoding'], '7bit') + def test_qp_encode_latin1(self): + msg = MIMEText('\xe1\xf6\n', 'text', 'ISO-8859-1') + self.assertEqual(str(msg), textwrap.dedent("""\ + MIME-Version: 1.0 + Content-Type: text/text; charset="iso-8859-1" + Content-Transfer-Encoding: quoted-printable + + =E1=F6 + """)) + + def test_qp_encode_non_latin1(self): + # Issue 16948 + msg = MIMEText('\u017c\n', 'text', 'ISO-8859-2') + self.assertEqual(str(msg), textwrap.dedent("""\ + MIME-Version: 1.0 + Content-Type: text/text; charset="iso-8859-2" + Content-Transfer-Encoding: quoted-printable + + =BF + """)) + # Test long header wrapping class TestLongHeaders(TestEmailBase): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -212,6 +212,10 @@ Library ------- + +- Issue #16948: Fix quoted printable body encoding for non-latin1 character + sets in the email package. + - Issue #17089: Expat parser now correctly works with string input not only when an internal XML encoding is UTF-8 or US-ASCII. It now accepts bytes and strings larger than 2 GiB. 
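[Editor's note] The latin1 trick described in the comment above can be checked end to end with the stdlib `quopri` module (`email.quoprimime.body_encode` applies the same byte-wise quoting to the pseudo-byte string). U+017C is byte 0xBF in ISO-8859-2, which is where the `=BF` in the new tests comes from:

```python
import quopri

text = '\u017c\n'                  # LATIN SMALL LETTER Z WITH DOT ABOVE
raw = text.encode('iso-8859-2')    # b'\xbf\n': the bytes we actually want sent
# Decoding as latin1 maps each byte to the code point with the same value,
# producing the "pseudo bytes" str that the quoting code operates on.
pseudo = raw.decode('latin1')      # '\xbf\n'
assert pseudo.encode('latin1') == raw  # the round-trip is lossless

print(quopri.encodestring(raw))    # -> b'=BF\n'
```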
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 17:35:04 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 5 Feb 2013 17:35:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge=3A_=2316948=3A_Fix_quopri_encoding_of_non-latin1_charact?= =?utf-8?q?er_sets=2E?= Message-ID: <3Z0rx45Q3mzSWM@mail.python.org> http://hg.python.org/cpython/rev/e644058e8e7b changeset: 82022:e644058e8e7b branch: 3.3 parent: 82019:4553dfcafac7 parent: 82021:801cb3918212 user: R David Murray date: Tue Feb 05 10:55:27 2013 -0500 summary: Merge: #16948: Fix quopri encoding of non-latin1 character sets. files: Lib/email/charset.py | 13 +++++++++++ Lib/test/test_email/test_email.py | 21 +++++++++++++++++++ Misc/NEWS | 3 ++ 3 files changed, 37 insertions(+), 0 deletions(-) diff --git a/Lib/email/charset.py b/Lib/email/charset.py --- a/Lib/email/charset.py +++ b/Lib/email/charset.py @@ -392,6 +392,19 @@ string = string.encode(self.output_charset) return email.base64mime.body_encode(string) elif self.body_encoding is QP: + # quopromime.body_encode takes a string, but operates on it as if + # it were a list of byte codes. For a (minimal) history on why + # this is so, see changeset 0cf700464177. To correctly encode a + # character set, then, we must turn it into pseudo bytes via the + # latin1 charset, which will encode any byte as a single code point + # between 0 and 255, which is what body_encode is expecting. + # + # Note that this clause doesn't handle the case of a _payload that + # is already bytes. It never did, and the semantics of _payload + # being bytes has never been nailed down, so fixing that is a + # longer term TODO. 
+ if isinstance(string, str): + string = string.encode(self.output_charset).decode('latin1') return email.quoprimime.body_encode(string) else: if isinstance(string, str): diff --git a/Lib/test/test_email/test_email.py b/Lib/test/test_email/test_email.py --- a/Lib/test/test_email/test_email.py +++ b/Lib/test/test_email/test_email.py @@ -677,6 +677,27 @@ msg = MIMEText('?', _charset='euc-jp') eq(msg['content-transfer-encoding'], '7bit') + def test_qp_encode_latin1(self): + msg = MIMEText('\xe1\xf6\n', 'text', 'ISO-8859-1') + self.assertEqual(str(msg), textwrap.dedent("""\ + MIME-Version: 1.0 + Content-Type: text/text; charset="iso-8859-1" + Content-Transfer-Encoding: quoted-printable + + =E1=F6 + """)) + + def test_qp_encode_non_latin1(self): + # Issue 16948 + msg = MIMEText('\u017c\n', 'text', 'ISO-8859-2') + self.assertEqual(str(msg), textwrap.dedent("""\ + MIME-Version: 1.0 + Content-Type: text/text; charset="iso-8859-2" + Content-Transfer-Encoding: quoted-printable + + =BF + """)) + # Test long header wrapping class TestLongHeaders(TestEmailBase): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -163,6 +163,9 @@ Library ------- +- Issue #16948: Fix quoted printable body encoding for non-latin1 character + sets in the email package. + - Issue #16811: Fix folding of headers with no value in the provisional email policies. 
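[Editor's note] The same #16948 fix seen from the user side: on a Python that includes this change, building a `MIMEText` with a non-latin1 charset selects quoted-printable and encodes the body correctly. A sketch; the output comments assume the fixed behavior shown in the tests above (the `plain` subtype is illustrative):

```python
from email.mime.text import MIMEText

# ISO-8859-2 maps to quoted-printable body encoding in email.charset.
msg = MIMEText('\u017c\n', 'plain', 'ISO-8859-2')

print(msg['Content-Transfer-Encoding'])  # -> quoted-printable
print(msg.get_payload())                 # -> =BF
```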
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 17:35:06 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 5 Feb 2013 17:35:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=2316948=3A_Fix_quopri_encoding_of_non-latin1_c?= =?utf-8?q?haracter_sets=2E?= Message-ID: <3Z0rx614FxzSXK@mail.python.org> http://hg.python.org/cpython/rev/009dc81e8bc9 changeset: 82023:009dc81e8bc9 parent: 82020:68be406e76e1 parent: 82022:e644058e8e7b user: R David Murray date: Tue Feb 05 11:34:39 2013 -0500 summary: Merge: #16948: Fix quopri encoding of non-latin1 character sets. files: Lib/email/charset.py | 13 +++++++++++ Lib/test/test_email/test_email.py | 21 +++++++++++++++++++ Misc/NEWS | 3 ++ 3 files changed, 37 insertions(+), 0 deletions(-) diff --git a/Lib/email/charset.py b/Lib/email/charset.py --- a/Lib/email/charset.py +++ b/Lib/email/charset.py @@ -392,6 +392,19 @@ string = string.encode(self.output_charset) return email.base64mime.body_encode(string) elif self.body_encoding is QP: + # quopromime.body_encode takes a string, but operates on it as if + # it were a list of byte codes. For a (minimal) history on why + # this is so, see changeset 0cf700464177. To correctly encode a + # character set, then, we must turn it into pseudo bytes via the + # latin1 charset, which will encode any byte as a single code point + # between 0 and 255, which is what body_encode is expecting. + # + # Note that this clause doesn't handle the case of a _payload that + # is already bytes. It never did, and the semantics of _payload + # being bytes has never been nailed down, so fixing that is a + # longer term TODO. 
+ if isinstance(string, str): + string = string.encode(self.output_charset).decode('latin1') return email.quoprimime.body_encode(string) else: if isinstance(string, str): diff --git a/Lib/test/test_email/test_email.py b/Lib/test/test_email/test_email.py --- a/Lib/test/test_email/test_email.py +++ b/Lib/test/test_email/test_email.py @@ -677,6 +677,27 @@ msg = MIMEText('?', _charset='euc-jp') eq(msg['content-transfer-encoding'], '7bit') + def test_qp_encode_latin1(self): + msg = MIMEText('\xe1\xf6\n', 'text', 'ISO-8859-1') + self.assertEqual(str(msg), textwrap.dedent("""\ + MIME-Version: 1.0 + Content-Type: text/text; charset="iso-8859-1" + Content-Transfer-Encoding: quoted-printable + + =E1=F6 + """)) + + def test_qp_encode_non_latin1(self): + # Issue 16948 + msg = MIMEText('\u017c\n', 'text', 'ISO-8859-2') + self.assertEqual(str(msg), textwrap.dedent("""\ + MIME-Version: 1.0 + Content-Type: text/text; charset="iso-8859-2" + Content-Transfer-Encoding: quoted-printable + + =BF + """)) + # Test long header wrapping class TestLongHeaders(TestEmailBase): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,9 @@ Library ------- +- Issue #16948: Fix quoted printable body encoding for non-latin1 character + sets in the email package. + - Issue #16811: Fix folding of headers with no value in the provisional email policies. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 19:43:00 2013 From: python-checkins at python.org (charles-francois.natali) Date: Tue, 5 Feb 2013 19:43:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2315359=3A_Add_CAN?= =?utf-8?q?=5FBCM_protocol_support_to_the_socket_module=2E_Patch_by_Brian?= Message-ID: <3Z0vmh5zGjzSXk@mail.python.org> http://hg.python.org/cpython/rev/f714af60508d changeset: 82024:f714af60508d user: Charles-Fran?ois Natali date: Tue Feb 05 19:42:01 2013 +0100 summary: Issue #15359: Add CAN_BCM protocol support to the socket module. 
Patch by Brian Thorne. files: Doc/library/socket.rst | 32 +++++++- Lib/test/test_socket.py | 106 +++++++++++++++++++++++++-- Misc/NEWS | 3 + Modules/socketmodule.c | 17 ++++ Modules/socketmodule.h | 4 + configure | 2 +- configure.ac | 2 +- pyconfig.h.in | 3 + 8 files changed, 151 insertions(+), 18 deletions(-) diff --git a/Doc/library/socket.rst b/Doc/library/socket.rst --- a/Doc/library/socket.rst +++ b/Doc/library/socket.rst @@ -107,8 +107,8 @@ .. versionadded:: 3.3 -- Certain other address families (:const:`AF_BLUETOOTH`, :const:`AF_PACKET`) - support specific representations. +- Certain other address families (:const:`AF_BLUETOOTH`, :const:`AF_PACKET`, + :const:`AF_CAN`) support specific representations. .. XXX document them! @@ -257,6 +257,16 @@ .. versionadded:: 3.3 +.. data:: CAN_BCM + CAN_BCM_* + + CAN_BCM, in the CAN protocol family, is the broadcast manager (BCM) protocol. + Broadcast manager constants, documented in the Linux documentation, are also + defined in the socket module. + + Availability: Linux >= 2.6.25. + + .. versionadded:: 3.4 .. data:: AF_RDS PF_RDS @@ -452,13 +462,16 @@ :const:`AF_INET6`, :const:`AF_UNIX`, :const:`AF_CAN` or :const:`AF_RDS`. The socket type should be :const:`SOCK_STREAM` (the default), :const:`SOCK_DGRAM`, :const:`SOCK_RAW` or perhaps one of the other ``SOCK_`` - constants. The protocol number is usually zero and may be omitted in that - case or :const:`CAN_RAW` in case the address family is :const:`AF_CAN`. + constants. The protocol number is usually zero and may be omitted or in the + case where the address family is :const:`AF_CAN` the protocol should be one + of :const:`CAN_RAW` or :const:`CAN_BCM`. .. versionchanged:: 3.3 The AF_CAN family was added. The AF_RDS family was added. + .. versionchanged:: 3.4 + The CAN_BCM protocol was added. .. 
function:: socketpair([family[, type[, proto]]]) @@ -1331,7 +1344,16 @@ s.ioctl(socket.SIO_RCVALL, socket.RCVALL_OFF) The last example shows how to use the socket interface to communicate to a CAN -network. This example might require special priviledge:: +network using the raw socket protocol. To use CAN with the broadcast +manager protocol instead, open a socket with:: + + socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) + +After binding (:const:`CAN_RAW`) or connecting (:const:`CAN_BCM`) the socket, you +can use the :method:`socket.send`, and the :method:`socket.recv` operations (and +their counterparts) on the socket object as usual. + +This example might require special priviledge:: import socket import struct diff --git a/Lib/test/test_socket.py b/Lib/test/test_socket.py --- a/Lib/test/test_socket.py +++ b/Lib/test/test_socket.py @@ -121,6 +121,36 @@ interface = 'vcan0' bufsize = 128 + """The CAN frame structure is defined in : + + struct can_frame { + canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ + __u8 can_dlc; /* data length code: 0 .. 8 */ + __u8 data[8] __attribute__((aligned(8))); + }; + """ + can_frame_fmt = "=IB3x8s" + can_frame_size = struct.calcsize(can_frame_fmt) + + """The Broadcast Management Command frame structure is defined + in : + + struct bcm_msg_head { + __u32 opcode; + __u32 flags; + __u32 count; + struct timeval ival1, ival2; + canid_t can_id; + __u32 nframes; + struct can_frame frames[0]; + } + + `bcm_msg_head` must be 8 bytes aligned because of the `frames` member (see + `struct can_frame` definition). Must use native not standard types for packing. 
+ """ + bcm_cmd_msg_fmt = "@3I4l2I" + bcm_cmd_msg_fmt += "x" * (struct.calcsize(bcm_cmd_msg_fmt) % 8) + def setUp(self): self.s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) self.addCleanup(self.s.close) @@ -1291,10 +1321,35 @@ socket.PF_CAN socket.CAN_RAW + @unittest.skipUnless(hasattr(socket, "CAN_BCM"), + 'socket.CAN_BCM required for this test.') + def testBCMConstants(self): + socket.CAN_BCM + + # opcodes + socket.CAN_BCM_TX_SETUP # create (cyclic) transmission task + socket.CAN_BCM_TX_DELETE # remove (cyclic) transmission task + socket.CAN_BCM_TX_READ # read properties of (cyclic) transmission task + socket.CAN_BCM_TX_SEND # send one CAN frame + socket.CAN_BCM_RX_SETUP # create RX content filter subscription + socket.CAN_BCM_RX_DELETE # remove RX content filter subscription + socket.CAN_BCM_RX_READ # read properties of RX content filter subscription + socket.CAN_BCM_TX_STATUS # reply to TX_READ request + socket.CAN_BCM_TX_EXPIRED # notification on performed transmissions (count=0) + socket.CAN_BCM_RX_STATUS # reply to RX_READ request + socket.CAN_BCM_RX_TIMEOUT # cyclic message is absent + socket.CAN_BCM_RX_CHANGED # updated CAN frame (detected content change) + def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass + @unittest.skipUnless(hasattr(socket, "CAN_BCM"), + 'socket.CAN_BCM required for this test.') + def testCreateBCMSocket(self): + with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) as s: + pass + def testBindAny(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: s.bind(('', )) @@ -1327,19 +1382,8 @@ @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') - at unittest.skipUnless(thread, 'Threading required for this test.') class CANTest(ThreadedCANSocketTest): - """The CAN frame structure is defined in : - - struct can_frame { - canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ - __u8 can_dlc; /* data length 
code: 0 .. 8 */ - __u8 data[8] __attribute__((aligned(8))); - }; - """ - can_frame_fmt = "=IB3x8s" - def __init__(self, methodName='runTest'): ThreadedCANSocketTest.__init__(self, methodName=methodName) @@ -1388,6 +1432,46 @@ self.cf2 = self.build_can_frame(0x12, b'\x99\x22\x33') self.cli.send(self.cf2) + @unittest.skipUnless(hasattr(socket, "CAN_BCM"), + 'socket.CAN_BCM required for this test.') + def _testBCM(self): + cf, addr = self.cli.recvfrom(self.bufsize) + self.assertEqual(self.cf, cf) + can_id, can_dlc, data = self.dissect_can_frame(cf) + self.assertEqual(self.can_id, can_id) + self.assertEqual(self.data, data) + + @unittest.skipUnless(hasattr(socket, "CAN_BCM"), + 'socket.CAN_BCM required for this test.') + def testBCM(self): + bcm = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) + self.addCleanup(bcm.close) + bcm.connect((self.interface,)) + self.can_id = 0x123 + self.data = bytes([0xc0, 0xff, 0xee]) + self.cf = self.build_can_frame(self.can_id, self.data) + opcode = socket.CAN_BCM_TX_SEND + flags = 0 + count = 0 + ival1_seconds = ival1_usec = ival2_seconds = ival2_usec = 0 + bcm_can_id = 0x0222 + nframes = 1 + assert len(self.cf) == 16 + header = struct.pack(self.bcm_cmd_msg_fmt, + opcode, + flags, + count, + ival1_seconds, + ival1_usec, + ival2_seconds, + ival2_usec, + bcm_can_id, + nframes, + ) + header_plus_frame = header + self.cf + bytes_sent = bcm.send(header_plus_frame) + self.assertEqual(bytes_sent, len(header_plus_frame)) + @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class BasicRDSTest(unittest.TestCase): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,9 @@ Library ------- +- Issue #15359: Add CAN_BCM protocol support to the socket module. Patch by + Brian Thorne. + - Issue #16948: Fix quoted printable body encoding for non-latin1 character sets in the email package. 
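The frame layouts used by the new BCM test are plain `struct` arithmetic, so they can be sketched on any platform even though the sockets themselves are Linux-only. Note the opcode value below is illustrative: on Linux builds it would come from `socket.CAN_BCM_TX_SEND` rather than a literal.

```python
# Sketch of the packing from the new CANTest/BCM code: a struct can_frame
# is 16 bytes (=IB3x8s: can_id, can_dlc, 3 pad bytes, data[8]), and the
# native-layout bcm_msg_head command header is padded out to an 8-byte
# multiple so the trailing frames[] member stays aligned.
import struct

can_frame_fmt = "=IB3x8s"
bcm_cmd_msg_fmt = "@3I4l2I"     # opcode, flags, count, two timevals, can_id, nframes
bcm_cmd_msg_fmt += "x" * (struct.calcsize(bcm_cmd_msg_fmt) % 8)

def build_can_frame(can_id, data):
    """Pack up to 8 data bytes into a fixed-size can_frame."""
    return struct.pack(can_frame_fmt, can_id, len(data), data)  # 8s zero-pads

frame = build_can_frame(0x123, bytes([0xc0, 0xff, 0xee]))
opcode = 4  # placeholder: socket.CAN_BCM_TX_SEND on Linux builds
header = struct.pack(bcm_cmd_msg_fmt,
                     opcode, 0, 0,      # opcode, flags, count
                     0, 0, 0, 0,        # ival1 and ival2 (sec, usec)
                     0x123, 1)          # can_id, nframes
header_plus_frame = header + frame
```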
diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c --- a/Modules/socketmodule.c +++ b/Modules/socketmodule.c @@ -1598,6 +1598,8 @@ case AF_CAN: switch (s->sock_proto) { case CAN_RAW: + /* fall-through */ + case CAN_BCM: { struct sockaddr_can *addr; PyObject *interfaceName; @@ -6031,6 +6033,21 @@ PyModule_AddIntConstant(m, "CAN_RAW_LOOPBACK", CAN_RAW_LOOPBACK); PyModule_AddIntConstant(m, "CAN_RAW_RECV_OWN_MSGS", CAN_RAW_RECV_OWN_MSGS); #endif +#ifdef HAVE_LINUX_CAN_BCM_H + PyModule_AddIntConstant(m, "CAN_BCM", CAN_BCM); + PyModule_AddIntConstant(m, "CAN_BCM_TX_SETUP", TX_SETUP); + PyModule_AddIntConstant(m, "CAN_BCM_TX_DELETE", TX_DELETE); + PyModule_AddIntConstant(m, "CAN_BCM_TX_READ", TX_READ); + PyModule_AddIntConstant(m, "CAN_BCM_TX_SEND", TX_SEND); + PyModule_AddIntConstant(m, "CAN_BCM_RX_SETUP", RX_SETUP); + PyModule_AddIntConstant(m, "CAN_BCM_RX_DELETE", RX_DELETE); + PyModule_AddIntConstant(m, "CAN_BCM_RX_READ", RX_READ); + PyModule_AddIntConstant(m, "CAN_BCM_TX_STATUS", TX_STATUS); + PyModule_AddIntConstant(m, "CAN_BCM_TX_EXPIRED", TX_EXPIRED); + PyModule_AddIntConstant(m, "CAN_BCM_RX_STATUS", RX_STATUS); + PyModule_AddIntConstant(m, "CAN_BCM_RX_TIMEOUT", RX_TIMEOUT); + PyModule_AddIntConstant(m, "CAN_BCM_RX_CHANGED", RX_CHANGED); +#endif #ifdef SOL_RDS PyModule_AddIntConstant(m, "SOL_RDS", SOL_RDS); #endif diff --git a/Modules/socketmodule.h b/Modules/socketmodule.h --- a/Modules/socketmodule.h +++ b/Modules/socketmodule.h @@ -80,6 +80,10 @@ #include #endif +#ifdef HAVE_LINUX_CAN_BCM_H +#include +#endif + #ifdef HAVE_SYS_SYS_DOMAIN_H #include #endif diff --git a/configure b/configure --- a/configure +++ b/configure @@ -7224,7 +7224,7 @@ # On Linux, can.h and can/raw.h require sys/socket.h -for ac_header in linux/can.h linux/can/raw.h +for ac_header in linux/can.h linux/can/raw.h linux/can/bcm.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_compile "$LINENO" "$ac_header" "$as_ac_Header" " diff --git 
a/configure.ac b/configure.ac --- a/configure.ac +++ b/configure.ac @@ -1568,7 +1568,7 @@ ]) # On Linux, can.h and can/raw.h require sys/socket.h -AC_CHECK_HEADERS(linux/can.h linux/can/raw.h,,,[ +AC_CHECK_HEADERS(linux/can.h linux/can/raw.h linux/can/bcm.h,,,[ #ifdef HAVE_SYS_SOCKET_H #include #endif diff --git a/pyconfig.h.in b/pyconfig.h.in --- a/pyconfig.h.in +++ b/pyconfig.h.in @@ -501,6 +501,9 @@ /* Define to 1 if you have the `linkat' function. */ #undef HAVE_LINKAT +/* Define to 1 if you have the header file. */ +#undef HAVE_LINUX_CAN_BCM_H + /* Define to 1 if you have the header file. */ #undef HAVE_LINUX_CAN_H -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 21:14:35 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 5 Feb 2013 21:14:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2317122=3A_Fix_and_?= =?utf-8?q?cleanup_test=5Ffunctools=2Epy=2E?= Message-ID: <3Z0xpM3lMczSS3@mail.python.org> http://hg.python.org/cpython/rev/2a369c32f1f1 changeset: 82025:2a369c32f1f1 user: Serhiy Storchaka date: Tue Feb 05 22:12:59 2013 +0200 summary: Issue #17122: Fix and cleanup test_functools.py. files: Lib/test/test_functools.py | 71 +++++++------------------ 1 files changed, 21 insertions(+), 50 deletions(-) diff --git a/Lib/test/test_functools.py b/Lib/test/test_functools.py --- a/Lib/test/test_functools.py +++ b/Lib/test/test_functools.py @@ -8,30 +8,9 @@ import functools -original_functools = functools py_functools = support.import_fresh_module('functools', blocked=['_functools']) c_functools = support.import_fresh_module('functools', fresh=['_functools']) -class BaseTest(unittest.TestCase): - - """Base class required for testing C and Py implementations.""" - - def setUp(self): - - # The module must be explicitly set so that the proper - # interaction between the c module and the python module - # can be controlled. 
- self.partial = self.module.partial - super(BaseTest, self).setUp() - -class BaseTestC(BaseTest): - module = c_functools - -class BaseTestPy(BaseTest): - module = py_functools - -PythonPartial = py_functools.partial - def capture(*args, **kw): """capture all positional and keyword arguments""" return args, kw @@ -40,9 +19,7 @@ """ return the signature of a partial object """ return (part.func, part.args, part.keywords, part.__dict__) -class TestPartial(object): - - partial = functools.partial +class TestPartial: def test_basic_examples(self): p = self.partial(capture, 1, 2, a=10, b=20) @@ -161,12 +138,17 @@ join = self.partial(''.join) self.assertEqual(join(data), '0123456789') + at unittest.skipUnless(c_functools, 'requires the C _functools module') +class TestPartialC(TestPartial, unittest.TestCase): + if c_functools: + partial = c_functools.partial + def test_repr(self): args = (object(), object()) args_repr = ', '.join(repr(a) for a in args) kwargs = {'a': object(), 'b': object()} kwargs_repr = ', '.join("%s=%r" % (k, v) for k, v in kwargs.items()) - if self.partial is functools.partial: + if self.partial is c_functools.partial: name = 'functools.partial' else: name = self.partial.__name__ @@ -193,8 +175,6 @@ f_copy = pickle.loads(pickle.dumps(f)) self.assertEqual(signature(f), signature(f_copy)) -class TestPartialC(BaseTestC, TestPartial): - # Issue 6083: Reference counting bug def test_setstate_refcount(self): class BadSequence: @@ -214,27 +194,17 @@ "new style getargs format but argument is not a tuple", f.__setstate__, BadSequence()) -class TestPartialPy(BaseTestPy, TestPartial): +class TestPartialPy(TestPartial, unittest.TestCase): + partial = staticmethod(py_functools.partial) - def test_pickle(self): - raise unittest.SkipTest("Python implementation of partial isn't picklable") - - def test_repr(self): - raise unittest.SkipTest("Python implementation of partial uses own repr") - -class TestPartialCSubclass(TestPartialC): - +if c_functools: class 
PartialSubclass(c_functools.partial): pass - partial = staticmethod(PartialSubclass) - -class TestPartialPySubclass(TestPartialPy): - - class PartialSubclass(c_functools.partial): - pass - - partial = staticmethod(PartialSubclass) + at unittest.skipUnless(c_functools, 'requires the C _functools module') +class TestPartialCSubclass(TestPartialC): + if c_functools: + partial = PartialSubclass class TestUpdateWrapper(unittest.TestCase): @@ -482,7 +452,7 @@ d = {"one": 1, "two": 2, "three": 3} self.assertEqual(self.func(add, d), "".join(d.keys())) -class TestCmpToKey(object): +class TestCmpToKey: def test_cmp_to_key(self): def cmp1(x, y): @@ -513,7 +483,7 @@ with self.assertRaises(TypeError): key = self.cmp_to_key() # too few args with self.assertRaises(TypeError): - key = self.module.cmp_to_key(cmp1, None) # too many args + key = self.cmp_to_key(cmp1, None) # too many args key = self.cmp_to_key(cmp1) with self.assertRaises(TypeError): key() # too few args @@ -564,10 +534,12 @@ self.assertRaises(TypeError, hash, k) self.assertNotIsInstance(k, collections.Hashable) -class TestCmpToKeyC(BaseTestC, TestCmpToKey): - cmp_to_key = c_functools.cmp_to_key + at unittest.skipUnless(c_functools, 'requires the C _functools module') +class TestCmpToKeyC(TestCmpToKey, unittest.TestCase): + if c_functools: + cmp_to_key = c_functools.cmp_to_key -class TestCmpToKeyPy(BaseTestPy, TestCmpToKey): +class TestCmpToKeyPy(TestCmpToKey, unittest.TestCase): cmp_to_key = staticmethod(py_functools.cmp_to_key) class TestTotalOrdering(unittest.TestCase): @@ -842,7 +814,6 @@ TestPartialC, TestPartialPy, TestPartialCSubclass, - TestPartialPySubclass, TestUpdateWrapper, TestTotalOrdering, TestCmpToKeyC, -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 21:24:58 2013 From: python-checkins at python.org (antoine.pitrou) Date: Tue, 5 Feb 2013 21:24:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2317107=3A_Test_cli?= 
=?utf-8?q?ent-side_SNI_support_in_urllib=2Erequest_thanks_to_the_new?= Message-ID: <3Z0y2L2J31zSZx@mail.python.org> http://hg.python.org/cpython/rev/f74a12e23aaa changeset: 82026:f74a12e23aaa user: Antoine Pitrou date: Tue Feb 05 21:20:51 2013 +0100 summary: Issue #17107: Test client-side SNI support in urllib.request thanks to the new server-side SNI support in the ssl module. Initial patch by Daniel Black. files: Lib/test/ssl_servers.py | 8 +++-- Lib/test/test_httplib.py | 2 +- Lib/test/test_ssl.py | 2 +- Lib/test/test_urllib2_localnet.py | 28 ++++++++++++++++-- Lib/test/test_urllib2net.py | 22 -------------- Misc/NEWS | 4 ++ 6 files changed, 36 insertions(+), 30 deletions(-) diff --git a/Lib/test/ssl_servers.py b/Lib/test/ssl_servers.py --- a/Lib/test/ssl_servers.py +++ b/Lib/test/ssl_servers.py @@ -147,9 +147,11 @@ self.server.shutdown() -def make_https_server(case, certfile=CERTFILE, host=HOST, handler_class=None): - # we assume the certfile contains both private key and certificate - context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) +def make_https_server(case, *, context=None, certfile=CERTFILE, + host=HOST, handler_class=None): + if context is None: + context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) + # We assume the certfile contains both private key and certificate context.load_cert_chain(certfile) server = HTTPSServerThread(context, host, handler_class) flag = threading.Event() diff --git a/Lib/test/test_httplib.py b/Lib/test/test_httplib.py --- a/Lib/test/test_httplib.py +++ b/Lib/test/test_httplib.py @@ -703,7 +703,7 @@ def make_server(self, certfile): from test.ssl_servers import make_https_server - return make_https_server(self, certfile) + return make_https_server(self, certfile=certfile) def test_attributes(self): # simple test to check it's storing the timeout diff --git a/Lib/test/test_ssl.py b/Lib/test/test_ssl.py --- a/Lib/test/test_ssl.py +++ b/Lib/test/test_ssl.py @@ -1637,7 +1637,7 @@ def test_socketserver(self): """Using a SocketServer to create 
and manage SSL connections.""" - server = make_https_server(self, CERTFILE) + server = make_https_server(self, certfile=CERTFILE) # try to connect if support.verbose: sys.stdout.write('\n') diff --git a/Lib/test/test_urllib2_localnet.py b/Lib/test/test_urllib2_localnet.py --- a/Lib/test/test_urllib2_localnet.py +++ b/Lib/test/test_urllib2_localnet.py @@ -9,7 +9,10 @@ import hashlib from test import support threading = support.import_module('threading') - +try: + import ssl +except ImportError: + ssl = None here = os.path.dirname(__file__) # Self-signed cert file for 'localhost' @@ -17,6 +20,7 @@ # Self-signed cert file for 'fakehostname' CERT_fakehostname = os.path.join(here, 'keycert2.pem') + # Loopback http server infrastructure class LoopbackHttpServer(http.server.HTTPServer): @@ -353,12 +357,15 @@ def setUp(self): super(TestUrlopen, self).setUp() # Ignore proxies for localhost tests. + self.old_environ = os.environ.copy() os.environ['NO_PROXY'] = '*' self.server = None def tearDown(self): if self.server is not None: self.server.stop() + os.environ.clear() + os.environ.update(self.old_environ) super(TestUrlopen, self).tearDown() def urlopen(self, url, data=None, **kwargs): @@ -386,14 +393,14 @@ handler.port = port return handler - def start_https_server(self, responses=None, certfile=CERT_localhost): + def start_https_server(self, responses=None, **kwargs): if not hasattr(urllib.request, 'HTTPSHandler'): self.skipTest('ssl support required') from test.ssl_servers import make_https_server if responses is None: responses = [(200, [], b"we care a bit")] handler = GetRequestHandler(responses) - server = make_https_server(self, certfile=certfile, handler_class=handler) + server = make_https_server(self, handler_class=handler, **kwargs) handler.port = server.port return handler @@ -483,6 +490,21 @@ self.urlopen("https://localhost:%s/bizarre" % handler.port, cadefault=True) + def test_https_sni(self): + if ssl is None: + self.skipTest("ssl module required") + if not 
ssl.HAS_SNI: + self.skipTest("SNI support required in OpenSSL") + sni_name = None + def cb_sni(ssl_sock, server_name, initial_context): + nonlocal sni_name + sni_name = server_name + context = ssl.SSLContext(ssl.PROTOCOL_TLSv1) + context.set_servername_callback(cb_sni) + handler = self.start_https_server(context=context, certfile=CERT_localhost) + self.urlopen("https://localhost:%s" % handler.port) + self.assertEqual(sni_name, "localhost") + def test_sending_headers(self): handler = self.start_server() req = urllib.request.Request("http://localhost:%s/" % handler.port, diff --git a/Lib/test/test_urllib2net.py b/Lib/test/test_urllib2net.py --- a/Lib/test/test_urllib2net.py +++ b/Lib/test/test_urllib2net.py @@ -330,31 +330,9 @@ self.assertEqual(u.fp.fp.raw._sock.gettimeout(), 60) - at unittest.skipUnless(ssl, "requires SSL support") -class HTTPSTests(unittest.TestCase): - - def test_sni(self): - self.skipTest("test disabled - test server needed") - # Checks that Server Name Indication works, if supported by the - # OpenSSL linked to. - # The ssl module itself doesn't have server-side support for SNI, - # so we rely on a third-party test site. - expect_sni = ssl.HAS_SNI - with support.transient_internet("XXX"): - u = urllib.request.urlopen("XXX") - contents = u.readall() - if expect_sni: - self.assertIn(b"Great", contents) - self.assertNotIn(b"Unfortunately", contents) - else: - self.assertNotIn(b"Great", contents) - self.assertIn(b"Unfortunately", contents) - - def test_main(): support.requires("network") support.run_unittest(AuthTests, - HTTPSTests, OtherNetworkTests, CloseSocketTest, TimeoutTest, diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -754,6 +754,10 @@ Tests ----- +- Issue #17107: Test client-side SNI support in urllib.request thanks to + the new server-side SNI support in the ssl module. Initial patch by + Daniel Black. + - Issue #17041: Fix testing when Python is configured with the --without-doc-strings. 
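The server-side hook the new `test_https_sni` relies on can be sketched without a real handshake: the callback registered on the context receives the server name the client sent in its ClientHello and may swap certificates before the handshake completes. The 2013 patch used `ssl.PROTOCOL_TLSv1`; the protocol constant below is a modern stand-in.

```python
# Minimal sketch of wiring up a server-side SNI callback, as the new
# test_https_sni does.  No connection is made; this only shows the setup.
import ssl

sni_name = None

def cb_sni(ssl_sock, server_name, initial_context):
    # Invoked during the TLS handshake with the SNI hostname (or None).
    global sni_name
    sni_name = server_name

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
if ssl.HAS_SNI:  # server-side SNI needs OpenSSL support
    context.set_servername_callback(cb_sni)  # pass None to remove it again
```

After a client connects with `server_hostname="localhost"`, `sni_name` would hold `"localhost"`, which is exactly what the test asserts.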
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 5 23:15:10 2013 From: python-checkins at python.org (guido.van.rossum) Date: Tue, 5 Feb 2013 23:15:10 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Get_rid_of_add=5Fconnector=28?= =?utf-8?q?=29=2E_The_code_will_follow_suit_soon=2E?= Message-ID: <3Z10TV4JvRzQNp@mail.python.org> http://hg.python.org/peps/rev/abfbb4ee96a6 changeset: 4717:abfbb4ee96a6 user: Guido van Rossum date: Tue Feb 05 14:15:06 2013 -0800 summary: Get rid of add_connector(). The code will follow suit soon. files: pep-3156.txt | 7 ------- 1 files changed, 0 insertions(+), 7 deletions(-) diff --git a/pep-3156.txt b/pep-3156.txt --- a/pep-3156.txt +++ b/pep-3156.txt @@ -399,13 +399,6 @@ - ``remove_writer(fd)``. This is to ``add_writer()`` as ``remove_reader()`` is to ``add_reader()``. -- ``add_connector(fd, callback, *args)``. Like ``add_writer()`` but - meant to wait for ``connect()`` operations, which on some platforms - require different handling (e.g. ``WSAPoll()`` on Windows). - -- ``remove_connector(fd)``. This is to ``remove_writer()`` as - ``add_connector()`` is to ``add_writer()``. - TBD: What about multiple callbacks per fd? The current semantics is that ``add_reader()/add_writer()`` replace a previously registered callback. 
Change this to raise an exception if a callback is already -- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Wed Feb 6 05:59:32 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Wed, 06 Feb 2013 05:59:32 +0100 Subject: [Python-checkins] Daily reference leaks (f74a12e23aaa): sum=0 Message-ID: results for f74a12e23aaa on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogSBxIDr', '-x'] From python-checkins at python.org Wed Feb 6 09:38:54 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 6 Feb 2013 09:38:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2NzIz?= =?utf-8?q?=3A_httplib=2EHTTPResponse_no_longer_marked_closed_when_the_con?= =?utf-8?q?nection?= Message-ID: <3Z1GKB4qCDzSPv@mail.python.org> http://hg.python.org/cpython/rev/6cc5bbfcf04e changeset: 82027:6cc5bbfcf04e branch: 3.2 parent: 82021:801cb3918212 user: Serhiy Storchaka date: Wed Feb 06 10:31:57 2013 +0200 summary: Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. files: Lib/http/client.py | 34 +++++++++++++++------------ Lib/test/test_httplib.py | 18 ++++++++++++++ Misc/NEWS | 2 + 3 files changed, 39 insertions(+), 15 deletions(-) diff --git a/Lib/http/client.py b/Lib/http/client.py --- a/Lib/http/client.py +++ b/Lib/http/client.py @@ -324,7 +324,7 @@ # empty version will cause next test to fail. 
version = "" if not version.startswith("HTTP/"): - self.close() + self._close_conn() raise BadStatusLine(line) # The status code is a three-digit number @@ -446,22 +446,25 @@ # otherwise, assume it will close return True + def _close_conn(self): + fp = self.fp + self.fp = None + fp.close() + def close(self): + super().close() # set "closed" flag if self.fp: - self.fp.close() - self.fp = None + self._close_conn() # These implementations are for the benefit of io.BufferedReader. # XXX This class should probably be revised to act more like # the "raw stream" that BufferedReader expects. - @property - def closed(self): - return self.isclosed() - def flush(self): - self.fp.flush() + super().flush() + if self.fp: + self.fp.flush() def readable(self): return True @@ -469,6 +472,7 @@ # End of "raw stream" methods def isclosed(self): + """True if the connection is closed.""" # NOTE: it is possible that we will not ever call self.close(). This # case occurs when will_close is TRUE, length is None, and we # read up to the last byte, but NOT past it. @@ -482,7 +486,7 @@ return b"" if self._method == "HEAD": - self.close() + self._close_conn() return b"" if self.chunked: @@ -496,10 +500,10 @@ try: s = self._safe_read(self.length) except IncompleteRead: - self.close() + self._close_conn() raise self.length = 0 - self.close() # we read everything + self._close_conn() # we read everything return s if self.length is not None: @@ -514,11 +518,11 @@ if not s: # Ideally, we would raise IncompleteRead if the content-length # wasn't satisfied, but it might break compatibility. 
- self.close() + self._close_conn() elif self.length is not None: self.length -= len(s) if not self.length: - self.close() + self._close_conn() return s @@ -539,7 +543,7 @@ except ValueError: # close the connection as protocol synchronisation is # probably lost - self.close() + self._close_conn() raise IncompleteRead(b''.join(value)) if chunk_left == 0: break @@ -576,7 +580,7 @@ break # we read everything; close the "file" - self.close() + self._close_conn() return b''.join(value) diff --git a/Lib/test/test_httplib.py b/Lib/test/test_httplib.py --- a/Lib/test/test_httplib.py +++ b/Lib/test/test_httplib.py @@ -164,6 +164,9 @@ resp.begin() self.assertEqual(resp.read(), b"Text") self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText" sock = FakeSocket(body) @@ -185,6 +188,9 @@ self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when @@ -198,6 +204,9 @@ self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_reads_incomplete_body(self): # if the server shuts down the connection before the whole @@ -211,6 +220,9 @@ self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port @@ -355,6 +367,9 @@ self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_negative_content_length(self): 
sock = FakeSocket( @@ -430,6 +445,9 @@ resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) class OfflineTest(TestCase): def test_responses(self): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -212,6 +212,8 @@ Library ------- +- Issue #16723: httplib.HTTPResponse no longer marked closed when the connection + is automatically closed. - Issue #16948: Fix quoted printable body encoding for non-latin1 character sets in the email package. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 6 09:38:56 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 6 Feb 2013 09:38:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2316723=3A_httplib=2EHTTPResponse_no_longer_marked_clos?= =?utf-8?q?ed_when_the_connection?= Message-ID: <3Z1GKD20FzzSZL@mail.python.org> http://hg.python.org/cpython/rev/0461ed77ee4e changeset: 82028:0461ed77ee4e branch: 3.3 parent: 82022:e644058e8e7b parent: 82027:6cc5bbfcf04e user: Serhiy Storchaka date: Wed Feb 06 10:35:40 2013 +0200 summary: Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. files: Lib/http/client.py | 38 +++++++++++++++------------ Lib/test/test_httplib.py | 24 +++++++++++++++++ Misc/NEWS | 3 ++ 3 files changed, 48 insertions(+), 17 deletions(-) diff --git a/Lib/http/client.py b/Lib/http/client.py --- a/Lib/http/client.py +++ b/Lib/http/client.py @@ -332,7 +332,7 @@ # empty version will cause next test to fail. 
version = "" if not version.startswith("HTTP/"): - self.close() + self._close_conn() raise BadStatusLine(line) # The status code is a three-digit number @@ -454,22 +454,25 @@ # otherwise, assume it will close return True + def _close_conn(self): + fp = self.fp + self.fp = None + fp.close() + def close(self): + super().close() # set "closed" flag if self.fp: - self.fp.close() - self.fp = None + self._close_conn() # These implementations are for the benefit of io.BufferedReader. # XXX This class should probably be revised to act more like # the "raw stream" that BufferedReader expects. - @property - def closed(self): - return self.isclosed() - def flush(self): - self.fp.flush() + super().flush() + if self.fp: + self.fp.flush() def readable(self): return True @@ -477,6 +480,7 @@ # End of "raw stream" methods def isclosed(self): + """True if the connection is closed.""" # NOTE: it is possible that we will not ever call self.close(). This # case occurs when will_close is TRUE, length is None, and we # read up to the last byte, but NOT past it. @@ -490,7 +494,7 @@ return b"" if self._method == "HEAD": - self.close() + self._close_conn() return b"" if amt is not None: @@ -510,10 +514,10 @@ try: s = self._safe_read(self.length) except IncompleteRead: - self.close() + self._close_conn() raise self.length = 0 - self.close() # we read everything + self._close_conn() # we read everything return s def readinto(self, b): @@ -521,7 +525,7 @@ return 0 if self._method == "HEAD": - self.close() + self._close_conn() return 0 if self.chunked: @@ -539,11 +543,11 @@ if not n: # Ideally, we would raise IncompleteRead if the content-length # wasn't satisfied, but it might break compatibility. 
- self.close() + self._close_conn() elif self.length is not None: self.length -= n if not self.length: - self.close() + self._close_conn() return n def _read_next_chunk_size(self): @@ -559,7 +563,7 @@ except ValueError: # close the connection as protocol synchronisation is # probably lost - self.close() + self._close_conn() raise def _read_and_discard_trailer(self): @@ -597,7 +601,7 @@ self._read_and_discard_trailer() # we read everything; close the "file" - self.close() + self._close_conn() return b''.join(value) @@ -638,7 +642,7 @@ self._read_and_discard_trailer() # we read everything; close the "file" - self.close() + self._close_conn() return total_bytes diff --git a/Lib/test/test_httplib.py b/Lib/test/test_httplib.py --- a/Lib/test/test_httplib.py +++ b/Lib/test/test_httplib.py @@ -164,6 +164,9 @@ resp.begin() self.assertEqual(resp.read(), b"Text") self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText" sock = FakeSocket(body) @@ -185,6 +188,9 @@ self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_readintos(self): # if we have a length, the system knows when to close itself @@ -202,6 +208,9 @@ self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when @@ -215,6 +224,9 @@ self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_readintos_no_content_length(self): # when no length is present, the socket should be gracefully closed when @@ -266,6 +278,9 
@@ n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port @@ -493,6 +508,9 @@ self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_readinto_chunked_head(self): chunked_start = ( @@ -513,6 +531,9 @@ self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_negative_content_length(self): sock = FakeSocket( @@ -588,6 +609,9 @@ resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) class OfflineTest(TestCase): def test_responses(self): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -163,6 +163,9 @@ Library ------- +- Issue #16723: httplib.HTTPResponse no longer marked closed when the connection + is automatically closed. + - Issue #16948: Fix quoted printable body encoding for non-latin1 character sets in the email package. 
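The refactoring merged above separates two notions the old code conflated: ``isclosed()`` (the connection has been released, e.g. because the body was fully read) and the io-level ``closed`` flag (the user explicitly called ``close()``). A minimal sketch of that pattern — class and attribute names are simplified for illustration, not the real ``http.client`` internals:

```python
import io

class Response(io.RawIOBase):
    """Toy model of the HTTPResponse close()/_close_conn() split."""

    def __init__(self, fp):
        self.fp = fp

    def _close_conn(self):
        # Release the underlying file WITHOUT marking the stream closed.
        fp = self.fp
        self.fp = None
        fp.close()

    def close(self):
        super().close()          # sets the io-level "closed" flag
        if self.fp:
            self._close_conn()

    def isclosed(self):
        """True once the connection (not the stream) is closed."""
        return self.fp is None

resp = Response(io.BytesIO(b"body"))
resp._close_conn()               # simulate reading the body to EOF
assert resp.isclosed() and not resp.closed
resp.close()                     # only now is the stream itself closed
assert resp.isclosed() and resp.closed
```

This is exactly the behaviour the new ``assertFalse(resp.closed)`` / ``resp.close()`` / ``assertTrue(resp.closed)`` test triplets assert.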
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 6 09:38:57 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 6 Feb 2013 09:38:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316723=3A_httplib=2EHTTPResponse_no_longer_marke?= =?utf-8?q?d_closed_when_the_connection?= Message-ID: <3Z1GKF60SPzSZl@mail.python.org> http://hg.python.org/cpython/rev/5f8c68281d18 changeset: 82029:5f8c68281d18 parent: 82026:f74a12e23aaa parent: 82028:0461ed77ee4e user: Serhiy Storchaka date: Wed Feb 06 10:37:19 2013 +0200 summary: Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. files: Lib/http/client.py | 38 +++++++++++++++------------ Lib/test/test_httplib.py | 24 +++++++++++++++++ Misc/NEWS | 3 ++ 3 files changed, 48 insertions(+), 17 deletions(-) diff --git a/Lib/http/client.py b/Lib/http/client.py --- a/Lib/http/client.py +++ b/Lib/http/client.py @@ -332,7 +332,7 @@ # empty version will cause next test to fail. version = "" if not version.startswith("HTTP/"): - self.close() + self._close_conn() raise BadStatusLine(line) # The status code is a three-digit number @@ -454,22 +454,25 @@ # otherwise, assume it will close return True + def _close_conn(self): + fp = self.fp + self.fp = None + fp.close() + def close(self): + super().close() # set "closed" flag if self.fp: - self.fp.close() - self.fp = None + self._close_conn() # These implementations are for the benefit of io.BufferedReader. # XXX This class should probably be revised to act more like # the "raw stream" that BufferedReader expects. 
- @property - def closed(self): - return self.isclosed() - def flush(self): - self.fp.flush() + super().flush() + if self.fp: + self.fp.flush() def readable(self): return True @@ -477,6 +480,7 @@ # End of "raw stream" methods def isclosed(self): + """True if the connection is closed.""" # NOTE: it is possible that we will not ever call self.close(). This # case occurs when will_close is TRUE, length is None, and we # read up to the last byte, but NOT past it. @@ -490,7 +494,7 @@ return b"" if self._method == "HEAD": - self.close() + self._close_conn() return b"" if amt is not None: @@ -510,10 +514,10 @@ try: s = self._safe_read(self.length) except IncompleteRead: - self.close() + self._close_conn() raise self.length = 0 - self.close() # we read everything + self._close_conn() # we read everything return s def readinto(self, b): @@ -521,7 +525,7 @@ return 0 if self._method == "HEAD": - self.close() + self._close_conn() return 0 if self.chunked: @@ -539,11 +543,11 @@ if not n: # Ideally, we would raise IncompleteRead if the content-length # wasn't satisfied, but it might break compatibility. 
- self.close() + self._close_conn() elif self.length is not None: self.length -= n if not self.length: - self.close() + self._close_conn() return n def _read_next_chunk_size(self): @@ -559,7 +563,7 @@ except ValueError: # close the connection as protocol synchronisation is # probably lost - self.close() + self._close_conn() raise def _read_and_discard_trailer(self): @@ -597,7 +601,7 @@ self._read_and_discard_trailer() # we read everything; close the "file" - self.close() + self._close_conn() return b''.join(value) @@ -638,7 +642,7 @@ self._read_and_discard_trailer() # we read everything; close the "file" - self.close() + self._close_conn() return total_bytes diff --git a/Lib/test/test_httplib.py b/Lib/test/test_httplib.py --- a/Lib/test/test_httplib.py +++ b/Lib/test/test_httplib.py @@ -166,6 +166,9 @@ resp.begin() self.assertEqual(resp.read(), b"Text") self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText" sock = FakeSocket(body) @@ -187,6 +190,9 @@ self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_readintos(self): # if we have a length, the system knows when to close itself @@ -204,6 +210,9 @@ self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when @@ -217,6 +226,9 @@ self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_partial_readintos_no_content_length(self): # when no length is present, the socket should be gracefully closed when @@ -268,6 +280,9 
@@ n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port @@ -495,6 +510,9 @@ self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_readinto_chunked_head(self): chunked_start = ( @@ -515,6 +533,9 @@ self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_negative_content_length(self): sock = FakeSocket( @@ -590,6 +611,9 @@ resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) + self.assertFalse(resp.closed) + resp.close() + self.assertTrue(resp.closed) def test_delayed_ack_opt(self): # Test that Nagle/delayed_ack optimistaion works correctly. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,9 @@ Library ------- +- Issue #16723: httplib.HTTPResponse no longer marked closed when the connection + is automatically closed. + - Issue #15359: Add CAN_BCM protocol support to the socket module. Patch by Brian Thorne. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 6 16:06:26 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 6 Feb 2013 16:06:26 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MTQyOiBmaXgg?= =?utf-8?q?apparent_copy_and_paste_error_in_test=5Fall=2E?= Message-ID: <3Z1QwL6FrTzSXK@mail.python.org> http://hg.python.org/cpython/rev/1fc87fa05333 changeset: 82030:1fc87fa05333 branch: 3.2 parent: 82027:6cc5bbfcf04e user: R David Murray date: Wed Feb 06 09:56:19 2013 -0500 summary: #17142: fix apparent copy and paste error in test_all. 
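The fix below is a single word: the test meant to exercise ``any()`` on an iterable whose ``__iter__`` raises, but called ``all()`` a second time instead, so ``any()`` was never actually checked on that input. A minimal sketch of what the corrected assertion verifies (``FailingIter`` here is a stand-in for the suite's ``TestFailingIter``):

```python
class FailingIter:
    # Stand-in for test_builtin's TestFailingIter: iteration itself fails.
    def __iter__(self):
        raise RuntimeError("broken iterable")

# Both built-ins must propagate the RuntimeError rather than swallow it.
for func in (any, all):
    try:
        func(FailingIter())
    except RuntimeError:
        pass
    else:
        raise AssertionError(f"{func.__name__} did not propagate the error")
```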
files: Lib/test/test_builtin.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -164,7 +164,7 @@ self.assertEqual(any([None, None, None]), False) self.assertEqual(any([None, 4, None]), True) self.assertRaises(RuntimeError, any, [None, TestFailingBool(), 6]) - self.assertRaises(RuntimeError, all, TestFailingIter()) + self.assertRaises(RuntimeError, any, TestFailingIter()) self.assertRaises(TypeError, any, 10) # Non-iterable self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 6 16:06:28 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 6 Feb 2013 16:06:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge=3A_=2317142=3A_fix_apparent_copy_and_paste_error_in_test?= =?utf-8?b?X2FsbC4=?= Message-ID: <3Z1QwN1qn3zSXB@mail.python.org> http://hg.python.org/cpython/rev/4db932a303b4 changeset: 82031:4db932a303b4 branch: 3.3 parent: 82028:0461ed77ee4e parent: 82030:1fc87fa05333 user: R David Murray date: Wed Feb 06 09:57:51 2013 -0500 summary: Merge: #17142: fix apparent copy and paste error in test_all. 
files: Lib/test/test_builtin.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -189,7 +189,7 @@ self.assertEqual(any([None, None, None]), False) self.assertEqual(any([None, 4, None]), True) self.assertRaises(RuntimeError, any, [None, TestFailingBool(), 6]) - self.assertRaises(RuntimeError, all, TestFailingIter()) + self.assertRaises(RuntimeError, any, TestFailingIter()) self.assertRaises(TypeError, any, 10) # Non-iterable self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 6 16:06:29 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 6 Feb 2013 16:06:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=2317142=3A_fix_apparent_copy_and_paste_error_i?= =?utf-8?q?n_test=5Fall=2E?= Message-ID: <3Z1QwP4V1TzSWF@mail.python.org> http://hg.python.org/cpython/rev/acdb0da0df2b changeset: 82032:acdb0da0df2b parent: 82029:5f8c68281d18 parent: 82031:4db932a303b4 user: R David Murray date: Wed Feb 06 10:05:56 2013 -0500 summary: Merge: #17142: fix apparent copy and paste error in test_all. 
files: Lib/test/test_builtin.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -189,7 +189,7 @@ self.assertEqual(any([None, None, None]), False) self.assertEqual(any([None, 4, None]), True) self.assertRaises(RuntimeError, any, [None, TestFailingBool(), 6]) - self.assertRaises(RuntimeError, all, TestFailingIter()) + self.assertRaises(RuntimeError, any, TestFailingIter()) self.assertRaises(TypeError, any, 10) # Non-iterable self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 6 16:06:31 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 6 Feb 2013 16:06:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MTQyOiBmaXgg?= =?utf-8?q?apparent_copy_and_paste_error_in_test=5Fall=2E?= Message-ID: <3Z1QwR1KK3zQXF@mail.python.org> http://hg.python.org/cpython/rev/d0cfabed2ef3 changeset: 82033:d0cfabed2ef3 branch: 2.7 parent: 82006:20f0c5398e97 user: R David Murray date: Wed Feb 06 10:06:10 2013 -0500 summary: #17142: fix apparent copy and paste error in test_all. 
files: Lib/test/test_builtin.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -119,7 +119,7 @@ self.assertEqual(any([None, None, None]), False) self.assertEqual(any([None, 4, None]), True) self.assertRaises(RuntimeError, any, [None, TestFailingBool(), 6]) - self.assertRaises(RuntimeError, all, TestFailingIter()) + self.assertRaises(RuntimeError, any, TestFailingIter()) self.assertRaises(TypeError, any, 10) # Non-iterable self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 6 23:37:42 2013 From: python-checkins at python.org (guido.van.rossum) Date: Wed, 6 Feb 2013 23:37:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Link_to_Tulip_repo=2E?= Message-ID: <3Z1cx23FHkzSRJ@mail.python.org> http://hg.python.org/peps/rev/ba64738031da changeset: 4718:ba64738031da user: Guido van Rossum date: Wed Feb 06 14:37:40 2013 -0800 summary: Link to Tulip repo. files: pep-3156.txt | 5 ++++- 1 files changed, 4 insertions(+), 1 deletions(-) diff --git a/pep-3156.txt b/pep-3156.txt --- a/pep-3156.txt +++ b/pep-3156.txt @@ -17,7 +17,8 @@ PEP 3153. The proposal includes a pluggable event loop API, transport and protocol abstractions similar to those in Twisted, and a higher-level scheduler based on ``yield from`` (PEP 380). A reference -implementation is in the works under the code name tulip. +implementation is in the works under the code name Tulip (the Tulip +repo is linked from the References section at the end). Introduction @@ -1027,6 +1028,8 @@ - PEP 3153, while rejected, has a good write-up explaining the need to separate transports and protocols. 
+- Tulip repo: http://code.google.com/p/tulip/ + - Nick Coghlan wrote a nice blog post with some background, thoughts about different approaches to async I/O, gevent, and how to use futures with constructs like ``while``, ``for`` and ``with``: -- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Thu Feb 7 06:01:44 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Thu, 07 Feb 2013 06:01:44 +0100 Subject: [Python-checkins] Daily reference leaks (acdb0da0df2b): sum=1 Message-ID: results for acdb0da0df2b on branch "default" -------------------------------------------- test_concurrent_futures leaked [0, -2, 3] memory blocks, sum=1 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflog2jEKDV', '-x'] From python-checkins at python.org Thu Feb 7 06:57:30 2013 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 7 Feb 2013 06:57:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Minor_tweaks_to_varnames?= =?utf-8?q?=2C_declarations=2C_and_comments=2E?= Message-ID: <3Z1phV3DCXzRV1@mail.python.org> http://hg.python.org/cpython/rev/2f3669aedc9a changeset: 82034:2f3669aedc9a parent: 82032:acdb0da0df2b user: Raymond Hettinger date: Thu Feb 07 00:57:19 2013 -0500 summary: Minor tweaks to varnames, declarations, and comments. files: Modules/_collectionsmodule.c | 15 +++++++-------- 1 files changed, 7 insertions(+), 8 deletions(-) diff --git a/Modules/_collectionsmodule.c b/Modules/_collectionsmodule.c --- a/Modules/_collectionsmodule.c +++ b/Modules/_collectionsmodule.c @@ -9,7 +9,7 @@ /* The block length may be set to any number over 1. Larger numbers * reduce the number of calls to the memory allocator but take more - * memory. Ideally, BLOCKLEN should be set to a multiple of the + * memory. Ideally, (BLOCKLEN+2) should be set to a multiple of the * length of a cache line. 
*/ @@ -71,7 +71,7 @@ return NULL; } if (numfreeblocks) { - numfreeblocks -= 1; + numfreeblocks--; b = freeblocks[numfreeblocks]; } else { b = PyMem_Malloc(sizeof(block)); @@ -414,7 +414,6 @@ _deque_rotate(dequeobject *deque, Py_ssize_t n) { Py_ssize_t m, len=deque->len, halflen=len>>1; - block *prevblock; if (len <= 1) return 0; @@ -455,8 +454,8 @@ n -= m; if (deque->rightindex == -1) { + block *prevblock = deque->rightblock->leftlink; assert(deque->rightblock != NULL); - prevblock = deque->rightblock->leftlink; assert(deque->leftblock != deque->rightblock); freeblock(deque->rightblock); prevblock->rightlink = NULL; @@ -490,12 +489,12 @@ n += m; if (deque->leftindex == BLOCKLEN) { + block *nextblock = deque->leftblock->rightlink; assert(deque->leftblock != deque->rightblock); - prevblock = deque->leftblock->rightlink; freeblock(deque->leftblock); - assert(prevblock != NULL); - prevblock->leftlink = NULL; - deque->leftblock = prevblock; + assert(nextblock != NULL); + nextblock->leftlink = NULL; + deque->leftblock = nextblock; deque->leftindex = 0; } } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 09:49:54 2013 From: python-checkins at python.org (senthil.kumaran) Date: Thu, 7 Feb 2013 09:49:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Fix_Issue17069?= =?utf-8?q?=3A_Document_getcode_method_in_urllib=2Erequest=2Erst?= Message-ID: <3Z1tWQ1FnPzMRY@mail.python.org> http://hg.python.org/cpython/rev/fae8e212e870 changeset: 82035:fae8e212e870 branch: 3.2 parent: 82030:1fc87fa05333 user: Senthil Kumaran date: Thu Feb 07 00:47:01 2013 -0800 summary: Fix Issue17069: Document getcode method in urllib.request.rst files: Doc/library/urllib.request.rst | 16 ++++++++++++---- 1 files changed, 12 insertions(+), 4 deletions(-) diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -57,16 +57,24 @@ If neither 
*cafile* nor *capath* is specified, an HTTPS request will not do any verification of the server's certificate. - This function returns a file-like object that works as a :term:`context manager`, - with two additional methods from the :mod:`urllib.response` module + For http and https urls, this function returns a + :class:`http.client.HTTPResponse` object which has the following + :ref:`httpresponse-objects` methods. - * :meth:`geturl` --- return the URL of the resource retrieved, + For ftp, file, and data urls and requests explicitly handled by the legacy + :class:`URLopener` and :class:`FancyURLopener` classes, this function returns + an :class:`urllib.response.addinfourl` object which can work as a + :term:`context manager` and has methods such as + + * :meth:`~urllib.response.addinfourl.geturl` --- return the URL of the resource retrieved, commonly used to determine if a redirect was followed - * :meth:`info` --- return the meta-information of the page, such as headers, + * :meth:`~urllib.response.addinfourl.info` --- return the meta-information of the page, such as headers, in the form of an :func:`email.message_from_string` instance (see `Quick Reference to HTTP Headers `_) + * :meth:`~urllib.response.addinfourl.getcode` --- return the HTTP status code of the response. + Raises :exc:`URLError` on errors.
Note that ``None`` may be returned if no handler handles the request (though -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 09:49:55 2013 From: python-checkins at python.org (senthil.kumaran) Date: Thu, 7 Feb 2013 09:49:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fix_Issue17069=3A_Document_getcode_method_in_urllib=2Erequest?= =?utf-8?q?=2Erst?= Message-ID: <3Z1tWR3zRpzSZQ@mail.python.org> http://hg.python.org/cpython/rev/e15d2ad42d93 changeset: 82036:e15d2ad42d93 branch: 3.3 parent: 82031:4db932a303b4 parent: 82035:fae8e212e870 user: Senthil Kumaran date: Thu Feb 07 00:49:12 2013 -0800 summary: Fix Issue17069: Document getcode method in urllib.request.rst files: Doc/library/urllib.request.rst | 16 ++++++++++++---- 1 files changed, 12 insertions(+), 4 deletions(-) diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -63,16 +63,24 @@ an HTTPS request will not do any verification of the server's certificate. - This function returns a file-like object that works as a :term:`context manager`, - with two additional methods from the :mod:`urllib.response` module + For http and https urls, this function returns a + :class:`http.client.HTTPResponse` object which has the following + :ref:`httpresponse-objects` methods. 
- * :meth:`geturl` --- return the URL of the resource retrieved, + For ftp, file, and data urls and requests explicitly handled by the legacy + :class:`URLopener` and :class:`FancyURLopener` classes, this function returns + an :class:`urllib.response.addinfourl` object which can work as a + :term:`context manager` and has methods such as + + * :meth:`~urllib.response.addinfourl.geturl` --- return the URL of the resource retrieved, commonly used to determine if a redirect was followed - * :meth:`info` --- return the meta-information of the page, such as headers, + * :meth:`~urllib.response.addinfourl.info` --- return the meta-information of the page, such as headers, in the form of an :func:`email.message_from_string` instance (see `Quick Reference to HTTP Headers `_) + * :meth:`~urllib.response.addinfourl.getcode` --- return the HTTP status code of the response. + Raises :exc:`URLError` on errors. Note that ``None`` may be returned if no handler handles the request (though
server's certificate. - This function returns a file-like object that works as a :term:`context manager`, - with two additional methods from the :mod:`urllib.response` module + For http and https urls, this function returns a + :class:`http.client.HTTPResponse` object which has the following + :ref:`httpresponse-objects` methods. - * :meth:`geturl` --- return the URL of the resource retrieved, + For ftp, file, and data urls and requests explicitly handled by the legacy + :class:`URLopener` and :class:`FancyURLopener` classes, this function returns + an :class:`urllib.response.addinfourl` object which can work as a + :term:`context manager` and has methods such as + + * :meth:`~urllib.response.addinfourl.geturl` --- return the URL of the resource retrieved, commonly used to determine if a redirect was followed - * :meth:`info` --- return the meta-information of the page, such as headers, + * :meth:`~urllib.response.addinfourl.info` --- return the meta-information of the page, such as headers, in the form of an :func:`email.message_from_string` instance (see `Quick Reference to HTTP Headers `_) + * :meth:`~urllib.response.addinfourl.getcode` --- return the HTTP status code of the response. + Raises :exc:`URLError` on errors.
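The methods documented in these Issue #17069 changes can be exercised offline, since ``urllib.response.addinfourl`` is a thin wrapper around a file object. A small sketch (the URL, headers, and body below are made up for illustration):

```python
import io
from email.message import Message
from urllib.response import addinfourl

headers = Message()
headers["Content-Type"] = "text/plain"

# Wrap an in-memory body the way the legacy openers wrap a real transfer.
resp = addinfourl(io.BytesIO(b"hello"), headers, "http://example.com/", code=200)

assert resp.getcode() == 200                        # HTTP status code
assert resp.geturl() == "http://example.com/"       # URL actually retrieved
assert resp.info()["Content-Type"] == "text/plain"  # response headers
assert resp.read() == b"hello"

resp.close()
```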
Note that ``None`` may be returned if no handler handles the request (though -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 09:49:58 2013 From: python-checkins at python.org (senthil.kumaran) Date: Thu, 7 Feb 2013 09:49:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_Issue17069?= =?utf-8?q?=3A_Document_getcode_method_in_urllib=2Erequest=2Erst?= Message-ID: <3Z1tWV2WsdzSbr@mail.python.org> http://hg.python.org/cpython/rev/5630f0aff6ac changeset: 82038:5630f0aff6ac branch: 2.7 parent: 82033:d0cfabed2ef3 user: Senthil Kumaran date: Thu Feb 07 00:51:34 2013 -0800 summary: Fix Issue17069: Document getcode method in urllib.request.rst files: Doc/library/urllib2.rst | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Doc/library/urllib2.rst b/Doc/library/urllib2.rst --- a/Doc/library/urllib2.rst +++ b/Doc/library/urllib2.rst @@ -52,6 +52,8 @@ in the form of an :class:`mimetools.Message` instance (see `Quick Reference to HTTP Headers `_) + * :meth:`getcode` --- return the HTTP status code of the response. + Raises :exc:`URLError` on errors. Note that ``None`` may be returned if no handler handles the request (though the -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:02:59 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:02:59 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogRml4IHRlc3RfZnJv?= =?utf-8?q?m=5Fdll*_in_test=5Freturnfuncptrs=2Epy=2E?= Message-ID: <3Z207R5LTfzMYT@mail.python.org> http://hg.python.org/cpython/rev/8fb98fb758e8 changeset: 82039:8fb98fb758e8 branch: 2.7 user: Serhiy Storchaka date: Thu Feb 07 14:57:53 2013 +0200 summary: Fix test_from_dll* in test_returnfuncptrs.py. 
files: Lib/ctypes/test/test_returnfuncptrs.py | 9 ++++----- 1 files changed, 4 insertions(+), 5 deletions(-) diff --git a/Lib/ctypes/test/test_returnfuncptrs.py b/Lib/ctypes/test/test_returnfuncptrs.py --- a/Lib/ctypes/test/test_returnfuncptrs.py +++ b/Lib/ctypes/test/test_returnfuncptrs.py @@ -32,31 +32,30 @@ self.assertRaises(ArgumentError, strchr, "abcdef", 3) self.assertRaises(TypeError, strchr, "abcdef") - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') def test_from_dll(self): dll = CDLL(_ctypes_test.__file__) # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("strchr", dll)) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("my_strchr", dll)) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) self.assertRaises(TypeError, strchr, b"abcdef") # Issue 6083: Reference counting bug - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') def test_from_dll_refcount(self): class BadSequence(tuple): def __getitem__(self, key): if key == 0: - return "strchr" + return "my_strchr" if key == 1: return CDLL(_ctypes_test.__file__) raise IndexError # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(BadSequence(("strchr", CDLL(_ctypes_test.__file__)))) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)( + BadSequence(("my_strchr", CDLL(_ctypes_test.__file__)))) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:03:01 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:03:01 +0100 (CET) Subject: 
[Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogRml4IHRlc3RfZnJv?= =?utf-8?q?m=5Fdll*_in_test=5Freturnfuncptrs=2Epy=2E?= Message-ID: <3Z207T0rjPzRNk@mail.python.org> http://hg.python.org/cpython/rev/ec70abe8c886 changeset: 82040:ec70abe8c886 branch: 3.2 parent: 82035:fae8e212e870 user: Serhiy Storchaka date: Thu Feb 07 14:58:44 2013 +0200 summary: Fix test_from_dll* in test_returnfuncptrs.py. files: Lib/ctypes/test/test_returnfuncptrs.py | 9 ++++----- 1 files changed, 4 insertions(+), 5 deletions(-) diff --git a/Lib/ctypes/test/test_returnfuncptrs.py b/Lib/ctypes/test/test_returnfuncptrs.py --- a/Lib/ctypes/test/test_returnfuncptrs.py +++ b/Lib/ctypes/test/test_returnfuncptrs.py @@ -34,31 +34,30 @@ self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) self.assertRaises(TypeError, strchr, b"abcdef") - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') def test_from_dll(self): dll = CDLL(_ctypes_test.__file__) # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("strchr", dll)) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("my_strchr", dll)) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) self.assertRaises(TypeError, strchr, b"abcdef") - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') # Issue 6083: Reference counting bug def test_from_dll_refcount(self): class BadSequence(tuple): def __getitem__(self, key): if key == 0: - return "strchr" + return "my_strchr" if key == 1: return CDLL(_ctypes_test.__file__) raise IndexError # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(BadSequence(("strchr", CDLL(_ctypes_test.__file__)))) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)( + BadSequence(("my_strchr", 
CDLL(_ctypes_test.__file__)))) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:03:02 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:03:02 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fix_test=5Ffrom=5Fdll*_in_test=5Freturnfuncptrs=2Epy=2E?= Message-ID: <3Z207V3dljzSbS@mail.python.org> http://hg.python.org/cpython/rev/e49cc1585966 changeset: 82041:e49cc1585966 branch: 3.3 parent: 82036:e15d2ad42d93 parent: 82040:ec70abe8c886 user: Serhiy Storchaka date: Thu Feb 07 14:59:25 2013 +0200 summary: Fix test_from_dll* in test_returnfuncptrs.py. files: Lib/ctypes/test/test_returnfuncptrs.py | 9 ++++----- 1 files changed, 4 insertions(+), 5 deletions(-) diff --git a/Lib/ctypes/test/test_returnfuncptrs.py b/Lib/ctypes/test/test_returnfuncptrs.py --- a/Lib/ctypes/test/test_returnfuncptrs.py +++ b/Lib/ctypes/test/test_returnfuncptrs.py @@ -34,31 +34,30 @@ self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) self.assertRaises(TypeError, strchr, b"abcdef") - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') def test_from_dll(self): dll = CDLL(_ctypes_test.__file__) # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("strchr", dll)) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("my_strchr", dll)) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) self.assertRaises(TypeError, strchr, b"abcdef") - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') # Issue 6083: Reference counting bug def test_from_dll_refcount(self): class BadSequence(tuple): 
def __getitem__(self, key): if key == 0: - return "strchr" + return "my_strchr" if key == 1: return CDLL(_ctypes_test.__file__) raise IndexError # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(BadSequence(("strchr", CDLL(_ctypes_test.__file__)))) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)( + BadSequence(("my_strchr", CDLL(_ctypes_test.__file__)))) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:03:03 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:03:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Fix_test=5Ffrom=5Fdll*_in_test=5Freturnfuncptrs=2Epy=2E?= Message-ID: <3Z207W6XNNzSc3@mail.python.org> http://hg.python.org/cpython/rev/3236ebe7dd82 changeset: 82042:3236ebe7dd82 parent: 82037:b79df3e8a9a0 parent: 82041:e49cc1585966 user: Serhiy Storchaka date: Thu Feb 07 15:00:02 2013 +0200 summary: Fix test_from_dll* in test_returnfuncptrs.py. 
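The fix above drops the Windows skip and renames the looked-up symbol to ``my_strchr``, a helper that the ``_ctypes_test`` extension exports itself. The tuple-call form these tests exercise — a ``CFUNCTYPE`` prototype called with a ``(name, dll)`` pair — can be sketched as follows (a minimal example, assuming the ``_ctypes_test`` module that ships with CPython is importable):

```python
import _ctypes_test
from ctypes import CDLL, CFUNCTYPE, c_char, c_char_p

# A _CFuncPtr prototype is callable with a (name, dll) tuple, which
# resolves the named symbol in the given shared library.
dll = CDLL(_ctypes_test.__file__)
strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("my_strchr", dll))

print(strchr(b"abcdef", b"b"))  # b'bcdef'
print(strchr(b"abcdef", b"x"))  # None (NULL maps to None for c_char_p)
```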
files: Lib/ctypes/test/test_returnfuncptrs.py | 9 ++++----- 1 files changed, 4 insertions(+), 5 deletions(-) diff --git a/Lib/ctypes/test/test_returnfuncptrs.py b/Lib/ctypes/test/test_returnfuncptrs.py --- a/Lib/ctypes/test/test_returnfuncptrs.py +++ b/Lib/ctypes/test/test_returnfuncptrs.py @@ -34,31 +34,30 @@ self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) self.assertRaises(TypeError, strchr, b"abcdef") - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') def test_from_dll(self): dll = CDLL(_ctypes_test.__file__) # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("strchr", dll)) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(("my_strchr", dll)) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) self.assertRaises(TypeError, strchr, b"abcdef") - @unittest.skipIf(os.name == 'nt', 'Temporarily disabled for Windows') # Issue 6083: Reference counting bug def test_from_dll_refcount(self): class BadSequence(tuple): def __getitem__(self, key): if key == 0: - return "strchr" + return "my_strchr" if key == 1: return CDLL(_ctypes_test.__file__) raise IndexError # _CFuncPtr instances are now callable with a tuple argument # which denotes a function name and a dll: - strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)(BadSequence(("strchr", CDLL(_ctypes_test.__file__)))) + strchr = CFUNCTYPE(c_char_p, c_char_p, c_char)( + BadSequence(("my_strchr", CDLL(_ctypes_test.__file__)))) self.assertTrue(strchr(b"abcdef", b"b"), "bcdef") self.assertEqual(strchr(b"abcdef", b"x"), None) self.assertRaises(ArgumentError, strchr, b"abcdef", 3.0) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:27:15 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:27:15 +0100 (CET) Subject: 
[Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MTE0?= =?utf-8?q?=3A_IDLE=C2=A0now_uses_non-strict_config_parser=2E?= Message-ID: <3Z20gR0cdSzSMW@mail.python.org> http://hg.python.org/cpython/rev/cf98766f464e changeset: 82043:cf98766f464e branch: 3.2 parent: 82040:ec70abe8c886 user: Serhiy Storchaka date: Thu Feb 07 15:24:36 2013 +0200 summary: Issue #17114: IDLE now uses non-strict config parser. files: Lib/idlelib/configHandler.py | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Lib/idlelib/configHandler.py b/Lib/idlelib/configHandler.py --- a/Lib/idlelib/configHandler.py +++ b/Lib/idlelib/configHandler.py @@ -37,7 +37,7 @@ cfgFile - string, fully specified configuration file name """ self.file=cfgFile - ConfigParser.__init__(self,defaults=cfgDefaults) + ConfigParser.__init__(self, defaults=cfgDefaults, strict=False) def Get(self, section, option, type=None, default=None, raw=False): """ diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -212,6 +212,8 @@ Library ------- +- Issue #17114: IDLE now uses non-strict config parser. + - Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:27:16 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:27:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317114=3A_IDLE=C2=A0now_uses_non-strict_config_parser?= =?utf-8?q?=2E?= Message-ID: <3Z20gS3cJvzSVb@mail.python.org> http://hg.python.org/cpython/rev/c2ed79fbb9c6 changeset: 82044:c2ed79fbb9c6 branch: 3.3 parent: 82041:e49cc1585966 parent: 82043:cf98766f464e user: Serhiy Storchaka date: Thu Feb 07 15:25:09 2013 +0200 summary: Issue #17114: IDLE now uses non-strict config parser.
files: Lib/idlelib/configHandler.py | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Lib/idlelib/configHandler.py b/Lib/idlelib/configHandler.py --- a/Lib/idlelib/configHandler.py +++ b/Lib/idlelib/configHandler.py @@ -37,7 +37,7 @@ cfgFile - string, fully specified configuration file name """ self.file=cfgFile - ConfigParser.__init__(self,defaults=cfgDefaults) + ConfigParser.__init__(self, defaults=cfgDefaults, strict=False) def Get(self, section, option, type=None, default=None, raw=False): """ diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -163,6 +163,8 @@ Library ------- +- Issue #17114: IDLE now uses non-strict config parser. + - Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:27:17 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:27:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317114=3A_IDLE=C2=A0now_uses_non-strict_config_p?= =?utf-8?q?arser=2E?= Message-ID: <3Z20gT6mQ2zSbS@mail.python.org> http://hg.python.org/cpython/rev/877fae8d6f5b changeset: 82045:877fae8d6f5b parent: 82042:3236ebe7dd82 parent: 82044:c2ed79fbb9c6 user: Serhiy Storchaka date: Thu Feb 07 15:25:33 2013 +0200 summary: Issue #17114: IDLE now uses non-strict config parser.
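The ``strict=False`` argument passed to ``ConfigParser.__init__`` above makes the parser tolerate duplicate sections and options instead of raising. A small illustration of the difference (the ``[colors]`` text is a made-up example, not an actual IDLE config file):

```python
from configparser import ConfigParser, DuplicateOptionError

# Hypothetical config text with a duplicate option, the kind of input
# a hand-edited config file can contain.
text = "[colors]\nforeground = black\nforeground = blue\n"

lenient = ConfigParser(strict=False)
lenient.read_string(text)
print(lenient["colors"]["foreground"])  # the last value wins: 'blue'

strict = ConfigParser()  # strict=True is the default since Python 3.2
try:
    strict.read_string(text)
except DuplicateOptionError as exc:
    print("strict parser rejects it:", exc)
```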
files: Lib/idlelib/configHandler.py | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Lib/idlelib/configHandler.py b/Lib/idlelib/configHandler.py --- a/Lib/idlelib/configHandler.py +++ b/Lib/idlelib/configHandler.py @@ -37,7 +37,7 @@ cfgFile - string, fully specified configuration file name """ self.file=cfgFile - ConfigParser.__init__(self,defaults=cfgDefaults) + ConfigParser.__init__(self, defaults=cfgDefaults, strict=False) def Get(self, section, option, type=None, default=None, raw=False): """ diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -235,6 +235,8 @@ Library ------- +- Issue #17114: IDLE now uses non-strict config parser. + - Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:42:49 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:42:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MTE4?= =?utf-8?q?=3A_Add_new_tests_for_testing_Python-Tcl_interaction=2E?= Message-ID: <3Z211P4N6rzSXk@mail.python.org> http://hg.python.org/cpython/rev/f7cc6fbd7ae1 changeset: 82046:f7cc6fbd7ae1 branch: 2.7 parent: 82039:8fb98fb758e8 user: Serhiy Storchaka date: Thu Feb 07 15:37:53 2013 +0200 summary: Issue #17118: Add new tests for testing Python-Tcl interaction.
files: Lib/test/test_tcl.py | 22 ++++++++++++++++++++++ 1 files changed, 22 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_tcl.py b/Lib/test/test_tcl.py --- a/Lib/test/test_tcl.py +++ b/Lib/test/test_tcl.py @@ -1,6 +1,7 @@ #!/usr/bin/env python import unittest +import sys import os from test import test_support @@ -151,6 +152,27 @@ # exit code must be zero self.assertEqual(f.close(), None) + def test_passing_values(self): + def passValue(value): + return self.interp.call('set', '_', value) + self.assertEqual(passValue(True), True) + self.assertEqual(passValue(False), False) + self.assertEqual(passValue('string'), 'string') + self.assertEqual(passValue('string\u20ac'), 'string\u20ac') + self.assertEqual(passValue(u'string'), u'string') + self.assertEqual(passValue(u'string\u20ac'), u'string\u20ac') + for i in (0, 1, -1, int(2**31-1), int(-2**31)): + self.assertEqual(passValue(i), i) + for f in (0.0, 1.0, -1.0, 1/3, + sys.float_info.min, sys.float_info.max, + -sys.float_info.min, -sys.float_info.max): + self.assertEqual(passValue(f), f) + for f in float('nan'), float('inf'), -float('inf'): + if f != f: # NaN + self.assertNotEqual(passValue(f), f) + else: + self.assertEqual(passValue(f), f) + self.assertEqual(passValue((1, '2', (3.4,))), (1, '2', (3.4,))) def test_main(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:42:51 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:42:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MTE4?= =?utf-8?q?=3A_Add_new_tests_for_testing_Python-Tcl_interaction=2E?= Message-ID: <3Z211R05B2zSbY@mail.python.org> http://hg.python.org/cpython/rev/148e6ebfe854 changeset: 82047:148e6ebfe854 branch: 3.2 parent: 82043:cf98766f464e user: Serhiy Storchaka date: Thu Feb 07 15:40:03 2013 +0200 summary: Issue #17118: Add new tests for testing Python-Tcl interaction. 
files: Lib/test/test_tcl.py | 20 ++++++++++++++++++++ 1 files changed, 20 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_tcl.py b/Lib/test/test_tcl.py --- a/Lib/test/test_tcl.py +++ b/Lib/test/test_tcl.py @@ -151,6 +151,26 @@ # exit code must be zero self.assertEqual(f.close(), None) + def test_passing_values(self): + def passValue(value): + return self.interp.call('set', '_', value) + + self.assertEqual(passValue(True), True) + self.assertEqual(passValue(False), False) + self.assertEqual(passValue('string'), 'string') + self.assertEqual(passValue('string\u20ac'), 'string\u20ac') + for i in (0, 1, -1, 2**31-1, -2**31): + self.assertEqual(passValue(i), i) + for f in (0.0, 1.0, -1.0, 1/3, + sys.float_info.min, sys.float_info.max, + -sys.float_info.min, -sys.float_info.max): + self.assertEqual(passValue(f), f) + for f in float('nan'), float('inf'), -float('inf'): + if f != f: # NaN + self.assertNotEqual(passValue(f), f) + else: + self.assertEqual(passValue(f), f) + self.assertEqual(passValue((1, '2', (3.4,))), (1, '2', (3.4,))) def test_main(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:42:52 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:42:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317118=3A_Add_new_tests_for_testing_Python-Tcl_interac?= =?utf-8?q?tion=2E?= Message-ID: <3Z211S2gG7zScd@mail.python.org> http://hg.python.org/cpython/rev/452344620c97 changeset: 82048:452344620c97 branch: 3.3 parent: 82044:c2ed79fbb9c6 parent: 82047:148e6ebfe854 user: Serhiy Storchaka date: Thu Feb 07 15:40:26 2013 +0200 summary: Issue #17118: Add new tests for testing Python-Tcl interaction. 
files: Lib/test/test_tcl.py | 20 ++++++++++++++++++++ 1 files changed, 20 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_tcl.py b/Lib/test/test_tcl.py --- a/Lib/test/test_tcl.py +++ b/Lib/test/test_tcl.py @@ -151,6 +151,26 @@ # exit code must be zero self.assertEqual(f.close(), None) + def test_passing_values(self): + def passValue(value): + return self.interp.call('set', '_', value) + + self.assertEqual(passValue(True), True) + self.assertEqual(passValue(False), False) + self.assertEqual(passValue('string'), 'string') + self.assertEqual(passValue('string\u20ac'), 'string\u20ac') + for i in (0, 1, -1, 2**31-1, -2**31): + self.assertEqual(passValue(i), i) + for f in (0.0, 1.0, -1.0, 1/3, + sys.float_info.min, sys.float_info.max, + -sys.float_info.min, -sys.float_info.max): + self.assertEqual(passValue(f), f) + for f in float('nan'), float('inf'), -float('inf'): + if f != f: # NaN + self.assertNotEqual(passValue(f), f) + else: + self.assertEqual(passValue(f), f) + self.assertEqual(passValue((1, '2', (3.4,))), (1, '2', (3.4,))) def test_main(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 14:42:53 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 14:42:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317118=3A_Add_new_tests_for_testing_Python-Tcl_i?= =?utf-8?q?nteraction=2E?= Message-ID: <3Z211T5JzFzScM@mail.python.org> http://hg.python.org/cpython/rev/f0d603948cff changeset: 82049:f0d603948cff parent: 82045:877fae8d6f5b parent: 82048:452344620c97 user: Serhiy Storchaka date: Thu Feb 07 15:40:48 2013 +0200 summary: Issue #17118: Add new tests for testing Python-Tcl interaction. 
files: Lib/test/test_tcl.py | 20 ++++++++++++++++++++ 1 files changed, 20 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_tcl.py b/Lib/test/test_tcl.py --- a/Lib/test/test_tcl.py +++ b/Lib/test/test_tcl.py @@ -151,6 +151,26 @@ # exit code must be zero self.assertEqual(f.close(), None) + def test_passing_values(self): + def passValue(value): + return self.interp.call('set', '_', value) + + self.assertEqual(passValue(True), True) + self.assertEqual(passValue(False), False) + self.assertEqual(passValue('string'), 'string') + self.assertEqual(passValue('string\u20ac'), 'string\u20ac') + for i in (0, 1, -1, 2**31-1, -2**31): + self.assertEqual(passValue(i), i) + for f in (0.0, 1.0, -1.0, 1/3, + sys.float_info.min, sys.float_info.max, + -sys.float_info.min, -sys.float_info.max): + self.assertEqual(passValue(f), f) + for f in float('nan'), float('inf'), -float('inf'): + if f != f: # NaN + self.assertNotEqual(passValue(f), f) + else: + self.assertEqual(passValue(f), f) + self.assertEqual(passValue((1, '2', (3.4,))), (1, '2', (3.4,))) def test_main(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 15:30:48 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 15:30:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MDQz?= =?utf-8?q?=3A_The_unicode-internal_decoder_no_longer_read_past_the_end_of?= Message-ID: <3Z224m2NzqzSZ4@mail.python.org> http://hg.python.org/cpython/rev/498b54e0e856 changeset: 82050:498b54e0e856 branch: 2.7 parent: 82046:f7cc6fbd7ae1 user: Serhiy Storchaka date: Thu Feb 07 16:23:11 2013 +0200 summary: Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. 
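The ``if f != f:`` branch in ``test_passing_values`` above is the standard IEEE 754 self-inequality test for NaN: NaN never compares equal to anything, including itself, so the round-tripped value cannot be checked with ``assertEqual``. A quick illustration of why the tests special-case it:

```python
import math

for value in (0.0, 1 / 3, float("inf"), float("nan")):
    # NaN is the only float that compares unequal to itself, so
    # `value != value` is equivalent to math.isnan(value).
    assert (value != value) == math.isnan(value)

print(float("nan") == float("nan"))  # False
```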
files: Misc/NEWS | 3 + Objects/unicodeobject.c | 51 +++++++++++++--------------- 2 files changed, 27 insertions(+), 27 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -9,6 +9,9 @@ Core and Builtins ----------------- +- Issue #17043: The unicode-internal decoder no longer read past the end of + input buffer. + - Issue #16979: Fix error handling bugs in the unicode-escape-decode decoder. - Issue #10156: In the interpreter's initialization phase, unicode globals diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -3376,37 +3376,34 @@ end = s + size; while (s < end) { + if (end-s < Py_UNICODE_SIZE) { + endinpos = end-starts; + reason = "truncated input"; + goto error; + } memcpy(p, s, sizeof(Py_UNICODE)); +#ifdef Py_UNICODE_WIDE /* We have to sanity check the raw data, otherwise doom looms for some malformed UCS-4 data. */ - if ( -#ifdef Py_UNICODE_WIDE - *p > unimax || *p < 0 || + if (*p > unimax || *p < 0) { + endinpos = s - starts + Py_UNICODE_SIZE; + reason = "illegal code point (> 0x10FFFF)"; + goto error; + } #endif - end-s < Py_UNICODE_SIZE - ) - { - startinpos = s - starts; - if (end-s < Py_UNICODE_SIZE) { - endinpos = end-starts; - reason = "truncated input"; - } - else { - endinpos = s - starts + Py_UNICODE_SIZE; - reason = "illegal code point (> 0x10FFFF)"; - } - outpos = p - PyUnicode_AS_UNICODE(v); - if (unicode_decode_call_errorhandler( - errors, &errorHandler, - "unicode_internal", reason, - starts, size, &startinpos, &endinpos, &exc, &s, - &v, &outpos, &p)) { - goto onError; - } - } - else { - p++; - s += Py_UNICODE_SIZE; + p++; + s += Py_UNICODE_SIZE; + continue; + + error: + startinpos = s - starts; + outpos = p - PyUnicode_AS_UNICODE(v); + if (unicode_decode_call_errorhandler( + errors, &errorHandler, + "unicode_internal", reason, + starts, size, &startinpos, &endinpos, &exc, &s, + &v, &outpos, &p)) { + goto onError; } } -- Repository URL: 
http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 15:30:49 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 15:30:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MDQz?= =?utf-8?q?=3A_The_unicode-internal_decoder_no_longer_read_past_the_end_of?= Message-ID: <3Z224n683XzSZV@mail.python.org> http://hg.python.org/cpython/rev/0f1c2e2b6bc2 changeset: 82051:0f1c2e2b6bc2 branch: 3.2 parent: 82047:148e6ebfe854 user: Serhiy Storchaka date: Thu Feb 07 16:23:21 2013 +0200 summary: Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. files: Misc/NEWS | 3 + Objects/unicodeobject.c | 51 +++++++++++++--------------- 2 files changed, 27 insertions(+), 27 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17043: The unicode-internal decoder no longer read past the end of + input buffer. + - Issue #16979: Fix error handling bugs in the unicode-escape-decode decoder. - Issue #10156: In the interpreter's initialization phase, unicode globals diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -4392,37 +4392,34 @@ end = s + size; while (s < end) { + if (end-s < Py_UNICODE_SIZE) { + endinpos = end-starts; + reason = "truncated input"; + goto error; + } memcpy(p, s, sizeof(Py_UNICODE)); +#ifdef Py_UNICODE_WIDE /* We have to sanity check the raw data, otherwise doom looms for some malformed UCS-4 data. 
*/ - if ( -#ifdef Py_UNICODE_WIDE - *p > unimax || *p < 0 || + if (*p > unimax || *p < 0) { + endinpos = s - starts + Py_UNICODE_SIZE; + reason = "illegal code point (> 0x10FFFF)"; + goto error; + } #endif - end-s < Py_UNICODE_SIZE - ) - { - startinpos = s - starts; - if (end-s < Py_UNICODE_SIZE) { - endinpos = end-starts; - reason = "truncated input"; - } - else { - endinpos = s - starts + Py_UNICODE_SIZE; - reason = "illegal code point (> 0x10FFFF)"; - } - outpos = p - PyUnicode_AS_UNICODE(v); - if (unicode_decode_call_errorhandler( - errors, &errorHandler, - "unicode_internal", reason, - &starts, &end, &startinpos, &endinpos, &exc, &s, - &v, &outpos, &p)) { - goto onError; - } - } - else { - p++; - s += Py_UNICODE_SIZE; + p++; + s += Py_UNICODE_SIZE; + continue; + + error: + startinpos = s - starts; + outpos = p - PyUnicode_AS_UNICODE(v); + if (unicode_decode_call_errorhandler( + errors, &errorHandler, + "unicode_internal", reason, + &starts, &end, &startinpos, &endinpos, &exc, &s, + &v, &outpos, &p)) { + goto onError; } } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 15:30:51 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 15:30:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317043=3A_The_unicode-internal_decoder_no_longer_read_?= =?utf-8?q?past_the_end_of?= Message-ID: <3Z224q2dLhzScS@mail.python.org> http://hg.python.org/cpython/rev/fec2976c8503 changeset: 82052:fec2976c8503 branch: 3.3 parent: 82048:452344620c97 parent: 82051:0f1c2e2b6bc2 user: Serhiy Storchaka date: Thu Feb 07 16:25:25 2013 +0200 summary: Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. 
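The patch above moves the ``end-s < Py_UNICODE_SIZE`` check before the ``memcpy``, so a trailing partial code unit is reported as "truncated input" instead of being read past the end of the buffer. The same truncated-input handling is observable from Python with any fixed-width codec; here is an analogous sketch using UTF-32-LE (the ``unicode_internal`` codec itself was later deprecated):

```python
# Four bytes encode "A"; the two trailing bytes are a truncated unit.
data = b"\x41\x00\x00\x00\x42\x00"

try:
    data.decode("utf-32-le")
except UnicodeDecodeError as exc:
    print(exc.reason)  # the decoder reports the partial unit

# An error handler consumes the partial unit instead of reading past it.
print(data.decode("utf-32-le", "replace"))  # 'A\ufffd'
```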
files: Misc/NEWS | 3 + Objects/unicodeobject.c | 50 +++++++++++++--------------- 2 files changed, 26 insertions(+), 27 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #17043: The unicode-internal decoder no longer read past the end of + input buffer. + - Issue #17098: All modules now have __loader__ set even if they pre-exist the bootstrapping of importlib. diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -6103,6 +6103,11 @@ while (s < end) { Py_UNICODE uch; Py_UCS4 ch; + if (end - s < Py_UNICODE_SIZE) { + endinpos = end-starts; + reason = "truncated input"; + goto error; + } /* We copy the raw representation one byte at a time because the pointer may be unaligned (see test_codeccallbacks). */ ((char *) &uch)[0] = s[0]; @@ -6112,37 +6117,18 @@ ((char *) &uch)[3] = s[3]; #endif ch = uch; - +#ifdef Py_UNICODE_WIDE /* We have to sanity check the raw data, otherwise doom looms for some malformed UCS-4 data. 
*/ - if ( -#ifdef Py_UNICODE_WIDE - ch > 0x10ffff || -#endif - end-s < Py_UNICODE_SIZE - ) - { - startinpos = s - starts; - if (end-s < Py_UNICODE_SIZE) { - endinpos = end-starts; - reason = "truncated input"; - } - else { - endinpos = s - starts + Py_UNICODE_SIZE; - reason = "illegal code point (> 0x10FFFF)"; - } - if (unicode_decode_call_errorhandler( - errors, &errorHandler, - "unicode_internal", reason, - &starts, &end, &startinpos, &endinpos, &exc, &s, - &v, &outpos)) - goto onError; - continue; - } - + if (ch > 0x10ffff) { + endinpos = s - starts + Py_UNICODE_SIZE; + reason = "illegal code point (> 0x10FFFF)"; + goto error; + } +#endif s += Py_UNICODE_SIZE; #ifndef Py_UNICODE_WIDE - if (Py_UNICODE_IS_HIGH_SURROGATE(ch) && s < end) + if (Py_UNICODE_IS_HIGH_SURROGATE(ch) && end - s >= Py_UNICODE_SIZE) { Py_UNICODE uch2; ((char *) &uch2)[0] = s[0]; @@ -6157,6 +6143,16 @@ if (unicode_putchar(&v, &outpos, ch) < 0) goto onError; + continue; + + error: + startinpos = s - starts; + if (unicode_decode_call_errorhandler( + errors, &errorHandler, + "unicode_internal", reason, + &starts, &end, &startinpos, &endinpos, &exc, &s, + &v, &outpos)) + goto onError; } if (unicode_resize(&v, outpos) < 0) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 15:30:52 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 15:30:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317043=3A_The_unicode-internal_decoder_no_longer?= =?utf-8?q?_read_past_the_end_of?= Message-ID: <3Z224r5gpDzScr@mail.python.org> http://hg.python.org/cpython/rev/eb0370d4686c changeset: 82053:eb0370d4686c parent: 82049:f0d603948cff parent: 82052:fec2976c8503 user: Serhiy Storchaka date: Thu Feb 07 16:26:55 2013 +0200 summary: Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. 
files: Misc/NEWS | 3 + Objects/unicodeobject.c | 50 +++++++++++++--------------- 2 files changed, 26 insertions(+), 27 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17043: The unicode-internal decoder no longer read past the end of + input buffer. + - Issue #17098: All modules now have __loader__ set even if they pre-exist the bootstrapping of importlib. diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -5976,6 +5976,11 @@ while (s < end) { Py_UNICODE uch; Py_UCS4 ch; + if (end - s < Py_UNICODE_SIZE) { + endinpos = end-starts; + reason = "truncated input"; + goto error; + } /* We copy the raw representation one byte at a time because the pointer may be unaligned (see test_codeccallbacks). */ ((char *) &uch)[0] = s[0]; @@ -5985,37 +5990,18 @@ ((char *) &uch)[3] = s[3]; #endif ch = uch; - +#ifdef Py_UNICODE_WIDE /* We have to sanity check the raw data, otherwise doom looms for some malformed UCS-4 data. 
*/ - if ( -#ifdef Py_UNICODE_WIDE - ch > 0x10ffff || -#endif - end-s < Py_UNICODE_SIZE - ) - { - startinpos = s - starts; - if (end-s < Py_UNICODE_SIZE) { - endinpos = end-starts; - reason = "truncated input"; - } - else { - endinpos = s - starts + Py_UNICODE_SIZE; - reason = "illegal code point (> 0x10FFFF)"; - } - if (unicode_decode_call_errorhandler_writer( - errors, &errorHandler, - "unicode_internal", reason, - &starts, &end, &startinpos, &endinpos, &exc, &s, - &writer)) - goto onError; - continue; - } - + if (ch > 0x10ffff) { + endinpos = s - starts + Py_UNICODE_SIZE; + reason = "illegal code point (> 0x10FFFF)"; + goto error; + } +#endif s += Py_UNICODE_SIZE; #ifndef Py_UNICODE_WIDE - if (Py_UNICODE_IS_HIGH_SURROGATE(ch) && s < end) + if (Py_UNICODE_IS_HIGH_SURROGATE(ch) && end - s >= Py_UNICODE_SIZE) { Py_UNICODE uch2; ((char *) &uch2)[0] = s[0]; @@ -6032,6 +6018,16 @@ goto onError; PyUnicode_WRITE(writer.kind, writer.data, writer.pos, ch); writer.pos++; + continue; + + error: + startinpos = s - starts; + if (unicode_decode_call_errorhandler_writer( + errors, &errorHandler, + "unicode_internal", reason, + &starts, &end, &startinpos, &endinpos, &exc, &s, + &writer)) + goto onError; } Py_XDECREF(errorHandler); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 16:08:28 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 16:08:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MDcz?= =?utf-8?q?=3A_Fix_some_integer_overflows_in_sqlite3_module=2E?= Message-ID: <3Z22wD508JzSZq@mail.python.org> http://hg.python.org/cpython/rev/649937bb8f1c changeset: 82054:649937bb8f1c branch: 2.7 parent: 82050:498b54e0e856 user: Serhiy Storchaka date: Thu Feb 07 16:59:34 2013 +0200 summary: Issue #17073: Fix some integer overflows in sqlite3 module. 
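Part of this fix routes Python-to-SQLite integer conversion through a range-checked ``_pysqlite_long_as_int64`` helper (visible in the diff below), so values that do not fit in SQLite's signed 64-bit INTEGER raise an error instead of overflowing. The effect can be seen from Python with a minimal sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# 2**63 - 1 is the largest value SQLite can store as an INTEGER.
print(con.execute("select ?", (2**63 - 1,)).fetchone()[0])

try:
    con.execute("select ?", (2**63,)).fetchone()
except OverflowError as exc:
    print("out of range:", exc)
```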
files: Lib/sqlite3/test/hooks.py | 19 ++++ Lib/sqlite3/test/userfunctions.py | 60 ++++++++++-- Misc/NEWS | 2 + Modules/_sqlite/connection.c | 84 ++++++++++-------- Modules/_sqlite/cursor.c | 20 +--- Modules/_sqlite/statement.c | 23 ++-- Modules/_sqlite/util.c | 66 ++++++++++++++ Modules/_sqlite/util.h | 4 + 8 files changed, 203 insertions(+), 75 deletions(-) diff --git a/Lib/sqlite3/test/hooks.py b/Lib/sqlite3/test/hooks.py --- a/Lib/sqlite3/test/hooks.py +++ b/Lib/sqlite3/test/hooks.py @@ -76,6 +76,25 @@ except sqlite.OperationalError, e: self.assertEqual(e.args[0].lower(), "no such collation sequence: mycoll") + def CheckCollationReturnsLargeInteger(self): + def mycoll(x, y): + # reverse order + return -((x > y) - (x < y)) * 2**32 + con = sqlite.connect(":memory:") + con.create_collation("mycoll", mycoll) + sql = """ + select x from ( + select 'a' as x + union + select 'b' as x + union + select 'c' as x + ) order by x collate mycoll + """ + result = con.execute(sql).fetchall() + self.assertEqual(result, [('c',), ('b',), ('a',)], + msg="the expected order was not returned") + def CheckCollationRegisterTwice(self): """ Register two different collation functions under the same name. 
diff --git a/Lib/sqlite3/test/userfunctions.py b/Lib/sqlite3/test/userfunctions.py --- a/Lib/sqlite3/test/userfunctions.py +++ b/Lib/sqlite3/test/userfunctions.py @@ -374,14 +374,15 @@ val = cur.fetchone()[0] self.assertEqual(val, 60) -def authorizer_cb(action, arg1, arg2, dbname, source): - if action != sqlite.SQLITE_SELECT: - return sqlite.SQLITE_DENY - if arg2 == 'c2' or arg1 == 't2': - return sqlite.SQLITE_DENY - return sqlite.SQLITE_OK +class AuthorizerTests(unittest.TestCase): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return sqlite.SQLITE_DENY + if arg2 == 'c2' or arg1 == 't2': + return sqlite.SQLITE_DENY + return sqlite.SQLITE_OK -class AuthorizerTests(unittest.TestCase): def setUp(self): self.con = sqlite.connect(":memory:") self.con.executescript(""" @@ -394,12 +395,12 @@ # For our security test: self.con.execute("select c2 from t2") - self.con.set_authorizer(authorizer_cb) + self.con.set_authorizer(self.authorizer_cb) def tearDown(self): pass - def CheckTableAccess(self): + def test_table_access(self): try: self.con.execute("select * from t2") except sqlite.DatabaseError, e: @@ -408,7 +409,7 @@ return self.fail("should have raised an exception due to missing privileges") - def CheckColumnAccess(self): + def test_column_access(self): try: self.con.execute("select c2 from t1") except sqlite.DatabaseError, e: @@ -417,11 +418,46 @@ return self.fail("should have raised an exception due to missing privileges") +class AuthorizerRaiseExceptionTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + raise ValueError + if arg2 == 'c2' or arg1 == 't2': + raise ValueError + return sqlite.SQLITE_OK + +class AuthorizerIllegalTypeTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return 0.0 + if arg2 == 'c2' or arg1 == 't2': + return 0.0 + 
return sqlite.SQLITE_OK + +class AuthorizerLargeIntegerTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return 2**32 + if arg2 == 'c2' or arg1 == 't2': + return 2**32 + return sqlite.SQLITE_OK + + def suite(): function_suite = unittest.makeSuite(FunctionTests, "Check") aggregate_suite = unittest.makeSuite(AggregateTests, "Check") - authorizer_suite = unittest.makeSuite(AuthorizerTests, "Check") - return unittest.TestSuite((function_suite, aggregate_suite, authorizer_suite)) + authorizer_suite = unittest.makeSuite(AuthorizerTests) + return unittest.TestSuite(( + function_suite, + aggregate_suite, + authorizer_suite, + unittest.makeSuite(AuthorizerRaiseExceptionTests), + unittest.makeSuite(AuthorizerIllegalTypeTests), + unittest.makeSuite(AuthorizerLargeIntegerTests), + )) def test(): runner = unittest.TextTestRunner() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,8 @@ Library ------- +- Issue #17073: Fix some integer overflows in sqlite3 module. + - Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple parses nested mutating sequence. 
diff --git a/Modules/_sqlite/connection.c b/Modules/_sqlite/connection.c --- a/Modules/_sqlite/connection.c +++ b/Modules/_sqlite/connection.c @@ -538,39 +538,40 @@ } } -void _pysqlite_set_result(sqlite3_context* context, PyObject* py_val) +static int +_pysqlite_set_result(sqlite3_context* context, PyObject* py_val) { - const char* buffer; - Py_ssize_t buflen; - PyObject* stringval; - - if ((!py_val) || PyErr_Occurred()) { - sqlite3_result_null(context); - } else if (py_val == Py_None) { + if (py_val == Py_None) { sqlite3_result_null(context); } else if (PyInt_Check(py_val)) { sqlite3_result_int64(context, (sqlite_int64)PyInt_AsLong(py_val)); } else if (PyLong_Check(py_val)) { - sqlite3_result_int64(context, PyLong_AsLongLong(py_val)); + sqlite_int64 value = _pysqlite_long_as_int64(py_val); + if (value == -1 && PyErr_Occurred()) + return -1; + sqlite3_result_int64(context, value); } else if (PyFloat_Check(py_val)) { sqlite3_result_double(context, PyFloat_AsDouble(py_val)); } else if (PyBuffer_Check(py_val)) { + const char* buffer; + Py_ssize_t buflen; if (PyObject_AsCharBuffer(py_val, &buffer, &buflen) != 0) { PyErr_SetString(PyExc_ValueError, "could not convert BLOB to buffer"); - } else { - sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); + return -1; } + sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); } else if (PyString_Check(py_val)) { sqlite3_result_text(context, PyString_AsString(py_val), -1, SQLITE_TRANSIENT); } else if (PyUnicode_Check(py_val)) { - stringval = PyUnicode_AsUTF8String(py_val); - if (stringval) { - sqlite3_result_text(context, PyString_AsString(stringval), -1, SQLITE_TRANSIENT); - Py_DECREF(stringval); - } + PyObject * stringval = PyUnicode_AsUTF8String(py_val); + if (!stringval) + return -1; + sqlite3_result_text(context, PyString_AsString(stringval), -1, SQLITE_TRANSIENT); + Py_DECREF(stringval); } else { - /* TODO: raise error */ + return -1; } + return 0; } PyObject* _pysqlite_build_py_params(sqlite3_context 
*context, int argc, sqlite3_value** argv) @@ -580,7 +581,6 @@ sqlite3_value* cur_value; PyObject* cur_py_value; const char* val_str; - sqlite_int64 val_int; Py_ssize_t buflen; void* raw_buffer; @@ -593,11 +593,7 @@ cur_value = argv[i]; switch (sqlite3_value_type(argv[i])) { case SQLITE_INTEGER: - val_int = sqlite3_value_int64(cur_value); - if(val_int < LONG_MIN || val_int > LONG_MAX) - cur_py_value = PyLong_FromLongLong(val_int); - else - cur_py_value = PyInt_FromLong((long)val_int); + cur_py_value = _pysqlite_long_from_int64(sqlite3_value_int64(cur_value)); break; case SQLITE_FLOAT: cur_py_value = PyFloat_FromDouble(sqlite3_value_double(cur_value)); @@ -648,6 +644,7 @@ PyObject* args; PyObject* py_func; PyObject* py_retval = NULL; + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -663,10 +660,12 @@ Py_DECREF(args); } + ok = 0; if (py_retval) { - _pysqlite_set_result(context, py_retval); + ok = _pysqlite_set_result(context, py_retval) == 0; Py_DECREF(py_retval); - } else { + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { @@ -746,8 +745,9 @@ void _pysqlite_final_callback(sqlite3_context* context) { - PyObject* function_result = NULL; + PyObject* function_result; PyObject** aggregate_instance; + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -764,21 +764,23 @@ } function_result = PyObject_CallMethod(*aggregate_instance, "finalize", ""); - if (!function_result) { + Py_DECREF(*aggregate_instance); + + ok = 0; + if (function_result) { + ok = _pysqlite_set_result(context, function_result) == 0; + Py_DECREF(function_result); + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { PyErr_Clear(); } _sqlite3_result_error(context, "user-defined aggregate's 'finalize' method raised error", -1); - } else { - _pysqlite_set_result(context, function_result); } error: - Py_XDECREF(*aggregate_instance); - Py_XDECREF(function_result); - #ifdef WITH_THREAD PyGILState_Release(threadstate); #endif @@ -935,7 +937,9 
@@ rc = SQLITE_DENY; } else { if (PyInt_Check(ret)) { - rc = (int)PyInt_AsLong(ret); + rc = _PyInt_AsInt(ret); + if (rc == -1 && PyErr_Occurred()) + rc = SQLITE_DENY; } else { rc = SQLITE_DENY; } @@ -967,7 +971,7 @@ } /* abort query if error occurred */ - rc = 1; + rc = 1; } else { rc = (int)PyObject_IsTrue(ret); Py_DECREF(ret); @@ -1337,6 +1341,7 @@ PyGILState_STATE gilstate; #endif PyObject* retval = NULL; + long longval; int result = 0; #ifdef WITH_THREAD gilstate = PyGILState_Ensure(); @@ -1360,10 +1365,17 @@ goto finally; } - result = PyInt_AsLong(retval); - if (PyErr_Occurred()) { + longval = PyLong_AsLongAndOverflow(retval, &result); + if (longval == -1 && PyErr_Occurred()) { + PyErr_Clear(); result = 0; } + else if (!result) { + if (longval > 0) + result = 1; + else if (longval < 0) + result = -1; + } finally: Py_XDECREF(string1); diff --git a/Modules/_sqlite/cursor.c b/Modules/_sqlite/cursor.c --- a/Modules/_sqlite/cursor.c +++ b/Modules/_sqlite/cursor.c @@ -26,14 +26,6 @@ #include "util.h" #include "sqlitecompat.h" -/* used to decide wether to call PyInt_FromLong or PyLong_FromLongLong */ -#ifndef INT32_MIN -#define INT32_MIN (-2147483647 - 1) -#endif -#ifndef INT32_MAX -#define INT32_MAX 2147483647 -#endif - PyObject* pysqlite_cursor_iternext(pysqlite_Cursor* self); static char* errmsg_fetch_across_rollback = "Cursor needed to be reset because of commit/rollback and can no longer be fetched from."; @@ -307,7 +299,6 @@ PyObject* row; PyObject* item = NULL; int coltype; - PY_LONG_LONG intval; PyObject* converter; PyObject* converted; Py_ssize_t nbytes; @@ -366,12 +357,7 @@ Py_INCREF(Py_None); converted = Py_None; } else if (coltype == SQLITE_INTEGER) { - intval = sqlite3_column_int64(self->statement->st, i); - if (intval < INT32_MIN || intval > INT32_MAX) { - converted = PyLong_FromLongLong(intval); - } else { - converted = PyInt_FromLong((long)intval); - } + converted = _pysqlite_long_from_int64(sqlite3_column_int64(self->statement->st, i)); } else if 
(coltype == SQLITE_FLOAT) { converted = PyFloat_FromDouble(sqlite3_column_double(self->statement->st, i)); } else if (coltype == SQLITE_TEXT) { @@ -466,7 +452,6 @@ PyObject* func_args; PyObject* result; int numcols; - PY_LONG_LONG lastrowid; int statement_type; PyObject* descriptor; PyObject* second_argument = NULL; @@ -747,10 +732,11 @@ Py_DECREF(self->lastrowid); if (!multiple && statement_type == STATEMENT_INSERT) { + sqlite3_int64 lastrowid; Py_BEGIN_ALLOW_THREADS lastrowid = sqlite3_last_insert_rowid(self->connection->db); Py_END_ALLOW_THREADS - self->lastrowid = PyInt_FromLong((long)lastrowid); + self->lastrowid = _pysqlite_long_from_int64(lastrowid); } else { Py_INCREF(Py_None); self->lastrowid = Py_None; diff --git a/Modules/_sqlite/statement.c b/Modules/_sqlite/statement.c --- a/Modules/_sqlite/statement.c +++ b/Modules/_sqlite/statement.c @@ -26,6 +26,7 @@ #include "connection.h" #include "microprotocols.h" #include "prepare_protocol.h" +#include "util.h" #include "sqlitecompat.h" /* prototypes */ @@ -101,8 +102,6 @@ int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter, int allow_8bit_chars) { int rc = SQLITE_OK; - long longval; - PY_LONG_LONG longlongval; const char* buffer; char* string; Py_ssize_t buflen; @@ -153,15 +152,19 @@ } switch (paramtype) { - case TYPE_INT: - longval = PyInt_AsLong(parameter); - rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longval); + case TYPE_INT: { + long longval = PyInt_AsLong(parameter); + rc = sqlite3_bind_int64(self->st, pos, longval); break; - case TYPE_LONG: - longlongval = PyLong_AsLongLong(parameter); - /* in the overflow error case, longlongval is -1, and an exception is set */ - rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longlongval); + } + case TYPE_LONG: { + sqlite_int64 value = _pysqlite_long_as_int64(parameter); + if (value == -1 && PyErr_Occurred()) + rc = -1; + else + rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)value); break; + } case 
TYPE_FLOAT: rc = sqlite3_bind_double(self->st, pos, PyFloat_AsDouble(parameter)); break; @@ -198,7 +201,7 @@ return 1; } - if (PyInt_CheckExact(obj) || PyLong_CheckExact(obj) + if (PyInt_CheckExact(obj) || PyLong_CheckExact(obj) || PyFloat_CheckExact(obj) || PyString_CheckExact(obj) || PyUnicode_CheckExact(obj) || PyBuffer_Check(obj)) { return 0; diff --git a/Modules/_sqlite/util.c b/Modules/_sqlite/util.c --- a/Modules/_sqlite/util.c +++ b/Modules/_sqlite/util.c @@ -104,3 +104,69 @@ return errorcode; } +#ifdef WORDS_BIGENDIAN +# define IS_LITTLE_ENDIAN 0 +#else +# define IS_LITTLE_ENDIAN 1 +#endif + +PyObject * +_pysqlite_long_from_int64(sqlite3_int64 value) +{ +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG < 8 + if (value > PY_LLONG_MAX || value < PY_LLONG_MIN) { + return _PyLong_FromByteArray(&value, sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +# if SIZEOF_LONG < SIZEOF_LONG_LONG + if (value > LONG_MAX || value < LONG_MIN) + return PyLong_FromLongLong(value); +# endif +#else +# if SIZEOF_LONG < 8 + if (value > LONG_MAX || value < LONG_MIN) { + return _PyLong_FromByteArray(&value, sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +#endif + return PyInt_FromLong(value); +} + +sqlite3_int64 +_pysqlite_long_as_int64(PyObject * py_val) +{ + int overflow; +#ifdef HAVE_LONG_LONG + PY_LONG_LONG value = PyLong_AsLongLongAndOverflow(py_val, &overflow); +#else + long value = PyLong_AsLongAndOverflow(py_val, &overflow); +#endif + if (value == -1 && PyErr_Occurred()) + return -1; + if (!overflow) { +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG > 8 + if (-0x8000000000000000LL <= value && value <= 0x7FFFFFFFFFFFFFFFLL) +# endif +#else +# if SIZEOF_LONG > 8 + if (-0x8000000000000000L <= value && value <= 0x7FFFFFFFFFFFFFFFL) +# endif +#endif + return value; + } + else if (sizeof(value) < sizeof(sqlite3_int64)) { + sqlite3_int64 int64val; + if (_PyLong_AsByteArray((PyLongObject *)py_val, + (unsigned char *)&int64val, sizeof(int64val), + 
IS_LITTLE_ENDIAN, 1 /* signed */) >= 0) { + return int64val; + } + } + PyErr_SetString(PyExc_OverflowError, + "Python int too large to convert to SQLite INTEGER"); + return -1; +} diff --git a/Modules/_sqlite/util.h b/Modules/_sqlite/util.h --- a/Modules/_sqlite/util.h +++ b/Modules/_sqlite/util.h @@ -35,4 +35,8 @@ * Returns the error code (0 means no error occurred). */ int _pysqlite_seterror(sqlite3* db, sqlite3_stmt* st); + +PyObject * _pysqlite_long_from_int64(sqlite3_int64 value); +sqlite3_int64 _pysqlite_long_as_int64(PyObject * value); + #endif -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 16:08:30 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 16:08:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MDcz?= =?utf-8?q?=3A_Fix_some_integer_overflows_in_sqlite3_module=2E?= Message-ID: <3Z22wG3940zSbb@mail.python.org> http://hg.python.org/cpython/rev/55a89352e220 changeset: 82055:55a89352e220 branch: 3.2 parent: 82051:0f1c2e2b6bc2 user: Serhiy Storchaka date: Thu Feb 07 17:01:47 2013 +0200 summary: Issue #17073: Fix some integer overflows in sqlite3 module. 
files: Lib/sqlite3/test/hooks.py | 19 ++++ Lib/sqlite3/test/userfunctions.py | 60 ++++++++++++--- Misc/NEWS | 2 + Modules/_sqlite/connection.c | 73 +++++++++++------- Modules/_sqlite/cursor.c | 20 +---- Modules/_sqlite/statement.c | 13 ++- Modules/_sqlite/util.c | 66 +++++++++++++++++ Modules/_sqlite/util.h | 4 + 8 files changed, 196 insertions(+), 61 deletions(-) diff --git a/Lib/sqlite3/test/hooks.py b/Lib/sqlite3/test/hooks.py --- a/Lib/sqlite3/test/hooks.py +++ b/Lib/sqlite3/test/hooks.py @@ -76,6 +76,25 @@ except sqlite.OperationalError as e: self.assertEqual(e.args[0].lower(), "no such collation sequence: mycoll") + def CheckCollationReturnsLargeInteger(self): + def mycoll(x, y): + # reverse order + return -((x > y) - (x < y)) * 2**32 + con = sqlite.connect(":memory:") + con.create_collation("mycoll", mycoll) + sql = """ + select x from ( + select 'a' as x + union + select 'b' as x + union + select 'c' as x + ) order by x collate mycoll + """ + result = con.execute(sql).fetchall() + self.assertEqual(result, [('c',), ('b',), ('a',)], + msg="the expected order was not returned") + def CheckCollationRegisterTwice(self): """ Register two different collation functions under the same name. 
diff --git a/Lib/sqlite3/test/userfunctions.py b/Lib/sqlite3/test/userfunctions.py --- a/Lib/sqlite3/test/userfunctions.py +++ b/Lib/sqlite3/test/userfunctions.py @@ -375,14 +375,15 @@ val = cur.fetchone()[0] self.assertEqual(val, 60) -def authorizer_cb(action, arg1, arg2, dbname, source): - if action != sqlite.SQLITE_SELECT: - return sqlite.SQLITE_DENY - if arg2 == 'c2' or arg1 == 't2': - return sqlite.SQLITE_DENY - return sqlite.SQLITE_OK +class AuthorizerTests(unittest.TestCase): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return sqlite.SQLITE_DENY + if arg2 == 'c2' or arg1 == 't2': + return sqlite.SQLITE_DENY + return sqlite.SQLITE_OK -class AuthorizerTests(unittest.TestCase): def setUp(self): self.con = sqlite.connect(":memory:") self.con.executescript(""" @@ -395,12 +396,12 @@ # For our security test: self.con.execute("select c2 from t2") - self.con.set_authorizer(authorizer_cb) + self.con.set_authorizer(self.authorizer_cb) def tearDown(self): pass - def CheckTableAccess(self): + def test_table_access(self): try: self.con.execute("select * from t2") except sqlite.DatabaseError as e: @@ -409,7 +410,7 @@ return self.fail("should have raised an exception due to missing privileges") - def CheckColumnAccess(self): + def test_column_access(self): try: self.con.execute("select c2 from t1") except sqlite.DatabaseError as e: @@ -418,11 +419,46 @@ return self.fail("should have raised an exception due to missing privileges") +class AuthorizerRaiseExceptionTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + raise ValueError + if arg2 == 'c2' or arg1 == 't2': + raise ValueError + return sqlite.SQLITE_OK + +class AuthorizerIllegalTypeTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return 0.0 + if arg2 == 'c2' or arg1 == 't2': + return 
0.0 + return sqlite.SQLITE_OK + +class AuthorizerLargeIntegerTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return 2**32 + if arg2 == 'c2' or arg1 == 't2': + return 2**32 + return sqlite.SQLITE_OK + + def suite(): function_suite = unittest.makeSuite(FunctionTests, "Check") aggregate_suite = unittest.makeSuite(AggregateTests, "Check") - authorizer_suite = unittest.makeSuite(AuthorizerTests, "Check") - return unittest.TestSuite((function_suite, aggregate_suite, authorizer_suite)) + authorizer_suite = unittest.makeSuite(AuthorizerTests) + return unittest.TestSuite(( + function_suite, + aggregate_suite, + authorizer_suite, + unittest.makeSuite(AuthorizerRaiseExceptionTests), + unittest.makeSuite(AuthorizerIllegalTypeTests), + unittest.makeSuite(AuthorizerLargeIntegerTests), + )) def test(): runner = unittest.TextTestRunner() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -215,6 +215,8 @@ Library ------- +- Issue #17073: Fix some integer overflows in sqlite3 module. + - Issue #17114: IDLE now uses non-strict config parser.
- Issue #16723: httplib.HTTPResponse no longer marked closed when the connection diff --git a/Modules/_sqlite/connection.c b/Modules/_sqlite/connection.c --- a/Modules/_sqlite/connection.c +++ b/Modules/_sqlite/connection.c @@ -482,32 +482,35 @@ } } -void _pysqlite_set_result(sqlite3_context* context, PyObject* py_val) +static int +_pysqlite_set_result(sqlite3_context* context, PyObject* py_val) { - const char* buffer; - Py_ssize_t buflen; - - if ((!py_val) || PyErr_Occurred()) { - sqlite3_result_null(context); - } else if (py_val == Py_None) { + if (py_val == Py_None) { sqlite3_result_null(context); } else if (PyLong_Check(py_val)) { - sqlite3_result_int64(context, PyLong_AsLongLong(py_val)); + sqlite_int64 value = _pysqlite_long_as_int64(py_val); + if (value == -1 && PyErr_Occurred()) + return -1; + sqlite3_result_int64(context, value); } else if (PyFloat_Check(py_val)) { sqlite3_result_double(context, PyFloat_AsDouble(py_val)); } else if (PyUnicode_Check(py_val)) { - char *str = _PyUnicode_AsString(py_val); - if (str != NULL) - sqlite3_result_text(context, str, -1, SQLITE_TRANSIENT); + const char *str = _PyUnicode_AsString(py_val); + if (str == NULL) + return -1; + sqlite3_result_text(context, str, -1, SQLITE_TRANSIENT); } else if (PyObject_CheckBuffer(py_val)) { + const char* buffer; + Py_ssize_t buflen; if (PyObject_AsCharBuffer(py_val, &buffer, &buflen) != 0) { PyErr_SetString(PyExc_ValueError, "could not convert BLOB to buffer"); - } else { - sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); + return -1; } + sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); } else { - /* TODO: raise error */ + return -1; } + return 0; } PyObject* _pysqlite_build_py_params(sqlite3_context *context, int argc, sqlite3_value** argv) @@ -528,7 +531,7 @@ cur_value = argv[i]; switch (sqlite3_value_type(argv[i])) { case SQLITE_INTEGER: - cur_py_value = PyLong_FromLongLong(sqlite3_value_int64(cur_value)); + cur_py_value = 
_pysqlite_long_from_int64(sqlite3_value_int64(cur_value)); break; case SQLITE_FLOAT: cur_py_value = PyFloat_FromDouble(sqlite3_value_double(cur_value)); @@ -571,6 +574,7 @@ PyObject* args; PyObject* py_func; PyObject* py_retval = NULL; + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -586,10 +590,12 @@ Py_DECREF(args); } + ok = 0; if (py_retval) { - _pysqlite_set_result(context, py_retval); + ok = _pysqlite_set_result(context, py_retval) == 0; Py_DECREF(py_retval); - } else { + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { @@ -669,9 +675,10 @@ void _pysqlite_final_callback(sqlite3_context* context) { - PyObject* function_result = NULL; + PyObject* function_result; PyObject** aggregate_instance; PyObject* aggregate_class; + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -690,21 +697,23 @@ } function_result = PyObject_CallMethod(*aggregate_instance, "finalize", ""); - if (!function_result) { + Py_DECREF(*aggregate_instance); + + ok = 0; + if (function_result) { + ok = _pysqlite_set_result(context, function_result) == 0; + Py_DECREF(function_result); + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { PyErr_Clear(); } _sqlite3_result_error(context, "user-defined aggregate's 'finalize' method raised error", -1); - } else { - _pysqlite_set_result(context, function_result); } error: - Py_XDECREF(*aggregate_instance); - Py_XDECREF(function_result); - #ifdef WITH_THREAD PyGILState_Release(threadstate); #endif @@ -861,7 +870,9 @@ rc = SQLITE_DENY; } else { if (PyLong_Check(ret)) { - rc = (int)PyLong_AsLong(ret); + rc = _PyLong_AsInt(ret); + if (rc == -1 && PyErr_Occurred()) + rc = SQLITE_DENY; } else { rc = SQLITE_DENY; } @@ -1266,6 +1277,7 @@ PyGILState_STATE gilstate; #endif PyObject* retval = NULL; + long longval; int result = 0; #ifdef WITH_THREAD gilstate = PyGILState_Ensure(); @@ -1289,10 +1301,17 @@ goto finally; } - result = PyLong_AsLong(retval); - if (PyErr_Occurred()) { + longval = 
PyLong_AsLongAndOverflow(retval, &result); + if (longval == -1 && PyErr_Occurred()) { + PyErr_Clear(); result = 0; } + else if (!result) { + if (longval > 0) + result = 1; + else if (longval < 0) + result = -1; + } finally: Py_XDECREF(string1); diff --git a/Modules/_sqlite/cursor.c b/Modules/_sqlite/cursor.c --- a/Modules/_sqlite/cursor.c +++ b/Modules/_sqlite/cursor.c @@ -26,14 +26,6 @@ #include "util.h" #include "sqlitecompat.h" -/* used to decide wether to call PyLong_FromLong or PyLong_FromLongLong */ -#ifndef INT32_MIN -#define INT32_MIN (-2147483647 - 1) -#endif -#ifndef INT32_MAX -#define INT32_MAX 2147483647 -#endif - PyObject* pysqlite_cursor_iternext(pysqlite_Cursor* self); static char* errmsg_fetch_across_rollback = "Cursor needed to be reset because of commit/rollback and can no longer be fetched from."; @@ -285,7 +277,6 @@ PyObject* row; PyObject* item = NULL; int coltype; - PY_LONG_LONG intval; PyObject* converter; PyObject* converted; Py_ssize_t nbytes; @@ -345,12 +336,7 @@ Py_INCREF(Py_None); converted = Py_None; } else if (coltype == SQLITE_INTEGER) { - intval = sqlite3_column_int64(self->statement->st, i); - if (intval < INT32_MIN || intval > INT32_MAX) { - converted = PyLong_FromLongLong(intval); - } else { - converted = PyLong_FromLong((long)intval); - } + converted = _pysqlite_long_from_int64(sqlite3_column_int64(self->statement->st, i)); } else if (coltype == SQLITE_FLOAT) { converted = PyFloat_FromDouble(sqlite3_column_double(self->statement->st, i)); } else if (coltype == SQLITE_TEXT) { @@ -456,7 +442,6 @@ PyObject* func_args; PyObject* result; int numcols; - PY_LONG_LONG lastrowid; int statement_type; PyObject* descriptor; PyObject* second_argument = NULL; @@ -731,10 +716,11 @@ Py_DECREF(self->lastrowid); if (!multiple && statement_type == STATEMENT_INSERT) { + sqlite3_int64 lastrowid; Py_BEGIN_ALLOW_THREADS lastrowid = sqlite3_last_insert_rowid(self->connection->db); Py_END_ALLOW_THREADS - self->lastrowid = 
PyLong_FromLong((long)lastrowid); + self->lastrowid = _pysqlite_long_from_int64(lastrowid); } else { Py_INCREF(Py_None); self->lastrowid = Py_None; diff --git a/Modules/_sqlite/statement.c b/Modules/_sqlite/statement.c --- a/Modules/_sqlite/statement.c +++ b/Modules/_sqlite/statement.c @@ -26,6 +26,7 @@ #include "connection.h" #include "microprotocols.h" #include "prepare_protocol.h" +#include "util.h" #include "sqlitecompat.h" /* prototypes */ @@ -90,7 +91,6 @@ int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter, int allow_8bit_chars) { int rc = SQLITE_OK; - PY_LONG_LONG longlongval; const char* buffer; char* string; Py_ssize_t buflen; @@ -120,11 +120,14 @@ } switch (paramtype) { - case TYPE_LONG: - /* in the overflow error case, longval/longlongval is -1, and an exception is set */ - longlongval = PyLong_AsLongLong(parameter); - rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longlongval); + case TYPE_LONG: { + sqlite_int64 value = _pysqlite_long_as_int64(parameter); + if (value == -1 && PyErr_Occurred()) + rc = -1; + else + rc = sqlite3_bind_int64(self->st, pos, value); break; + } case TYPE_FLOAT: rc = sqlite3_bind_double(self->st, pos, PyFloat_AsDouble(parameter)); break; diff --git a/Modules/_sqlite/util.c b/Modules/_sqlite/util.c --- a/Modules/_sqlite/util.c +++ b/Modules/_sqlite/util.c @@ -104,3 +104,69 @@ return errorcode; } +#ifdef WORDS_BIGENDIAN +# define IS_LITTLE_ENDIAN 0 +#else +# define IS_LITTLE_ENDIAN 1 +#endif + +PyObject * +_pysqlite_long_from_int64(sqlite3_int64 value) +{ +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG < 8 + if (value > PY_LLONG_MAX || value < PY_LLONG_MIN) { + return _PyLong_FromByteArray(&value, sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +# if SIZEOF_LONG < SIZEOF_LONG_LONG + if (value > LONG_MAX || value < LONG_MIN) + return PyLong_FromLongLong(value); +# endif +#else +# if SIZEOF_LONG < 8 + if (value > LONG_MAX || value < LONG_MIN) { + return 
_PyLong_FromByteArray(&value, sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +#endif + return PyLong_FromLong(value); +} + +sqlite3_int64 +_pysqlite_long_as_int64(PyObject * py_val) +{ + int overflow; +#ifdef HAVE_LONG_LONG + PY_LONG_LONG value = PyLong_AsLongLongAndOverflow(py_val, &overflow); +#else + long value = PyLong_AsLongAndOverflow(py_val, &overflow); +#endif + if (value == -1 && PyErr_Occurred()) + return -1; + if (!overflow) { +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG > 8 + if (-0x8000000000000000LL <= value && value <= 0x7FFFFFFFFFFFFFFFLL) +# endif +#else +# if SIZEOF_LONG > 8 + if (-0x8000000000000000L <= value && value <= 0x7FFFFFFFFFFFFFFFL) +# endif +#endif + return value; + } + else if (sizeof(value) < sizeof(sqlite3_int64)) { + sqlite3_int64 int64val; + if (_PyLong_AsByteArray((PyLongObject *)py_val, + (unsigned char *)&int64val, sizeof(int64val), + IS_LITTLE_ENDIAN, 1 /* signed */) >= 0) { + return int64val; + } + } + PyErr_SetString(PyExc_OverflowError, + "Python int too large to convert to SQLite INTEGER"); + return -1; +} diff --git a/Modules/_sqlite/util.h b/Modules/_sqlite/util.h --- a/Modules/_sqlite/util.h +++ b/Modules/_sqlite/util.h @@ -35,4 +35,8 @@ * Returns the error code (0 means no error occurred). 
*/ int _pysqlite_seterror(sqlite3* db, sqlite3_stmt* st); + +PyObject * _pysqlite_long_from_int64(sqlite3_int64 value); +sqlite3_int64 _pysqlite_long_as_int64(PyObject * value); + #endif -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 16:08:32 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 16:08:32 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317073=3A_Fix_some_integer_overflows_in_sqlite3_module?= =?utf-8?q?=2E?= Message-ID: <3Z22wJ1TwZzSc3@mail.python.org> http://hg.python.org/cpython/rev/c5fb8bc56def changeset: 82056:c5fb8bc56def branch: 3.3 parent: 82052:fec2976c8503 parent: 82055:55a89352e220 user: Serhiy Storchaka date: Thu Feb 07 17:03:46 2013 +0200 summary: Issue #17073: Fix some integer overflows in sqlite3 module. files: Lib/sqlite3/test/hooks.py | 19 ++++ Lib/sqlite3/test/userfunctions.py | 60 ++++++++++++--- Misc/NEWS | 2 + Modules/_sqlite/connection.c | 73 +++++++++++------- Modules/_sqlite/cursor.c | 20 +---- Modules/_sqlite/statement.c | 13 ++- Modules/_sqlite/util.c | 66 +++++++++++++++++ Modules/_sqlite/util.h | 4 + 8 files changed, 196 insertions(+), 61 deletions(-) diff --git a/Lib/sqlite3/test/hooks.py b/Lib/sqlite3/test/hooks.py --- a/Lib/sqlite3/test/hooks.py +++ b/Lib/sqlite3/test/hooks.py @@ -76,6 +76,25 @@ except sqlite.OperationalError as e: self.assertEqual(e.args[0].lower(), "no such collation sequence: mycoll") + def CheckCollationReturnsLargeInteger(self): + def mycoll(x, y): + # reverse order + return -((x > y) - (x < y)) * 2**32 + con = sqlite.connect(":memory:") + con.create_collation("mycoll", mycoll) + sql = """ + select x from ( + select 'a' as x + union + select 'b' as x + union + select 'c' as x + ) order by x collate mycoll + """ + result = con.execute(sql).fetchall() + self.assertEqual(result, [('c',), ('b',), ('a',)], + msg="the expected order was not returned") + def 
CheckCollationRegisterTwice(self): """ Register two different collation functions under the same name. diff --git a/Lib/sqlite3/test/userfunctions.py b/Lib/sqlite3/test/userfunctions.py --- a/Lib/sqlite3/test/userfunctions.py +++ b/Lib/sqlite3/test/userfunctions.py @@ -375,14 +375,15 @@ val = cur.fetchone()[0] self.assertEqual(val, 60) -def authorizer_cb(action, arg1, arg2, dbname, source): - if action != sqlite.SQLITE_SELECT: - return sqlite.SQLITE_DENY - if arg2 == 'c2' or arg1 == 't2': - return sqlite.SQLITE_DENY - return sqlite.SQLITE_OK +class AuthorizerTests(unittest.TestCase): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return sqlite.SQLITE_DENY + if arg2 == 'c2' or arg1 == 't2': + return sqlite.SQLITE_DENY + return sqlite.SQLITE_OK -class AuthorizerTests(unittest.TestCase): def setUp(self): self.con = sqlite.connect(":memory:") self.con.executescript(""" @@ -395,12 +396,12 @@ # For our security test: self.con.execute("select c2 from t2") - self.con.set_authorizer(authorizer_cb) + self.con.set_authorizer(self.authorizer_cb) def tearDown(self): pass - def CheckTableAccess(self): + def test_table_access(self): try: self.con.execute("select * from t2") except sqlite.DatabaseError as e: @@ -409,7 +410,7 @@ return self.fail("should have raised an exception due to missing privileges") - def CheckColumnAccess(self): + def test_column_access(self): try: self.con.execute("select c2 from t1") except sqlite.DatabaseError as e: @@ -418,11 +419,46 @@ return self.fail("should have raised an exception due to missing privileges") +class AuthorizerRaiseExceptionTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + raise ValueError + if arg2 == 'c2' or arg1 == 't2': + raise ValueError + return sqlite.SQLITE_OK + +class AuthorizerIllegalTypeTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, 
source): + if action != sqlite.SQLITE_SELECT: + return 0.0 + if arg2 == 'c2' or arg1 == 't2': + return 0.0 + return sqlite.SQLITE_OK + +class AuthorizerLargeIntegerTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return 2**32 + if arg2 == 'c2' or arg1 == 't2': + return 2**32 + return sqlite.SQLITE_OK + + def suite(): function_suite = unittest.makeSuite(FunctionTests, "Check") aggregate_suite = unittest.makeSuite(AggregateTests, "Check") - authorizer_suite = unittest.makeSuite(AuthorizerTests, "Check") - return unittest.TestSuite((function_suite, aggregate_suite, authorizer_suite)) + authorizer_suite = unittest.makeSuite(AuthorizerTests) + return unittest.TestSuite(( + function_suite, + aggregate_suite, + authorizer_suite, + unittest.makeSuite(AuthorizerRaiseExceptionTests), + unittest.makeSuite(AuthorizerIllegalTypeTests), + unittest.makeSuite(AuthorizerLargeIntegerTests), + )) def test(): runner = unittest.TextTestRunner() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -166,6 +166,8 @@ Library ------- +- Issue #17073: Fix some integer overflows in sqlite3 module. + - Issue #17114: IDLE now uses non-strict config parser.
- Issue #16723: httplib.HTTPResponse no longer marked closed when the connection diff --git a/Modules/_sqlite/connection.c b/Modules/_sqlite/connection.c --- a/Modules/_sqlite/connection.c +++ b/Modules/_sqlite/connection.c @@ -482,32 +482,35 @@ } } -void _pysqlite_set_result(sqlite3_context* context, PyObject* py_val) +static int +_pysqlite_set_result(sqlite3_context* context, PyObject* py_val) { - const char* buffer; - Py_ssize_t buflen; - - if ((!py_val) || PyErr_Occurred()) { - sqlite3_result_null(context); - } else if (py_val == Py_None) { + if (py_val == Py_None) { sqlite3_result_null(context); } else if (PyLong_Check(py_val)) { - sqlite3_result_int64(context, PyLong_AsLongLong(py_val)); + sqlite_int64 value = _pysqlite_long_as_int64(py_val); + if (value == -1 && PyErr_Occurred()) + return -1; + sqlite3_result_int64(context, value); } else if (PyFloat_Check(py_val)) { sqlite3_result_double(context, PyFloat_AsDouble(py_val)); } else if (PyUnicode_Check(py_val)) { - char *str = _PyUnicode_AsString(py_val); - if (str != NULL) - sqlite3_result_text(context, str, -1, SQLITE_TRANSIENT); + const char *str = _PyUnicode_AsString(py_val); + if (str == NULL) + return -1; + sqlite3_result_text(context, str, -1, SQLITE_TRANSIENT); } else if (PyObject_CheckBuffer(py_val)) { + const char* buffer; + Py_ssize_t buflen; if (PyObject_AsCharBuffer(py_val, &buffer, &buflen) != 0) { PyErr_SetString(PyExc_ValueError, "could not convert BLOB to buffer"); - } else { - sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); + return -1; } + sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); } else { - /* TODO: raise error */ + return -1; } + return 0; } PyObject* _pysqlite_build_py_params(sqlite3_context *context, int argc, sqlite3_value** argv) @@ -528,7 +531,7 @@ cur_value = argv[i]; switch (sqlite3_value_type(argv[i])) { case SQLITE_INTEGER: - cur_py_value = PyLong_FromLongLong(sqlite3_value_int64(cur_value)); + cur_py_value = 
_pysqlite_long_from_int64(sqlite3_value_int64(cur_value)); break; case SQLITE_FLOAT: cur_py_value = PyFloat_FromDouble(sqlite3_value_double(cur_value)); @@ -571,6 +574,7 @@ PyObject* args; PyObject* py_func; PyObject* py_retval = NULL; + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -586,10 +590,12 @@ Py_DECREF(args); } + ok = 0; if (py_retval) { - _pysqlite_set_result(context, py_retval); + ok = _pysqlite_set_result(context, py_retval) == 0; Py_DECREF(py_retval); - } else { + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { @@ -669,9 +675,10 @@ void _pysqlite_final_callback(sqlite3_context* context) { - PyObject* function_result = NULL; + PyObject* function_result; PyObject** aggregate_instance; _Py_IDENTIFIER(finalize); + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -688,21 +695,23 @@ } function_result = _PyObject_CallMethodId(*aggregate_instance, &PyId_finalize, ""); - if (!function_result) { + Py_DECREF(*aggregate_instance); + + ok = 0; + if (function_result) { + ok = _pysqlite_set_result(context, function_result) == 0; + Py_DECREF(function_result); + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { PyErr_Clear(); } _sqlite3_result_error(context, "user-defined aggregate's 'finalize' method raised error", -1); - } else { - _pysqlite_set_result(context, function_result); } error: - Py_XDECREF(*aggregate_instance); - Py_XDECREF(function_result); - #ifdef WITH_THREAD PyGILState_Release(threadstate); #endif @@ -859,7 +868,9 @@ rc = SQLITE_DENY; } else { if (PyLong_Check(ret)) { - rc = (int)PyLong_AsLong(ret); + rc = _PyLong_AsInt(ret); + if (rc == -1 && PyErr_Occurred()) + rc = SQLITE_DENY; } else { rc = SQLITE_DENY; } @@ -1327,6 +1338,7 @@ PyGILState_STATE gilstate; #endif PyObject* retval = NULL; + long longval; int result = 0; #ifdef WITH_THREAD gilstate = PyGILState_Ensure(); @@ -1350,10 +1362,17 @@ goto finally; } - result = PyLong_AsLong(retval); - if (PyErr_Occurred()) { + longval 
= PyLong_AsLongAndOverflow(retval, &result); + if (longval == -1 && PyErr_Occurred()) { + PyErr_Clear(); result = 0; } + else if (!result) { + if (longval > 0) + result = 1; + else if (longval < 0) + result = -1; + } finally: Py_XDECREF(string1); diff --git a/Modules/_sqlite/cursor.c b/Modules/_sqlite/cursor.c --- a/Modules/_sqlite/cursor.c +++ b/Modules/_sqlite/cursor.c @@ -26,14 +26,6 @@ #include "util.h" #include "sqlitecompat.h" -/* used to decide wether to call PyLong_FromLong or PyLong_FromLongLong */ -#ifndef INT32_MIN -#define INT32_MIN (-2147483647 - 1) -#endif -#ifndef INT32_MAX -#define INT32_MAX 2147483647 -#endif - PyObject* pysqlite_cursor_iternext(pysqlite_Cursor* self); static char* errmsg_fetch_across_rollback = "Cursor needed to be reset because of commit/rollback and can no longer be fetched from."; @@ -279,7 +271,6 @@ PyObject* row; PyObject* item = NULL; int coltype; - PY_LONG_LONG intval; PyObject* converter; PyObject* converted; Py_ssize_t nbytes; @@ -339,12 +330,7 @@ Py_INCREF(Py_None); converted = Py_None; } else if (coltype == SQLITE_INTEGER) { - intval = sqlite3_column_int64(self->statement->st, i); - if (intval < INT32_MIN || intval > INT32_MAX) { - converted = PyLong_FromLongLong(intval); - } else { - converted = PyLong_FromLong((long)intval); - } + converted = _pysqlite_long_from_int64(sqlite3_column_int64(self->statement->st, i)); } else if (coltype == SQLITE_FLOAT) { converted = PyFloat_FromDouble(sqlite3_column_double(self->statement->st, i)); } else if (coltype == SQLITE_TEXT) { @@ -446,7 +432,6 @@ PyObject* func_args; PyObject* result; int numcols; - PY_LONG_LONG lastrowid; int statement_type; PyObject* descriptor; PyObject* second_argument = NULL; @@ -716,10 +701,11 @@ Py_DECREF(self->lastrowid); if (!multiple && statement_type == STATEMENT_INSERT) { + sqlite3_int64 lastrowid; Py_BEGIN_ALLOW_THREADS lastrowid = sqlite3_last_insert_rowid(self->connection->db); Py_END_ALLOW_THREADS - self->lastrowid = 
PyLong_FromLong((long)lastrowid); + self->lastrowid = _pysqlite_long_from_int64(lastrowid); } else { Py_INCREF(Py_None); self->lastrowid = Py_None; diff --git a/Modules/_sqlite/statement.c b/Modules/_sqlite/statement.c --- a/Modules/_sqlite/statement.c +++ b/Modules/_sqlite/statement.c @@ -26,6 +26,7 @@ #include "connection.h" #include "microprotocols.h" #include "prepare_protocol.h" +#include "util.h" #include "sqlitecompat.h" /* prototypes */ @@ -90,7 +91,6 @@ int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter) { int rc = SQLITE_OK; - PY_LONG_LONG longlongval; const char* buffer; char* string; Py_ssize_t buflen; @@ -120,11 +120,14 @@ } switch (paramtype) { - case TYPE_LONG: - /* in the overflow error case, longval/longlongval is -1, and an exception is set */ - longlongval = PyLong_AsLongLong(parameter); - rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longlongval); + case TYPE_LONG: { + sqlite_int64 value = _pysqlite_long_as_int64(parameter); + if (value == -1 && PyErr_Occurred()) + rc = -1; + else + rc = sqlite3_bind_int64(self->st, pos, value); break; + } case TYPE_FLOAT: rc = sqlite3_bind_double(self->st, pos, PyFloat_AsDouble(parameter)); break; diff --git a/Modules/_sqlite/util.c b/Modules/_sqlite/util.c --- a/Modules/_sqlite/util.c +++ b/Modules/_sqlite/util.c @@ -104,3 +104,69 @@ return errorcode; } +#ifdef WORDS_BIGENDIAN +# define IS_LITTLE_ENDIAN 0 +#else +# define IS_LITTLE_ENDIAN 1 +#endif + +PyObject * +_pysqlite_long_from_int64(sqlite3_int64 value) +{ +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG < 8 + if (value > PY_LLONG_MAX || value < PY_LLONG_MIN) { + return _PyLong_FromByteArray(&value, sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +# if SIZEOF_LONG < SIZEOF_LONG_LONG + if (value > LONG_MAX || value < LONG_MIN) + return PyLong_FromLongLong(value); +# endif +#else +# if SIZEOF_LONG < 8 + if (value > LONG_MAX || value < LONG_MIN) { + return _PyLong_FromByteArray(&value, 
sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +#endif + return PyLong_FromLong(value); +} + +sqlite3_int64 +_pysqlite_long_as_int64(PyObject * py_val) +{ + int overflow; +#ifdef HAVE_LONG_LONG + PY_LONG_LONG value = PyLong_AsLongLongAndOverflow(py_val, &overflow); +#else + long value = PyLong_AsLongAndOverflow(py_val, &overflow); +#endif + if (value == -1 && PyErr_Occurred()) + return -1; + if (!overflow) { +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG > 8 + if (-0x8000000000000000LL <= value && value <= 0x7FFFFFFFFFFFFFFFLL) +# endif +#else +# if SIZEOF_LONG > 8 + if (-0x8000000000000000L <= value && value <= 0x7FFFFFFFFFFFFFFFL) +# endif +#endif + return value; + } + else if (sizeof(value) < sizeof(sqlite3_int64)) { + sqlite3_int64 int64val; + if (_PyLong_AsByteArray((PyLongObject *)py_val, + (unsigned char *)&int64val, sizeof(int64val), + IS_LITTLE_ENDIAN, 1 /* signed */) >= 0) { + return int64val; + } + } + PyErr_SetString(PyExc_OverflowError, + "Python int too large to convert to SQLite INTEGER"); + return -1; +} diff --git a/Modules/_sqlite/util.h b/Modules/_sqlite/util.h --- a/Modules/_sqlite/util.h +++ b/Modules/_sqlite/util.h @@ -35,4 +35,8 @@ * Returns the error code (0 means no error occurred). 
*/ int _pysqlite_seterror(sqlite3* db, sqlite3_stmt* st); + +PyObject * _pysqlite_long_from_int64(sqlite3_int64 value); +sqlite3_int64 _pysqlite_long_as_int64(PyObject * value); + #endif -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 16:08:33 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 7 Feb 2013 16:08:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317073=3A_Fix_some_integer_overflows_in_sqlite3_?= =?utf-8?q?module=2E?= Message-ID: <3Z22wK6ppQzSc3@mail.python.org> http://hg.python.org/cpython/rev/b8a6bc70fc08 changeset: 82057:b8a6bc70fc08 parent: 82053:eb0370d4686c parent: 82056:c5fb8bc56def user: Serhiy Storchaka date: Thu Feb 07 17:05:32 2013 +0200 summary: Issue #17073: Fix some integer overflows in sqlite3 module. files: Lib/sqlite3/test/hooks.py | 19 ++++ Lib/sqlite3/test/userfunctions.py | 60 ++++++++++++--- Misc/NEWS | 2 + Modules/_sqlite/connection.c | 73 +++++++++++------- Modules/_sqlite/cursor.c | 20 +---- Modules/_sqlite/statement.c | 13 ++- Modules/_sqlite/util.c | 66 +++++++++++++++++ Modules/_sqlite/util.h | 4 + 8 files changed, 196 insertions(+), 61 deletions(-) diff --git a/Lib/sqlite3/test/hooks.py b/Lib/sqlite3/test/hooks.py --- a/Lib/sqlite3/test/hooks.py +++ b/Lib/sqlite3/test/hooks.py @@ -76,6 +76,25 @@ except sqlite.OperationalError as e: self.assertEqual(e.args[0].lower(), "no such collation sequence: mycoll") + def CheckCollationReturnsLargeInteger(self): + def mycoll(x, y): + # reverse order + return -((x > y) - (x < y)) * 2**32 + con = sqlite.connect(":memory:") + con.create_collation("mycoll", mycoll) + sql = """ + select x from ( + select 'a' as x + union + select 'b' as x + union + select 'c' as x + ) order by x collate mycoll + """ + result = con.execute(sql).fetchall() + self.assertEqual(result, [('c',), ('b',), ('a',)], + msg="the expected order was not returned") + def 
CheckCollationRegisterTwice(self): """ Register two different collation functions under the same name. diff --git a/Lib/sqlite3/test/userfunctions.py b/Lib/sqlite3/test/userfunctions.py --- a/Lib/sqlite3/test/userfunctions.py +++ b/Lib/sqlite3/test/userfunctions.py @@ -375,14 +375,15 @@ val = cur.fetchone()[0] self.assertEqual(val, 60) -def authorizer_cb(action, arg1, arg2, dbname, source): - if action != sqlite.SQLITE_SELECT: - return sqlite.SQLITE_DENY - if arg2 == 'c2' or arg1 == 't2': - return sqlite.SQLITE_DENY - return sqlite.SQLITE_OK +class AuthorizerTests(unittest.TestCase): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return sqlite.SQLITE_DENY + if arg2 == 'c2' or arg1 == 't2': + return sqlite.SQLITE_DENY + return sqlite.SQLITE_OK -class AuthorizerTests(unittest.TestCase): def setUp(self): self.con = sqlite.connect(":memory:") self.con.executescript(""" @@ -395,12 +396,12 @@ # For our security test: self.con.execute("select c2 from t2") - self.con.set_authorizer(authorizer_cb) + self.con.set_authorizer(self.authorizer_cb) def tearDown(self): pass - def CheckTableAccess(self): + def test_table_access(self): try: self.con.execute("select * from t2") except sqlite.DatabaseError as e: @@ -409,7 +410,7 @@ return self.fail("should have raised an exception due to missing privileges") - def CheckColumnAccess(self): + def test_column_access(self): try: self.con.execute("select c2 from t1") except sqlite.DatabaseError as e: @@ -418,11 +419,46 @@ return self.fail("should have raised an exception due to missing privileges") +class AuthorizerRaiseExceptionTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + raise ValueError + if arg2 == 'c2' or arg1 == 't2': + raise ValueError + return sqlite.SQLITE_OK + +class AuthorizerIllegalTypeTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, 
source): + if action != sqlite.SQLITE_SELECT: + return 0.0 + if arg2 == 'c2' or arg1 == 't2': + return 0.0 + return sqlite.SQLITE_OK + +class AuthorizerLargeIntegerTests(AuthorizerTests): + @staticmethod + def authorizer_cb(action, arg1, arg2, dbname, source): + if action != sqlite.SQLITE_SELECT: + return 2**32 + if arg2 == 'c2' or arg1 == 't2': + return 2**32 + return sqlite.SQLITE_OK + + def suite(): function_suite = unittest.makeSuite(FunctionTests, "Check") aggregate_suite = unittest.makeSuite(AggregateTests, "Check") - authorizer_suite = unittest.makeSuite(AuthorizerTests, "Check") - return unittest.TestSuite((function_suite, aggregate_suite, authorizer_suite)) + authorizer_suite = unittest.makeSuite(AuthorizerTests) + return unittest.TestSuite(( + function_suite, + aggregate_suite, + authorizer_suite, + unittest.makeSuite(AuthorizerRaiseExceptionTests), + unittest.makeSuite(AuthorizerIllegalTypeTests), + unittest.makeSuite(AuthorizerLargeIntegerTests), + )) def test(): runner = unittest.TextTestRunner() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -238,6 +238,8 @@ Library ------- +- Issue #17073: Fix some integer overflows in sqlite3 module. + - Issue #17114: IDLE now uses non-strict config parser.
- Issue #16723: httplib.HTTPResponse no longer marked closed when the connection diff --git a/Modules/_sqlite/connection.c b/Modules/_sqlite/connection.c --- a/Modules/_sqlite/connection.c +++ b/Modules/_sqlite/connection.c @@ -482,32 +482,35 @@ } } -void _pysqlite_set_result(sqlite3_context* context, PyObject* py_val) +static int +_pysqlite_set_result(sqlite3_context* context, PyObject* py_val) { - const char* buffer; - Py_ssize_t buflen; - - if ((!py_val) || PyErr_Occurred()) { - sqlite3_result_null(context); - } else if (py_val == Py_None) { + if (py_val == Py_None) { sqlite3_result_null(context); } else if (PyLong_Check(py_val)) { - sqlite3_result_int64(context, PyLong_AsLongLong(py_val)); + sqlite_int64 value = _pysqlite_long_as_int64(py_val); + if (value == -1 && PyErr_Occurred()) + return -1; + sqlite3_result_int64(context, value); } else if (PyFloat_Check(py_val)) { sqlite3_result_double(context, PyFloat_AsDouble(py_val)); } else if (PyUnicode_Check(py_val)) { - char *str = _PyUnicode_AsString(py_val); - if (str != NULL) - sqlite3_result_text(context, str, -1, SQLITE_TRANSIENT); + const char *str = _PyUnicode_AsString(py_val); + if (str == NULL) + return -1; + sqlite3_result_text(context, str, -1, SQLITE_TRANSIENT); } else if (PyObject_CheckBuffer(py_val)) { + const char* buffer; + Py_ssize_t buflen; if (PyObject_AsCharBuffer(py_val, &buffer, &buflen) != 0) { PyErr_SetString(PyExc_ValueError, "could not convert BLOB to buffer"); - } else { - sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); + return -1; } + sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); } else { - /* TODO: raise error */ + return -1; } + return 0; } PyObject* _pysqlite_build_py_params(sqlite3_context *context, int argc, sqlite3_value** argv) @@ -528,7 +531,7 @@ cur_value = argv[i]; switch (sqlite3_value_type(argv[i])) { case SQLITE_INTEGER: - cur_py_value = PyLong_FromLongLong(sqlite3_value_int64(cur_value)); + cur_py_value = 
_pysqlite_long_from_int64(sqlite3_value_int64(cur_value)); break; case SQLITE_FLOAT: cur_py_value = PyFloat_FromDouble(sqlite3_value_double(cur_value)); @@ -571,6 +574,7 @@ PyObject* args; PyObject* py_func; PyObject* py_retval = NULL; + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -586,10 +590,12 @@ Py_DECREF(args); } + ok = 0; if (py_retval) { - _pysqlite_set_result(context, py_retval); + ok = _pysqlite_set_result(context, py_retval) == 0; Py_DECREF(py_retval); - } else { + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { @@ -669,9 +675,10 @@ void _pysqlite_final_callback(sqlite3_context* context) { - PyObject* function_result = NULL; + PyObject* function_result; PyObject** aggregate_instance; _Py_IDENTIFIER(finalize); + int ok; #ifdef WITH_THREAD PyGILState_STATE threadstate; @@ -688,21 +695,23 @@ } function_result = _PyObject_CallMethodId(*aggregate_instance, &PyId_finalize, ""); - if (!function_result) { + Py_DECREF(*aggregate_instance); + + ok = 0; + if (function_result) { + ok = _pysqlite_set_result(context, function_result) == 0; + Py_DECREF(function_result); + } + if (!ok) { if (_enable_callback_tracebacks) { PyErr_Print(); } else { PyErr_Clear(); } _sqlite3_result_error(context, "user-defined aggregate's 'finalize' method raised error", -1); - } else { - _pysqlite_set_result(context, function_result); } error: - Py_XDECREF(*aggregate_instance); - Py_XDECREF(function_result); - #ifdef WITH_THREAD PyGILState_Release(threadstate); #endif @@ -859,7 +868,9 @@ rc = SQLITE_DENY; } else { if (PyLong_Check(ret)) { - rc = (int)PyLong_AsLong(ret); + rc = _PyLong_AsInt(ret); + if (rc == -1 && PyErr_Occurred()) + rc = SQLITE_DENY; } else { rc = SQLITE_DENY; } @@ -1327,6 +1338,7 @@ PyGILState_STATE gilstate; #endif PyObject* retval = NULL; + long longval; int result = 0; #ifdef WITH_THREAD gilstate = PyGILState_Ensure(); @@ -1350,10 +1362,17 @@ goto finally; } - result = PyLong_AsLong(retval); - if (PyErr_Occurred()) { + longval 
= PyLong_AsLongAndOverflow(retval, &result); + if (longval == -1 && PyErr_Occurred()) { + PyErr_Clear(); result = 0; } + else if (!result) { + if (longval > 0) + result = 1; + else if (longval < 0) + result = -1; + } finally: Py_XDECREF(string1); diff --git a/Modules/_sqlite/cursor.c b/Modules/_sqlite/cursor.c --- a/Modules/_sqlite/cursor.c +++ b/Modules/_sqlite/cursor.c @@ -26,14 +26,6 @@ #include "util.h" #include "sqlitecompat.h" -/* used to decide wether to call PyLong_FromLong or PyLong_FromLongLong */ -#ifndef INT32_MIN -#define INT32_MIN (-2147483647 - 1) -#endif -#ifndef INT32_MAX -#define INT32_MAX 2147483647 -#endif - PyObject* pysqlite_cursor_iternext(pysqlite_Cursor* self); static char* errmsg_fetch_across_rollback = "Cursor needed to be reset because of commit/rollback and can no longer be fetched from."; @@ -279,7 +271,6 @@ PyObject* row; PyObject* item = NULL; int coltype; - PY_LONG_LONG intval; PyObject* converter; PyObject* converted; Py_ssize_t nbytes; @@ -339,12 +330,7 @@ Py_INCREF(Py_None); converted = Py_None; } else if (coltype == SQLITE_INTEGER) { - intval = sqlite3_column_int64(self->statement->st, i); - if (intval < INT32_MIN || intval > INT32_MAX) { - converted = PyLong_FromLongLong(intval); - } else { - converted = PyLong_FromLong((long)intval); - } + converted = _pysqlite_long_from_int64(sqlite3_column_int64(self->statement->st, i)); } else if (coltype == SQLITE_FLOAT) { converted = PyFloat_FromDouble(sqlite3_column_double(self->statement->st, i)); } else if (coltype == SQLITE_TEXT) { @@ -446,7 +432,6 @@ PyObject* func_args; PyObject* result; int numcols; - PY_LONG_LONG lastrowid; int statement_type; PyObject* descriptor; PyObject* second_argument = NULL; @@ -716,10 +701,11 @@ Py_DECREF(self->lastrowid); if (!multiple && statement_type == STATEMENT_INSERT) { + sqlite3_int64 lastrowid; Py_BEGIN_ALLOW_THREADS lastrowid = sqlite3_last_insert_rowid(self->connection->db); Py_END_ALLOW_THREADS - self->lastrowid = 
PyLong_FromLong((long)lastrowid); + self->lastrowid = _pysqlite_long_from_int64(lastrowid); } else { Py_INCREF(Py_None); self->lastrowid = Py_None; diff --git a/Modules/_sqlite/statement.c b/Modules/_sqlite/statement.c --- a/Modules/_sqlite/statement.c +++ b/Modules/_sqlite/statement.c @@ -26,6 +26,7 @@ #include "connection.h" #include "microprotocols.h" #include "prepare_protocol.h" +#include "util.h" #include "sqlitecompat.h" /* prototypes */ @@ -90,7 +91,6 @@ int pysqlite_statement_bind_parameter(pysqlite_Statement* self, int pos, PyObject* parameter) { int rc = SQLITE_OK; - PY_LONG_LONG longlongval; const char* buffer; char* string; Py_ssize_t buflen; @@ -120,11 +120,14 @@ } switch (paramtype) { - case TYPE_LONG: - /* in the overflow error case, longval/longlongval is -1, and an exception is set */ - longlongval = PyLong_AsLongLong(parameter); - rc = sqlite3_bind_int64(self->st, pos, (sqlite_int64)longlongval); + case TYPE_LONG: { + sqlite_int64 value = _pysqlite_long_as_int64(parameter); + if (value == -1 && PyErr_Occurred()) + rc = -1; + else + rc = sqlite3_bind_int64(self->st, pos, value); break; + } case TYPE_FLOAT: rc = sqlite3_bind_double(self->st, pos, PyFloat_AsDouble(parameter)); break; diff --git a/Modules/_sqlite/util.c b/Modules/_sqlite/util.c --- a/Modules/_sqlite/util.c +++ b/Modules/_sqlite/util.c @@ -104,3 +104,69 @@ return errorcode; } +#ifdef WORDS_BIGENDIAN +# define IS_LITTLE_ENDIAN 0 +#else +# define IS_LITTLE_ENDIAN 1 +#endif + +PyObject * +_pysqlite_long_from_int64(sqlite3_int64 value) +{ +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG < 8 + if (value > PY_LLONG_MAX || value < PY_LLONG_MIN) { + return _PyLong_FromByteArray(&value, sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +# if SIZEOF_LONG < SIZEOF_LONG_LONG + if (value > LONG_MAX || value < LONG_MIN) + return PyLong_FromLongLong(value); +# endif +#else +# if SIZEOF_LONG < 8 + if (value > LONG_MAX || value < LONG_MIN) { + return _PyLong_FromByteArray(&value, 
sizeof(value), + IS_LITTLE_ENDIAN, 1 /* signed */); + } +# endif +#endif + return PyLong_FromLong(value); +} + +sqlite3_int64 +_pysqlite_long_as_int64(PyObject * py_val) +{ + int overflow; +#ifdef HAVE_LONG_LONG + PY_LONG_LONG value = PyLong_AsLongLongAndOverflow(py_val, &overflow); +#else + long value = PyLong_AsLongAndOverflow(py_val, &overflow); +#endif + if (value == -1 && PyErr_Occurred()) + return -1; + if (!overflow) { +#ifdef HAVE_LONG_LONG +# if SIZEOF_LONG_LONG > 8 + if (-0x8000000000000000LL <= value && value <= 0x7FFFFFFFFFFFFFFFLL) +# endif +#else +# if SIZEOF_LONG > 8 + if (-0x8000000000000000L <= value && value <= 0x7FFFFFFFFFFFFFFFL) +# endif +#endif + return value; + } + else if (sizeof(value) < sizeof(sqlite3_int64)) { + sqlite3_int64 int64val; + if (_PyLong_AsByteArray((PyLongObject *)py_val, + (unsigned char *)&int64val, sizeof(int64val), + IS_LITTLE_ENDIAN, 1 /* signed */) >= 0) { + return int64val; + } + } + PyErr_SetString(PyExc_OverflowError, + "Python int too large to convert to SQLite INTEGER"); + return -1; +} diff --git a/Modules/_sqlite/util.h b/Modules/_sqlite/util.h --- a/Modules/_sqlite/util.h +++ b/Modules/_sqlite/util.h @@ -35,4 +35,8 @@ * Returns the error code (0 means no error occurred). */ int _pysqlite_seterror(sqlite3* db, sqlite3_stmt* st); + +PyObject * _pysqlite_long_from_int64(sqlite3_int64 value); +sqlite3_int64 _pysqlite_long_as_int64(PyObject * value); + #endif -- Repository URL: http://hg.python.org/cpython From tjreedy at udel.edu Thu Feb 7 19:35:32 2013 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 07 Feb 2013 13:35:32 -0500 Subject: [Python-checkins] cpython (3.2): Fix Issue17069: Document getcode method in urllib.request.rst In-Reply-To: <3Z1tWQ1FnPzMRY@mail.python.org> References: <3Z1tWQ1FnPzMRY@mail.python.org> Message-ID: <5113F3F4.7000200@udel.edu> 3 suggested changes: 1. 
On 2/7/2013 3:49 AM, senthil.kumaran wrote: > + For ftp, file, data urls and requests are explicity handled by legacy > + :class:`URLopener` and :class:`FancyURLopener` class, this function returns The first part up to "class," is not a proper English clause. Perhaps you meant ('that' added): "For ftp, file, data urls and requests that are explicity handled ..." or (to make clear that 'that explicitly' does not apply to ftp,file,data ('and' added): "For ftp, file, and data urls and requests that are explicity handled " or perhaps clearest ('are' deleted, 'that' not added) "For ftp, file, and data urls and requests explicity handled ..." 2. it seems that 'class' should be 'classes'. 3. + ... this function returns + an :class:`urllib.response.addinfourl` object Since 'url' is generally pronounced 'you-are-el' rather than like 'earl' (I checked Merriam-Webster online), that should be 'a' rather than 'an' url... . https://owl.english.purdue.edu/owl/resource/591/01/ From python-checkins at python.org Thu Feb 7 23:18:39 2013 From: python-checkins at python.org (victor.stinner) Date: Thu, 7 Feb 2013 23:18:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MTM3?= =?utf-8?q?=3A_When_an_Unicode_string_is_resized=2C_the_internal_wide_char?= =?utf-8?q?acter?= Message-ID: <3Z2DSb6WcKzSd8@mail.python.org> http://hg.python.org/cpython/rev/3b316ea5aa82 changeset: 82058:3b316ea5aa82 branch: 3.3 parent: 82056:c5fb8bc56def user: Victor Stinner date: Thu Feb 07 23:12:46 2013 +0100 summary: Issue #17137: When an Unicode string is resized, the internal wide character string (wstr) format is now cleared. 
files: Lib/test/test_unicode.py | 15 +++++++++++++++ Misc/NEWS | 3 +++ Objects/unicodeobject.c | 4 ++++ 3 files changed, 22 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py --- a/Lib/test/test_unicode.py +++ b/Lib/test/test_unicode.py @@ -2167,6 +2167,21 @@ self.assertEqual(args[0], text) self.assertEqual(len(args), 1) + def test_resize(self): + for length in range(1, 100, 7): + # generate a fresh string (refcount=1) + text = 'a' * length + 'b' + + # fill wstr internal field + abc = text.encode('unicode_internal') + self.assertEqual(abc.decode('unicode_internal'), text) + + # resize text: wstr field must be cleared and then recomputed + text += 'c' + abcdef = text.encode('unicode_internal') + self.assertNotEqual(abc, abcdef) + self.assertEqual(abcdef.decode('unicode_internal'), text) + class StringModuleTest(unittest.TestCase): def test_formatter_parser(self): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #17137: When an Unicode string is resized, the internal wide character + string (wstr) format is now cleared. + - Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. 
diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -702,6 +702,10 @@ if (!PyUnicode_IS_ASCII(unicode)) _PyUnicode_WSTR_LENGTH(unicode) = length; } + else if (_PyUnicode_HAS_WSTR_MEMORY(unicode)) { + PyObject_DEL(_PyUnicode_WSTR(unicode)); + _PyUnicode_WSTR(unicode) = NULL; + } PyUnicode_WRITE(PyUnicode_KIND(unicode), PyUnicode_DATA(unicode), length, 0); assert(_PyUnicode_CheckConsistency(unicode, 0)); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 7 23:18:41 2013 From: python-checkins at python.org (victor.stinner) Date: Thu, 7 Feb 2013 23:18:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_=28Merge_3=2E3=29_Issue_=2317137=3A_When_an_Unicode_stri?= =?utf-8?q?ng_is_resized=2C_the_internal_wide?= Message-ID: <3Z2DSd2KQczScx@mail.python.org> http://hg.python.org/cpython/rev/c10a3ddba483 changeset: 82059:c10a3ddba483 parent: 82057:b8a6bc70fc08 parent: 82058:3b316ea5aa82 user: Victor Stinner date: Thu Feb 07 23:17:34 2013 +0100 summary: (Merge 3.3) Issue #17137: When an Unicode string is resized, the internal wide character string (wstr) format is now cleared. 
files: Lib/test/test_unicode.py | 15 +++++++++++++++ Misc/NEWS | 3 +++ Objects/unicodeobject.c | 4 ++++ 3 files changed, 22 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py --- a/Lib/test/test_unicode.py +++ b/Lib/test/test_unicode.py @@ -2191,6 +2191,21 @@ self.assertEqual(args[0], text) self.assertEqual(len(args), 1) + def test_resize(self): + for length in range(1, 100, 7): + # generate a fresh string (refcount=1) + text = 'a' * length + 'b' + + # fill wstr internal field + abc = text.encode('unicode_internal') + self.assertEqual(abc.decode('unicode_internal'), text) + + # resize text: wstr field must be cleared and then recomputed + text += 'c' + abcdef = text.encode('unicode_internal') + self.assertNotEqual(abc, abcdef) + self.assertEqual(abcdef.decode('unicode_internal'), text) + class StringModuleTest(unittest.TestCase): def test_formatter_parser(self): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17137: When an Unicode string is resized, the internal wide character + string (wstr) format is now cleared. + - Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. 
diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -717,6 +717,10 @@ if (!PyUnicode_IS_ASCII(unicode)) _PyUnicode_WSTR_LENGTH(unicode) = length; } + else if (_PyUnicode_HAS_WSTR_MEMORY(unicode)) { + PyObject_DEL(_PyUnicode_WSTR(unicode)); + _PyUnicode_WSTR(unicode) = NULL; + } #ifdef Py_DEBUG unicode_fill_invalid(unicode, old_length); #endif -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 01:17:01 2013 From: python-checkins at python.org (frank.wierzbicki) Date: Fri, 8 Feb 2013 01:17:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_Add_self_to_experts_for_J?= =?utf-8?q?ava/JVM=2E?= Message-ID: <3Z2H596B43zPXn@mail.python.org> http://hg.python.org/devguide/rev/75d1706be67c changeset: 596:75d1706be67c user: Frank Wierzbicki date: Thu Feb 07 16:16:41 2013 -0800 summary: Add self to experts for Java/JVM. files: experts.rst | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/experts.rst b/experts.rst --- a/experts.rst +++ b/experts.rst @@ -285,6 +285,7 @@ OS2/EMX aimacintyre Solaris/OpenIndiana jcea Windows tim.golden, brian.curtin +JVM/Java frank.wierzbicki =================== =========== -- Repository URL: http://hg.python.org/devguide From solipsis at pitrou.net Fri Feb 8 06:04:50 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Fri, 08 Feb 2013 06:04:50 +0100 Subject: [Python-checkins] Daily reference leaks (c10a3ddba483): sum=1 Message-ID: results for c10a3ddba483 on branch "default" -------------------------------------------- test_concurrent_futures leaked [0, -2, 3] memory blocks, sum=1 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogsIQ_Ms', '-x'] From python-checkins at python.org Fri Feb 8 06:43:51 2013 From: python-checkins at python.org (senthil.kumaran) Date: Fri, 8 Feb 2013 06:43:51 +0100 (CET) Subject: [Python-checkins] 
=?utf-8?q?cpython_=283=2E2=29=3A_Addressing_the?= =?utf-8?q?_review_comment_made_by_Terry_Reedy?= Message-ID: <3Z2QLH3XtSzSdq@mail.python.org> http://hg.python.org/cpython/rev/28229bdb1571 changeset: 82060:28229bdb1571 branch: 3.2 parent: 82055:55a89352e220 user: Senthil Kumaran date: Thu Feb 07 21:43:21 2013 -0800 summary: Addressing the review comment made by Terry Reedy files: Doc/library/urllib.request.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -61,9 +61,9 @@ :class:`http.client.HTTPResponse` object which has the following :ref:`httpresponse-objects` methods. - For ftp, file, data urls and requests are explicity handled by legacy - :class:`URLopener` and :class:`FancyURLopener` class, this function returns - an :class:`urllib.response.addinfourl` object which can work as + For ftp, file, and data urls and requests explicity handled by legacy + :class:`URLopener` and :class:`FancyURLopener` classes, this function + returns a :class:`urllib.response.addinfourl` object which can work as :term:`context manager` and has methods such as * :meth:`~urllib.response.addinfourl.geturl` --- return the URL of the resource retrieved, -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 06:43:52 2013 From: python-checkins at python.org (senthil.kumaran) Date: Fri, 8 Feb 2013 06:43:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Addressing_the_review_comment_made_by_Terry_Reedy?= Message-ID: <3Z2QLJ6PbZzSdw@mail.python.org> http://hg.python.org/cpython/rev/3942c20bebdb changeset: 82061:3942c20bebdb branch: 3.3 parent: 82058:3b316ea5aa82 parent: 82060:28229bdb1571 user: Senthil Kumaran date: Thu Feb 07 21:44:42 2013 -0800 summary: Addressing the review comment made by Terry Reedy files: 
Doc/library/urllib.request.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -67,9 +67,9 @@ :class:`http.client.HTTPResponse` object which has the following :ref:`httpresponse-objects` methods. - For ftp, file, data urls and requests are explicity handled by legacy - :class:`URLopener` and :class:`FancyURLopener` class, this function returns - an :class:`urllib.response.addinfourl` object which can work as + For ftp, file, and data urls and requests explicity handled by legacy + :class:`URLopener` and :class:`FancyURLopener` classes, this function + returns a :class:`urllib.response.addinfourl` object which can work as :term:`context manager` and has methods such as * :meth:`~urllib.response.addinfourl.geturl` --- return the URL of the resource retrieved, -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 06:43:54 2013 From: python-checkins at python.org (senthil.kumaran) Date: Fri, 8 Feb 2013 06:43:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Addressing_the_review_comment_made_by_Terry_Reedy?= Message-ID: <3Z2QLL1x28zSdq@mail.python.org> http://hg.python.org/cpython/rev/771a0317da83 changeset: 82062:771a0317da83 parent: 82059:c10a3ddba483 parent: 82061:3942c20bebdb user: Senthil Kumaran date: Thu Feb 07 21:45:08 2013 -0800 summary: Addressing the review comment made by Terry Reedy files: Doc/library/urllib.request.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -67,9 +67,9 @@ :class:`http.client.HTTPResponse` object which has the following :ref:`httpresponse-objects` methods. 
- For ftp, file, data urls and requests are explicity handled by legacy - :class:`URLopener` and :class:`FancyURLopener` class, this function returns - an :class:`urllib.response.addinfourl` object which can work as + For ftp, file, and data urls and requests explicity handled by legacy + :class:`URLopener` and :class:`FancyURLopener` classes, this function + returns a :class:`urllib.response.addinfourl` object which can work as :term:`context manager` and has methods such as * :meth:`~urllib.response.addinfourl.geturl` --- return the URL of the resource retrieved, -- Repository URL: http://hg.python.org/cpython From senthil at uthcode.com Fri Feb 8 06:46:40 2013 From: senthil at uthcode.com (Senthil Kumaran) Date: Thu, 7 Feb 2013 21:46:40 -0800 Subject: [Python-checkins] cpython (3.2): Fix Issue17069: Document getcode method in urllib.request.rst In-Reply-To: <5113F3F4.7000200@udel.edu> References: <3Z1tWQ1FnPzMRY@mail.python.org> <5113F3F4.7000200@udel.edu> Message-ID: On Thu, Feb 7, 2013 at 10:35 AM, Terry Reedy wrote: > 3 suggested changes: Thanks for the review comments. I have made the changes. > 1. On 2/7/2013 3:49 AM, senthil.kumaran wrote: > The first part up to "class," is not a proper English clause. Perhaps you > meant ('that' added): Yes, I had intended to. Looks like I missed it. Ended up using your third suggestion. > Since 'url' is generally pronounced 'you-are-el' rather than like 'earl' (I > checked Merriam-Webster online), that should be 'a' rather than 'an' url... > . > https://owl.english.purdue.edu/owl/resource/591/01/ This is important to know. Thanks for sharing. 
-- Senthil From python-checkins at python.org Fri Feb 8 07:18:35 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 8 Feb 2013 07:18:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzY5NzI6?= =?utf-8?q?_fix_the_documentation_mis_applied_patch=2E?= Message-ID: <3Z2R6M4QS4zSgH@mail.python.org> http://hg.python.org/cpython/rev/d73fb6b06891 changeset: 82063:d73fb6b06891 branch: 2.7 parent: 82054:649937bb8f1c user: Gregory P. Smith date: Thu Feb 07 22:11:03 2013 -0800 summary: Issue #6972: fix the documentation mis applied patch. files: Doc/library/zipfile.rst | 23 +++++++++++++---------- 1 files changed, 13 insertions(+), 10 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -213,6 +213,16 @@ .. versionadded:: 2.6 + .. note:: + + If a member filename is an absolute path, a drive/UNC sharepoint and + leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes + ``foo/bar`` on Unix, and ``C:\foo\bar`` becomes ``foo\bar`` on Windows. + And all ``".."`` components in a member filename will be removed, e.g.: + ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal + characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) + replaced by underscore (``_``). + .. method:: ZipFile.extractall([path[, members[, pwd]]]) @@ -227,6 +237,9 @@ It is possible that files are created outside of *path*, e.g. members that have absolute filenames starting with ``"/"`` or filenames with two dots ``".."``. + + .. versionchanged:: 2.7.4 + The zipfile module attempts to prevent that. See :meth:`extract` note. .. versionadded:: 2.6 @@ -242,16 +255,6 @@ .. versionadded:: 2.6 - .. note:: - - If a member filename is an absolute path, a drive/UNC sharepoint and - leading (back)slashes will be stripped, e.g.: ``///foo/bar`` becomes - ``foo/bar`` on Unix, and ``C:\foo\bar`` becomes ``foo\bar`` on Windows. 
- And all ``".."`` components in a member filename will be removed, e.g.: - ``../../foo../../ba..r`` becomes ``foo../ba..r``. On Windows illegal - characters (``:``, ``<``, ``>``, ``|``, ``"``, ``?``, and ``*``) - replaced by underscore (``_``). - .. method:: ZipFile.read(name[, pwd]) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 07:18:36 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 8 Feb 2013 07:18:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzY5NzI6?= =?utf-8?q?_keep_the_warning_about_untrusted_extraction_and_mention?= Message-ID: <3Z2R6N734jzRRP@mail.python.org> http://hg.python.org/cpython/rev/1c2d41850147 changeset: 82064:1c2d41850147 branch: 3.2 parent: 82060:28229bdb1571 user: Gregory P. Smith date: Thu Feb 07 22:15:04 2013 -0800 summary: Issue #6972: keep the warning about untrusted extraction and mention the version it was improved in. files: Doc/library/zipfile.rst | 10 ++++++++-- 1 files changed, 8 insertions(+), 2 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -232,9 +232,15 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. note:: + .. warning:: - See :meth:`extract` note. + Never extract archives from untrusted sources without prior inspection. + It is possible that files are created outside of *path*, e.g. members + that have absolute filenames starting with ``"/"`` or filenames with two + dots ``".."``. + + .. versionchanged:: 3.2.4 + The zipfile module attempts to prevent that. See :meth:`extract` note. .. 
method:: ZipFile.printdir() -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 07:18:38 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 8 Feb 2013 07:18:38 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=236972=3A_keep_the_warning_about_untrusted_extraction_a?= =?utf-8?q?nd_mention?= Message-ID: <3Z2R6Q2SK9zSgY@mail.python.org> http://hg.python.org/cpython/rev/5fbca37de9b1 changeset: 82065:5fbca37de9b1 branch: 3.3 parent: 82061:3942c20bebdb parent: 82064:1c2d41850147 user: Gregory P. Smith date: Thu Feb 07 22:15:51 2013 -0800 summary: Issue #6972: keep the warning about untrusted extraction and mention the version it was improved in. files: Doc/library/zipfile.rst | 10 ++++++++-- 1 files changed, 8 insertions(+), 2 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -260,9 +260,15 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. note:: + .. warning:: - See :meth:`extract` note. + Never extract archives from untrusted sources without prior inspection. + It is possible that files are created outside of *path*, e.g. members + that have absolute filenames starting with ``"/"`` or filenames with two + dots ``".."``. + + .. versionchanged:: 3.3.1 + The zipfile module attempts to prevent that. See :meth:`extract` note. .. 
method:: ZipFile.printdir() -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 07:18:39 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 8 Feb 2013 07:18:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=236972=3A_keep_the_warning_about_untrusted_extrac?= =?utf-8?q?tion_and_mention?= Message-ID: <3Z2R6R52mLzSfR@mail.python.org> http://hg.python.org/cpython/rev/f5e3f2f0fe79 changeset: 82066:f5e3f2f0fe79 parent: 82062:771a0317da83 parent: 82065:5fbca37de9b1 user: Gregory P. Smith date: Thu Feb 07 22:17:21 2013 -0800 summary: Issue #6972: keep the warning about untrusted extraction and mention the version it was improved in. files: Doc/library/zipfile.rst | 6 +++++- 1 files changed, 5 insertions(+), 1 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -260,8 +260,12 @@ be a subset of the list returned by :meth:`namelist`. *pwd* is the password used for encrypted files. - .. note:: + .. warning:: + Never extract archives from untrusted sources without prior inspection. + It is possible that files are created outside of *path*, e.g. members + that have absolute filenames starting with ``"/"`` or filenames with two + dots ``".."``. This module attempts to prevent that. See :meth:`extract` note. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 07:18:41 2013 From: python-checkins at python.org (gregory.p.smith) Date: Fri, 8 Feb 2013 07:18:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_whitespace_fix?= Message-ID: <3Z2R6T1R3kzSgT@mail.python.org> http://hg.python.org/cpython/rev/b370cf4d8cc1 changeset: 82067:b370cf4d8cc1 branch: 2.7 parent: 82063:d73fb6b06891 user: Gregory P. 
Smith date: Thu Feb 07 22:18:21 2013 -0800 summary: whitespace fix files: Doc/library/zipfile.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -237,7 +237,7 @@ It is possible that files are created outside of *path*, e.g. members that have absolute filenames starting with ``"/"`` or filenames with two dots ``".."``. - + .. versionchanged:: 2.7.4 The zipfile module attempts to prevent that. See :meth:`extract` note. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 10:28:37 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 8 Feb 2013 10:28:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_accidental?= =?utf-8?q?_non-breakable_space_=28U+00A0=29=2E?= Message-ID: <3Z2WKd0mnJzRX8@mail.python.org> http://hg.python.org/cpython/rev/532c59c52cb3 changeset: 82068:532c59c52cb3 branch: 2.7 user: Serhiy Storchaka date: Fri Feb 08 11:21:32 2013 +0200 summary: Fix accidental non-breakable space (U+00A0). files: Misc/NEWS | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -239,7 +239,7 @@ - Issue #15505: `unittest.installHandler` no longer assumes SIGINT handler is set to a callable object. -- Issue #17051: Fix a memory leak in os.path.isdir()?on Windows. Patch by +- Issue #17051: Fix a memory leak in os.path.isdir() on Windows. Patch by Robert Xiao. 
- Issue #9290: In IDLE the sys.std* streams now implement io.TextIOBase -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 10:28:38 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 8 Feb 2013 10:28:38 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Fix_accidental?= =?utf-8?q?_non-breakable_spaces_=28U+00A0=29=2E?= Message-ID: <3Z2WKf3V4gzRc1@mail.python.org> http://hg.python.org/cpython/rev/83bf8191e4d1 changeset: 82069:83bf8191e4d1 branch: 3.2 parent: 82064:1c2d41850147 user: Serhiy Storchaka date: Fri Feb 08 11:22:05 2013 +0200 summary: Fix accidental non-breakable spaces (U+00A0). files: Lib/test/test_pyexpat.py | 2 +- Misc/NEWS | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_pyexpat.py b/Lib/test/test_pyexpat.py --- a/Lib/test/test_pyexpat.py +++ b/Lib/test/test_pyexpat.py @@ -600,7 +600,7 @@ self.assertEqual(str(e), 'unclosed token: line 2, column 0') def test2(self): - #?\xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) + # \xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) xml = b"\r\n" parser = expat.ParserCreate() try: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -217,7 +217,7 @@ - Issue #17073: Fix some integer overflows in sqlite3 module. -- Issue #17114: IDLE?now uses non-strict config parser. +- Issue #17114: IDLE now uses non-strict config parser. - Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. 
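The stray NO-BREAK SPACE characters fixed in the commits above are hard to spot by eye because they render like ordinary spaces. A small hypothetical checker (`find_nbsp` is an illustrative name, not a stdlib function) can locate them:

```python
def find_nbsp(text):
    """Return (line_number, column) pairs for each U+00A0 in *text*."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for col, ch in enumerate(line):
            if ch == "\u00a0":
                hits.append((lineno, col))
    return hits

# The NEWS entry fixed above, with the accidental U+00A0 reinstated:
sample = "Fix a memory leak in os.path.isdir()\u00a0on Windows.\n"
print(find_nbsp(sample))  # -> [(1, 36)]
```

Running such a check over `Doc/`, `Lib/` and `Misc/NEWS` would have caught all of the occurrences these commits fix in one pass.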
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 10:28:39 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 8 Feb 2013 10:28:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fix_accidental_non-breakable_spaces_=28U+00A0=29=2E?= Message-ID: <3Z2WKg6Z4tzSbw@mail.python.org> http://hg.python.org/cpython/rev/ae92c8759c43 changeset: 82070:ae92c8759c43 branch: 3.3 parent: 82065:5fbca37de9b1 parent: 82069:83bf8191e4d1 user: Serhiy Storchaka date: Fri Feb 08 11:24:16 2013 +0200 summary: Fix accidental non-breakable spaces (U+00A0). files: Lib/test/test_lzma.py | 2 +- Lib/test/test_pyexpat.py | 2 +- Misc/NEWS | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/test/test_lzma.py b/Lib/test/test_lzma.py --- a/Lib/test/test_lzma.py +++ b/Lib/test/test_lzma.py @@ -671,7 +671,7 @@ def test_read_truncated(self): # Drop stream footer: CRC (4 bytes), index size (4 bytes), - # flags?(2 bytes) and magic number (2 bytes). + # flags (2 bytes) and magic number (2 bytes). truncated = COMPRESSED_XZ[:-12] with LZMAFile(BytesIO(truncated)) as f: self.assertRaises(EOFError, f.read) diff --git a/Lib/test/test_pyexpat.py b/Lib/test/test_pyexpat.py --- a/Lib/test/test_pyexpat.py +++ b/Lib/test/test_pyexpat.py @@ -600,7 +600,7 @@ self.assertEqual(str(e), 'unclosed token: line 2, column 0') def test2(self): - #?\xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) + # \xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) xml = b"\r\n" parser = expat.ParserCreate() try: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -171,7 +171,7 @@ - Issue #17073: Fix some integer overflows in sqlite3 module. -- Issue #17114: IDLE?now uses non-strict config parser. +- Issue #17114: IDLE now uses non-strict config parser. - Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 10:28:41 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Fri, 8 Feb 2013 10:28:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Fix_accidental_non-breakable_spaces_=28U+00A0=29=2E?= Message-ID: <3Z2WKj2Ck8zSbq@mail.python.org> http://hg.python.org/cpython/rev/5c4b00581198 changeset: 82071:5c4b00581198 parent: 82066:f5e3f2f0fe79 parent: 82070:ae92c8759c43 user: Serhiy Storchaka date: Fri Feb 08 11:24:55 2013 +0200 summary: Fix accidental non-breakable spaces (U+00A0). files: Lib/test/test_lzma.py | 2 +- Lib/test/test_pyexpat.py | 2 +- Misc/NEWS | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/test/test_lzma.py b/Lib/test/test_lzma.py --- a/Lib/test/test_lzma.py +++ b/Lib/test/test_lzma.py @@ -671,7 +671,7 @@ def test_read_truncated(self): # Drop stream footer: CRC (4 bytes), index size (4 bytes), - # flags?(2 bytes) and magic number (2 bytes). + # flags (2 bytes) and magic number (2 bytes). truncated = COMPRESSED_XZ[:-12] with LZMAFile(BytesIO(truncated)) as f: self.assertRaises(EOFError, f.read) diff --git a/Lib/test/test_pyexpat.py b/Lib/test/test_pyexpat.py --- a/Lib/test/test_pyexpat.py +++ b/Lib/test/test_pyexpat.py @@ -600,7 +600,7 @@ self.assertEqual(str(e), 'unclosed token: line 2, column 0') def test2(self): - #?\xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) + # \xc2\x85 is UTF-8 encoded U+0085 (NEXT LINE) xml = b"\r\n" parser = expat.ParserCreate() try: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -243,7 +243,7 @@ - Issue #17073: Fix some integer overflows in sqlite3 module. -- Issue #17114: IDLE?now uses non-strict config parser. +- Issue #17114: IDLE now uses non-strict config parser. - Issue #16723: httplib.HTTPResponse no longer marked closed when the connection is automatically closed. 
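The sanitizing behaviour described in the zipfile commits earlier in this digest can be observed directly: a member named `../evil.txt` must not escape the extraction directory. A sketch against a post-fix `zipfile` module (member and file names are illustrative):

```python
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "a.zip")
    # Build an archive whose member name tries to climb out of the
    # extraction directory with a ".." component.
    with zipfile.ZipFile(archive, "w") as zf:
        zf.writestr("../evil.txt", b"payload")

    target = os.path.join(tmp, "out")
    with zipfile.ZipFile(archive) as zf:
        extracted = zf.extract("../evil.txt", path=target)

    # The ".." component was removed, so the file stays under *target*.
    assert os.path.realpath(extracted).startswith(os.path.realpath(target))
    print(os.path.relpath(extracted, target))  # -> evil.txt
```

This is exactly why the warning kept by these commits still matters on older releases: before 2.7.4 / 3.2.4 / 3.3.1 the `..` component was honoured and the file could be created outside *path*.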
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 8 17:56:20 2013 From: python-checkins at python.org (daniel.holth) Date: Fri, 8 Feb 2013 17:56:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_wheel_rationales?= Message-ID: <3Z2jGD2R3SzRpt@mail.python.org> http://hg.python.org/peps/rev/fb66173832fb changeset: 4719:fb66173832fb user: Daniel Holth date: Fri Feb 08 11:56:08 2013 -0500 summary: wheel rationales files: pep-0427.txt | 15 ++++++++++++++- 1 files changed, 14 insertions(+), 1 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -235,7 +235,7 @@ ------------------ Wheel files include an extended RECORD that enables digital -signatures. PEP 376's RECORD is altered to include +signatures. PEP 376's RECORD is altered to include a secure hash ``digestname=urlsafe_b64encode_nopad(digest)`` (urlsafe base64 encoding with no trailing = characters) as the second column instead of an md5sum. All possible entries are hashed, including any @@ -316,6 +316,19 @@ the signature, or individual files can be verified without having to download the whole archive. +Why does wheel allow JWS signatures? + The JOSE specifications including JWS are designed to be easy to + implement, a feature that is also one of wheel's primary design goals. + +Why does wheel also allow S/MIME signatures? + S/MIME signatures are allowed for users who need or want to use an + existing public key infrastructure with wheel. + + Signed packages are only a basic building block in a secured package + update system. Wheel only provides the building block. A complete + system would provide for key distribution and trust and would specify + which signature format was required. 
+ Appendix ======== -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 04:32:43 2013 From: python-checkins at python.org (daniel.holth) Date: Sat, 9 Feb 2013 04:32:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP-427_=28wheel=29_edits?= Message-ID: <3Z2zNW4DZ5zPpg@mail.python.org> http://hg.python.org/peps/rev/d9d42d5c1b41 changeset: 4720:d9d42d5c1b41 user: Daniel Holth date: Fri Feb 08 22:32:34 2013 -0500 summary: PEP-427 (wheel) edits files: pep-0427.txt | 69 ++++++++++++++++++++------------------- 1 files changed, 36 insertions(+), 33 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -17,13 +17,13 @@ This PEP describes a built-package format for Python called "wheel". -A wheel is a ZIP-format archive with a specially formatted file name -and the ``.whl`` extension. It contains a single distribution nearly -as it would be installed according to PEP 376 with a particular -installation scheme. A wheel file may be installed by simply -unpacking into site-packages with the standard 'unzip' tool, while -preserving enough information to spread its contents out onto their -final paths at any later time. +A wheel is a ZIP-format archive with a specially formatted file name and +the ``.whl`` extension. It contains a single distribution nearly as it +would be installed according to PEP 376 with a particular installation +scheme. Although a specialized installer is recommended, a wheel file +may be installed by simply unpacking into site-packages with the standard +'unzip' tool while preserving enough information to spread its contents +out onto their final paths at any later time. Note @@ -147,7 +147,7 @@ File contents ''''''''''''' -The conents of a wheel file, where {distribution} is replaced with the +The contents of a wheel file, where {distribution} is replaced with the name of the package, e.g. ``beaglevote`` and {version} is replaced with its version, e.g. 
``1.0.0``, consist of: @@ -163,8 +163,8 @@ ``b'#!python'`` in order to enjoy script wrapper generation and ``#!python`` rewriting at install time. They may have any or no extension. -#. ``{distribution}-{version}.dist-info/METADATA`` is Metadata version 1.3 - (PEP 426) or greater format metadata. +#. ``{distribution}-{version}.dist-info/METADATA`` is Metadata version 1.2 + (PEP 345) or greater format metadata. #. ``{distribution}-{version}.dist-info/WHEEL`` is metadata about the archive itself:: @@ -195,20 +195,21 @@ #. Wheel .dist-info directories include at a minimum METADATA, WHEEL, and RECORD. -#. METADATA is the PEP 426 metadata (Metadata version 1.3 or greater) -#. WHEEL is the wheel metadata, specific to a build of the package. +#. METADATA is the package metadata, the same format as PKG-INFO as + found at the root of sdists. +#. WHEEL is the wheel metadata specific to a build of the package. #. RECORD is a list of (almost) all the files in the wheel and their secure hashes. Unlike PEP 376, every file except RECORD, which cannot contain a hash of itself, must include its hash. The hash algorithm must be sha256 or better; specifically, md5 and sha1 are not permitted, as signed wheel files rely on the strong hashes in RECORD to validate the integrity of the archive. -#. INSTALLER and REQUESTED are not included in the archive. +#. PEP 376's INSTALLER and REQUESTED are not included in the archive. #. RECORD.jws is used for digital signatures. It is not mentioned in RECORD. #. RECORD.p7s is allowed as a courtesy to anyone who would prefer to - use s/mime signatures to secure their wheel files. It is not - mentioned in RECORD and it is ignored by the official tools. + use S/MIME signatures to secure their wheel files. It is not + mentioned in RECORD. #. During extraction, wheel installers verify all the hashes in RECORD against the file contents. 
Apart from RECORD and its signatures, installation will fail if any file in the archive is not both @@ -239,29 +240,31 @@ ``digestname=urlsafe_b64encode_nopad(digest)`` (urlsafe base64 encoding with no trailing = characters) as the second column instead of an md5sum. All possible entries are hashed, including any -generated files such as .pyc files, but not RECORD. For example:: +generated files such as .pyc files, but not RECORD which cannot contain its +own hash. For example:: file.py,sha256=AVTFPZpEKzuHr7OvQZmhaU3LvwKz06AJw8mT\_pNh2yI,3144 distribution-1.0.dist-info/RECORD,, The signature file(s) RECORD.jws and RECORD.p7s are not mentioned in RECORD at all since they can only be added after RECORD is generated. -Every other file in the archive must have a correct hash in RECORD, +Every other file in the archive must have a correct hash in RECORD or the installation will fail. If JSON web signatures are used, one or more JSON Web Signature JSON -Serialization (JWS-JS) signatures may be stored in a file RECORD.jws -adjacent to RECORD. JWS is used to sign RECORD by including the SHA-256 -hash of RECORD as the JWS payload:: +Serialization (JWS-JS) signatures is stored in a file RECORD.jws adjacent +to RECORD. JWS is used to sign RECORD by including the SHA-256 hash of +RECORD as the signature's JSON payload:: { "hash": "sha256=ADD-r2urObZHcxBW3Cr-vDCu5RJwT4CaRTHiFmbcIYY" } -If RECORD.p7s is used, it must contain a PKCS#7 format signature of -RECORD. +If RECORD.p7s is used, it must contain a detached S/MIME format signature +of RECORD. -A wheel installer may assume that the signature has already been checked -against RECORD, and only must verify the hashes in RECORD against the -extracted file contents. +A wheel installer is not required to understand digital signatures but +MUST verify the hashes in RECORD against the extracted file contents. 
+When the installer checks file hashes against RECORD, a separate signature +checker only needs to establish that RECORD matches the signature. See @@ -313,21 +316,21 @@ Attached signatures are more convenient than detached signatures because they travel with the archive. Since only the individual files are signed, the archive can be recompressed without invalidating - the signature, or individual files can be verified without having + the signature or individual files can be verified without having to download the whole archive. Why does wheel allow JWS signatures? - The JOSE specifications including JWS are designed to be easy to - implement, a feature that is also one of wheel's primary design goals. + The JOSE specifications of which JWS is a part are designed to be easy + to implement, a feature that is also one of wheel's primary design + goals. JWS yields a useful, concise pure-Python implementation. Why does wheel also allow S/MIME signatures? - S/MIME signatures are allowed for users who need or want to use an + S/MIME signatures are allowed for users who need or want to use existing public key infrastructure with wheel. - Signed packages are only a basic building block in a secured package - update system. Wheel only provides the building block. A complete - system would provide for key distribution and trust and would specify - which signature format was required. + Signed packages are only a basic building block in a secure package + update system and many kinds of attacks are possible even when + packages are signed. Wheel only provides the building block. 
Appendix ======== -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 05:12:20 2013 From: python-checkins at python.org (daniel.holth) Date: Sat, 9 Feb 2013 05:12:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP-0426_=28Metadata_1=2E3=29?= =?utf-8?q?_edits?= Message-ID: <3Z30GD5LQhzRQn@mail.python.org> http://hg.python.org/peps/rev/25daa0625d1d changeset: 4721:25daa0625d1d user: Daniel Holth date: Fri Feb 08 23:12:01 2013 -0500 summary: PEP-0426 (Metadata 1.3) edits files: pep-0426.txt | 65 ++++++++++++++++++++++++--------------- 1 files changed, 40 insertions(+), 25 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -553,11 +553,10 @@ N.N[.N]+[{a|b|c|rc}N][.postN][.devN] -Version numbers which do not comply with this scheme are an error. Projects -which wish to use non-compliant version numbers must restrict themselves -to metadata v1.1 (PEP 314) or earlier, as those versions do not mandate -a particular versioning scheme. - +Version numbers which do not comply with this scheme are an +error. Projects which wish to use non-compliant version numbers may +be heuristically normalized to this scheme and are less likely to sort +correctly. Suffixes and ordering --------------------- @@ -610,29 +609,44 @@ 1.0.post456 1.1.dev1 +Recommended subset +------------------ + +The PEP authors recommend using a subset of the allowed version scheme, +similar to http://semver.org/ but without hyphenated versions. + +* Version numbers are always three positive digits ``X.Y.Z`` (Major.Minor.Patch) +* The patch version is incremented for backwards-compatible bug fixes. +* The minor version is incremented for backwards-compatible API additions. + When the minor version is incremented the patch version resets to 0. +* The major version is incremented for backwards-incompatible API changes. + When the major version is incremented the minor and patch versions + reset to 0. 
+* Pre-release versions ending in ``a``, ``b``, and ``c`` may be used. +* Dev- and post-release versions are discouraged. Increment the patch number + instead of issuing a post-release. + +When the major version is 0, the API is not considered stable, may change at +any time, and the rules about when to increment the minor and patch version +numbers are relaxed. Ordering across different metadata versions ------------------------------------------- -After making a release with a given metadata version, it is assumed that -projects will not revert to an older metadata version. - -Accordingly, and to allow projects with non-compliant version schemes -to more easily migrate to the version scheme defined in this PEP, -releases should be sorted by their declared metadata version *before* -being sorted by the distribution version. - -Software that processes distribution metadata should account for the fact -that metadata v1.0 (PEP 241) and metadata v1.1 (PEP 314) do not +Metadata v1.0 (PEP 241) and metadata v1.1 (PEP 314) do not specify a standard version numbering or sorting scheme. This PEP does not mandate any particular approach to handling such versions, but acknowledges that the de facto standard for sorting such versions is the scheme used by the ``pkg_resources`` component of ``setuptools``. For metadata v1.2 (PEP 345), the recommended sort order is defined in -PEP 386 (It is expected that projects where the defined PEP 386 sort -order is incorrect will skip straight from metadata v1.1 to metadata v1.3). +PEP 386. +The best way for a publisher to get predictable ordering is to excuse +non-compliant versions from sorting by hiding them on PyPI or by removing +them from any private index that is being used. Otherwise a client +may be restricted to using exact versions to get the correct or latest +version of your project. Version specifiers ================== @@ -647,17 +661,18 @@ `Version scheme`_. 
Comparison operators must be one of ``<``, ``>``, ``<=``, ``>=``, ``==`` -and ``!=``. +``!=``, and ``~>``. -When no comparison operator is provided, it is equivalent to using the -following pair of version clauses:: +When no comparison operator is provided, it is equivalent to using ``==``. - >= V, < V+1 +The ``~>`` operator, "equal or greater in the last digit" is equivalent +to a pair of version clauses:: -where ``V+1`` is the "next version" after ``V``, as determined by -incrementing the last numeric component in ``V`` (for example, if -``V == 1.0a3``, then ``V+1 == 1.0a4``, while if ``V == 1.0``, then -``V+1 == 1.1``). + ~> 2.3.3 + +is equivalent to:: + + >= 2.3.3, < 2.4.0 The comma (",") is equivalent to a logical **and** operator. -- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Sat Feb 9 06:04:27 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Sat, 09 Feb 2013 06:04:27 +0100 Subject: [Python-checkins] Daily reference leaks (5c4b00581198): sum=0 Message-ID: results for 5c4b00581198 on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogsLT3ad', '-x'] From python-checkins at python.org Sat Feb 9 08:05:57 2013 From: python-checkins at python.org (ned.deily) Date: Sat, 9 Feb 2013 08:05:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MTYx?= =?utf-8?q?=3A_make_install_now_also_installs_a_python2_and_python_man_pag?= =?utf-8?q?e=2E?= Message-ID: <3Z346Y5f39zNBG@mail.python.org> http://hg.python.org/cpython/rev/29826cb3f12e changeset: 82072:29826cb3f12e branch: 2.7 parent: 82068:532c59c52cb3 user: Ned Deily date: Fri Feb 08 22:51:52 2013 -0800 summary: Issue #17161: make install now also installs a python2 and python man page. 
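The proposed `~>` ("equal or greater in the last digit") operator from the PEP 426 edits above expands mechanically to a pair of clauses. A hypothetical helper illustrating just that expansion rule for plain numeric versions (it makes no attempt to handle pre-release suffixes):

```python
def expand_compatible(version: str) -> str:
    """Expand '~> X.Y.Z' to the equivalent pair '>= X.Y.Z, < X.(Y+1).0'."""
    parts = [int(p) for p in version.split(".")]
    # Bump the next-to-last component and zero the last one.
    upper = parts[:-2] + [parts[-2] + 1, 0]
    return ">= {}, < {}".format(version, ".".join(str(p) for p in upper))

print(expand_compatible("2.3.3"))  # -> >= 2.3.3, < 2.4.0
```

This is the worked example given in the diff: `~> 2.3.3` is equivalent to `>= 2.3.3, < 2.4.0`.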
files: Makefile.pre.in | 16 ++++++++++++---- Misc/NEWS | 2 ++ 2 files changed, 14 insertions(+), 4 deletions(-) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -806,7 +806,8 @@ install: @FRAMEWORKINSTALLFIRST@ altinstall bininstall maninstall @FRAMEWORKINSTALLLAST@ # Install almost everything without disturbing previous versions -altinstall: @FRAMEWORKALTINSTALLFIRST@ altbininstall libinstall inclinstall libainstall \ +altinstall: @FRAMEWORKALTINSTALLFIRST@ altbininstall libinstall inclinstall \ + libainstall altmaninstall \ sharedinstall oldsharedinstall @FRAMEWORKALTINSTALLLAST@ # Install shared libraries enabled by Setup @@ -876,8 +877,8 @@ else true; \ fi -# Install the manual page -maninstall: +# Install the versioned manual page +altmaninstall: @for i in $(MANDIR) $(MANDIR)/man1; \ do \ if test ! -d $(DESTDIR)$$i; then \ @@ -889,6 +890,13 @@ $(INSTALL_DATA) $(srcdir)/Misc/python.man \ $(DESTDIR)$(MANDIR)/man1/python$(VERSION).1 +# Install the unversioned manual pages +maninstall: altmaninstall + -rm -f $(DESTDIR)$(MANDIR)/man1/python2.1 + (cd $(DESTDIR)$(MANDIR)/man1; $(LN) -s python$(VERSION).1 python2.1) + -rm -f $(DESTDIR)$(MANDIR)/man1/python.1 + (cd $(DESTDIR)$(MANDIR)/man1; $(LN) -s python2.1 python.1) + # Install the library PLATDIR= plat-$(MACHDEP) EXTRAPLATDIR= @EXTRAPLATDIR@ @@ -1326,7 +1334,7 @@ .PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure .PHONY: frameworkinstallmaclib frameworkinstallapps frameworkinstallunixtools .PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean -.PHONY: smelly funny patchcheck +.PHONY: smelly funny patchcheck altmaninstall .PHONY: gdbhooks # IF YOU PUT ANYTHING HERE IT WILL GO AWAY diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -830,6 +830,8 @@ Retina displays. Applies to Tkinter apps, such as IDLE, on OS X framework builds linked with Cocoa Tk 8.5. 
+- Issue #17161: make install now also installs a python2 and python man page. + Tools/Demos ----------- -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 08:05:59 2013 From: python-checkins at python.org (ned.deily) Date: Sat, 9 Feb 2013 08:05:59 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MTYx?= =?utf-8?q?=3A_make_install_now_also_installs_a_python3_man_page=2E?= Message-ID: <3Z346b1KpPzPSh@mail.python.org> http://hg.python.org/cpython/rev/b0d9b273c029 changeset: 82073:b0d9b273c029 branch: 3.2 parent: 82069:83bf8191e4d1 user: Ned Deily date: Fri Feb 08 22:53:51 2013 -0800 summary: Issue #17161: make install now also installs a python3 man page. files: Makefile.pre.in | 15 ++++++++++----- Misc/NEWS | 2 ++ 2 files changed, 12 insertions(+), 5 deletions(-) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -830,10 +830,10 @@ -$(TESTPYTHON) $(TESTPROG) $(MEMTESTOPTS) $(TESTPYTHON) $(TESTPROG) $(MEMTESTOPTS) -install: altinstall bininstall +install: altinstall bininstall maninstall altinstall: @FRAMEWORKALTINSTALLFIRST@ altbininstall libinstall inclinstall libainstall \ - sharedinstall oldsharedinstall maninstall @FRAMEWORKALTINSTALLLAST@ + sharedinstall oldsharedinstall altmaninstall @FRAMEWORKALTINSTALLLAST@ # Install shared libraries enabled by Setup DESTDIRS= $(exec_prefix) $(LIBDIR) $(BINLIBDEST) $(DESTSHARED) @@ -912,8 +912,8 @@ -rm -f $(DESTDIR)$(BINDIR)/2to3 (cd $(DESTDIR)$(BINDIR); $(LN) -s 2to3-$(VERSION) 2to3) -# Install the manual page -maninstall: +# Install the versioned manual page +altmaninstall: @for i in $(MANDIR) $(MANDIR)/man1; \ do \ if test ! 
-d $(DESTDIR)$$i; then \ @@ -925,6 +925,11 @@ $(INSTALL_DATA) $(srcdir)/Misc/python.man \ $(DESTDIR)$(MANDIR)/man1/python$(VERSION).1 +# Install the unversioned manual page +maninstall: altmaninstall + -rm -f $(DESTDIR)$(MANDIR)/man1/python3.1 + (cd $(DESTDIR)$(MANDIR)/man1; $(LN) -s python$(VERSION).1 python3.1) + # Install the library PLATDIR= plat-$(MACHDEP) EXTRAPLATDIR= @EXTRAPLATDIR@ @@ -1360,7 +1365,7 @@ .PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure .PHONY: frameworkinstallmaclib frameworkinstallapps frameworkinstallunixtools .PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean -.PHONY: smelly funny patchcheck +.PHONY: smelly funny patchcheck altmaninstall .PHONY: gdbhooks # IF YOU PUT ANYTHING HERE IT WILL GO AWAY diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -993,6 +993,8 @@ Retina displays. Applies to Tkinter apps, such as IDLE, on OS X framework builds linked with Cocoa Tk 8.5. +- Issue #17161: make install now also installs a python3 man page. + Tools/Demos ----------- -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 08:06:00 2013 From: python-checkins at python.org (ned.deily) Date: Sat, 9 Feb 2013 08:06:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317161=3A_make_install_now_also_installs_a_python3_man?= =?utf-8?q?_page=2E?= Message-ID: <3Z346c4KbCzSTw@mail.python.org> http://hg.python.org/cpython/rev/9828c4ffb401 changeset: 82074:9828c4ffb401 branch: 3.3 parent: 82070:ae92c8759c43 parent: 82073:b0d9b273c029 user: Ned Deily date: Fri Feb 08 23:02:09 2013 -0800 summary: Issue #17161: make install now also installs a python3 man page. 
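The split that this change introduces — a versioned `altmaninstall` target plus a `maninstall` target that only adds the unversioned alias — can be replayed outside of a full build. The sketch below is a minimal stand-in, not the Makefile itself: the `mandir` and `version` values are made up, and it performs only the install-and-symlink step that the patched targets run.

```python
import os
import tempfile

# Stand-ins for $(DESTDIR)$(MANDIR)/man1 and $(VERSION) from the Makefile.
mandir = os.path.join(tempfile.mkdtemp(), "man1")
version = "3.3"
os.makedirs(mandir)

# altmaninstall: install the versioned page python$(VERSION).1
with open(os.path.join(mandir, "python%s.1" % version), "w") as f:
    f.write(".TH PYTHON 1\n")

# maninstall: remove any stale link, then alias python3.1 to the
# versioned page, mirroring the `rm -f` + `ln -s` in the new target.
link = os.path.join(mandir, "python3.1")
if os.path.lexists(link):
    os.remove(link)
os.symlink("python%s.1" % version, link)

print(os.readlink(link))  # python3.3.1
```

Because `maninstall` depends on `altmaninstall`, running the unversioned install always (re)installs the versioned page first, so the symlink can never dangle.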
files: Makefile.pre.in | 15 ++++++++++----- Misc/NEWS | 2 ++ 2 files changed, 12 insertions(+), 5 deletions(-) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -886,10 +886,10 @@ $(TESTRUNNER) $(QUICKTESTOPTS) -install: altinstall bininstall +install: altinstall bininstall maninstall altinstall: @FRAMEWORKALTINSTALLFIRST@ altbininstall libinstall inclinstall libainstall \ - sharedinstall oldsharedinstall maninstall @FRAMEWORKALTINSTALLLAST@ + sharedinstall oldsharedinstall altmaninstall @FRAMEWORKALTINSTALLLAST@ # Install shared libraries enabled by Setup DESTDIRS= $(exec_prefix) $(LIBDIR) $(BINLIBDEST) $(DESTSHARED) @@ -970,8 +970,8 @@ -rm -f $(DESTDIR)$(BINDIR)/pyvenv (cd $(DESTDIR)$(BINDIR); $(LN) -s pyvenv-$(VERSION) pyvenv) -# Install the manual page -maninstall: +# Install the versioned manual page +altmaninstall: @for i in $(MANDIR) $(MANDIR)/man1; \ do \ if test ! -d $(DESTDIR)$$i; then \ @@ -983,6 +983,11 @@ $(INSTALL_DATA) $(srcdir)/Misc/python.man \ $(DESTDIR)$(MANDIR)/man1/python$(VERSION).1 +# Install the unversioned manual page +maninstall: altmaninstall + -rm -f $(DESTDIR)$(MANDIR)/man1/python3.1 + (cd $(DESTDIR)$(MANDIR)/man1; $(LN) -s python$(VERSION).1 python3.1) + # Install the library PLATDIR= plat-$(MACHDEP) EXTRAPLATDIR= @EXTRAPLATDIR@ @@ -1452,7 +1457,7 @@ .PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure .PHONY: frameworkinstallmaclib frameworkinstallapps frameworkinstallunixtools .PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean -.PHONY: smelly funny patchcheck touch +.PHONY: smelly funny patchcheck touch altmaninstall .PHONY: gdbhooks # IF YOU PUT ANYTHING HERE IT WILL GO AWAY diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -663,6 +663,8 @@ Retina displays. Applies to Tkinter apps, such as IDLE, on OS X framework builds linked with Cocoa Tk 8.5. +- Issue #17161: make install now also installs a python3 man page. 
+ Tools/Demos ----------- -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 08:06:01 2013 From: python-checkins at python.org (ned.deily) Date: Sat, 9 Feb 2013 08:06:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317161=3A_merge_from_3=2E3?= Message-ID: <3Z346d70lVzSfs@mail.python.org> http://hg.python.org/cpython/rev/5e874b2a0469 changeset: 82075:5e874b2a0469 parent: 82071:5c4b00581198 parent: 82074:9828c4ffb401 user: Ned Deily date: Fri Feb 08 23:05:10 2013 -0800 summary: Issue #17161: merge from 3.3 files: Makefile.pre.in | 15 ++++++++++----- Misc/NEWS | 2 ++ 2 files changed, 12 insertions(+), 5 deletions(-) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -886,10 +886,10 @@ $(TESTRUNNER) $(QUICKTESTOPTS) -install: altinstall bininstall +install: altinstall bininstall maninstall altinstall: @FRAMEWORKALTINSTALLFIRST@ altbininstall libinstall inclinstall libainstall \ - sharedinstall oldsharedinstall maninstall @FRAMEWORKALTINSTALLLAST@ + sharedinstall oldsharedinstall altmaninstall @FRAMEWORKALTINSTALLLAST@ # Install shared libraries enabled by Setup DESTDIRS= $(exec_prefix) $(LIBDIR) $(BINLIBDEST) $(DESTSHARED) @@ -970,8 +970,8 @@ -rm -f $(DESTDIR)$(BINDIR)/pyvenv (cd $(DESTDIR)$(BINDIR); $(LN) -s pyvenv-$(VERSION) pyvenv) -# Install the manual page -maninstall: +# Install the versioned manual page +altmaninstall: @for i in $(MANDIR) $(MANDIR)/man1; \ do \ if test ! 
-d $(DESTDIR)$$i; then \ @@ -983,6 +983,11 @@ $(INSTALL_DATA) $(srcdir)/Misc/python.man \ $(DESTDIR)$(MANDIR)/man1/python$(VERSION).1 +# Install the unversioned manual page +maninstall: altmaninstall + -rm -f $(DESTDIR)$(MANDIR)/man1/python3.1 + (cd $(DESTDIR)$(MANDIR)/man1; $(LN) -s python$(VERSION).1 python3.1) + # Install the library PLATDIR= plat-$(MACHDEP) EXTRAPLATDIR= @EXTRAPLATDIR@ @@ -1455,7 +1460,7 @@ .PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure .PHONY: frameworkinstallmaclib frameworkinstallapps frameworkinstallunixtools .PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean -.PHONY: smelly funny patchcheck touch +.PHONY: smelly funny patchcheck touch altmaninstall .PHONY: gdbhooks # IF YOU PUT ANYTHING HERE IT WILL GO AWAY diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -901,6 +901,8 @@ Retina displays. Applies to Tkinter apps, such as IDLE, on OS X framework builds linked with Cocoa Tk 8.5. +- Issue #17161: make install now also installs a python3 man page. + Documentation ------------- -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:17:45 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:17:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2Njg2?= =?utf-8?q?=3A_Fixed_a_lot_of_bugs_in_audioop_module=2E?= Message-ID: <3Z372d3Dj8zRXP@mail.python.org> http://hg.python.org/cpython/rev/6add6ac6a802 changeset: 82076:6add6ac6a802 branch: 2.7 parent: 82072:29826cb3f12e user: Serhiy Storchaka date: Sat Feb 09 11:10:30 2013 +0200 summary: Issue #16686: Fixed a lot of bugs in audioop module. * avgpp() and maxpp() no more crash on empty and 1-samples input fragment. They now work when peak-peak values are greater INT_MAX. * ratecv() no more crashes on empty input fragment. * Fixed an integer overflow in ratecv(). 
* Fixed an integer overflow in add() and bias() for 32-bit samples. * reverse(), lin2lin() and ratecv() no more lose precision for 32-bit samples. * max() and rms() no more returns negative result for 32-bit sample -0x80000000. * minmax() now returns correct max value for 32-bit sample -0x80000000. * avg(), mul(), tomono() and tostereo() now round negative result down and can return 32-bit sample -0x80000000. * add() now can return 32-bit sample -0x80000000. files: Doc/library/audioop.rst | 6 +- Lib/test/test_audioop.py | 405 ++++++++++++++++++-------- Misc/NEWS | 6 + Modules/audioop.c | 320 +++++++++++---------- 4 files changed, 457 insertions(+), 280 deletions(-) diff --git a/Doc/library/audioop.rst b/Doc/library/audioop.rst --- a/Doc/library/audioop.rst +++ b/Doc/library/audioop.rst @@ -38,7 +38,7 @@ Return a fragment which is the addition of the two samples passed as parameters. *width* is the sample width in bytes, either ``1``, ``2`` or ``4``. Both - fragments should have the same length. + fragments should have the same length. Samples are truncated in case of overflow. .. function:: adpcm2lin(adpcmfragment, width, state) @@ -71,7 +71,7 @@ .. function:: bias(fragment, width, bias) Return a fragment that is the original fragment with a bias added to each - sample. + sample. Samples wrap around in case of overflow. .. function:: cross(fragment, width) @@ -181,7 +181,7 @@ .. function:: mul(fragment, width, factor) Return a fragment that has all samples in the original fragment multiplied by - the floating-point value *factor*. Overflow is silently ignored. + the floating-point value *factor*. Samples are truncated in case of overflow. .. 
function:: ratecv(fragment, width, nchannels, inrate, outrate, state[, weightA[, weightB]]) diff --git a/Lib/test/test_audioop.py b/Lib/test/test_audioop.py --- a/Lib/test/test_audioop.py +++ b/Lib/test/test_audioop.py @@ -1,25 +1,33 @@ import audioop +import sys import unittest +import struct from test.test_support import run_unittest -endian = 'big' if audioop.getsample('\0\1', 2, 0) == 1 else 'little' -def gendata1(): - return '\0\1\2' +formats = { + 1: 'b', + 2: 'h', + 4: 'i', +} -def gendata2(): - if endian == 'big': - return '\0\0\0\1\0\2' - else: - return '\0\0\1\0\2\0' +def pack(width, data): + return struct.pack('=%d%s' % (len(data), formats[width]), *data) -def gendata4(): - if endian == 'big': - return '\0\0\0\0\0\0\0\1\0\0\0\2' - else: - return '\0\0\0\0\1\0\0\0\2\0\0\0' +packs = { + 1: lambda *data: pack(1, data), + 2: lambda *data: pack(2, data), + 4: lambda *data: pack(4, data), +} +maxvalues = {w: (1 << (8 * w - 1)) - 1 for w in (1, 2, 4)} +minvalues = {w: -1 << (8 * w - 1) for w in (1, 2, 4)} -data = [gendata1(), gendata2(), gendata4()] +datas = { + 1: b'\x00\x12\x45\xbb\x7f\x80\xff', + 2: packs[2](0, 0x1234, 0x4567, -0x4567, 0x7fff, -0x8000, -1), + 4: packs[4](0, 0x12345678, 0x456789ab, -0x456789ab, + 0x7fffffff, -0x80000000, -1), +} INVALID_DATA = [ (b'abc', 0), @@ -31,164 +39,315 @@ class TestAudioop(unittest.TestCase): def test_max(self): - self.assertEqual(audioop.max(data[0], 1), 2) - self.assertEqual(audioop.max(data[1], 2), 2) - self.assertEqual(audioop.max(data[2], 4), 2) + for w in 1, 2, 4: + self.assertEqual(audioop.max(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.max(p(5), w), 5) + self.assertEqual(audioop.max(p(5, -8, -1), w), 8) + self.assertEqual(audioop.max(p(maxvalues[w]), w), maxvalues[w]) + self.assertEqual(audioop.max(p(minvalues[w]), w), -minvalues[w]) + self.assertEqual(audioop.max(datas[w], w), -minvalues[w]) def test_minmax(self): - self.assertEqual(audioop.minmax(data[0], 1), (0, 2)) - 
self.assertEqual(audioop.minmax(data[1], 2), (0, 2)) - self.assertEqual(audioop.minmax(data[2], 4), (0, 2)) + for w in 1, 2, 4: + self.assertEqual(audioop.minmax(b'', w), + (0x7fffffff, -0x80000000)) + p = packs[w] + self.assertEqual(audioop.minmax(p(5), w), (5, 5)) + self.assertEqual(audioop.minmax(p(5, -8, -1), w), (-8, 5)) + self.assertEqual(audioop.minmax(p(maxvalues[w]), w), + (maxvalues[w], maxvalues[w])) + self.assertEqual(audioop.minmax(p(minvalues[w]), w), + (minvalues[w], minvalues[w])) + self.assertEqual(audioop.minmax(datas[w], w), + (minvalues[w], maxvalues[w])) def test_maxpp(self): - self.assertEqual(audioop.maxpp(data[0], 1), 0) - self.assertEqual(audioop.maxpp(data[1], 2), 0) - self.assertEqual(audioop.maxpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.maxpp(b'', w), 0) + self.assertEqual(audioop.maxpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.maxpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) + self.assertEqual(audioop.maxpp(datas[w], w), + maxvalues[w] - minvalues[w]) def test_avg(self): - self.assertEqual(audioop.avg(data[0], 1), 1) - self.assertEqual(audioop.avg(data[1], 2), 1) - self.assertEqual(audioop.avg(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.avg(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.avg(p(5), w), 5) + self .assertEqual(audioop.avg(p(5, 8), w), 6) + self.assertEqual(audioop.avg(p(5, -8), w), -2) + self.assertEqual(audioop.avg(p(maxvalues[w], maxvalues[w]), w), + maxvalues[w]) + self.assertEqual(audioop.avg(p(minvalues[w], minvalues[w]), w), + minvalues[w]) + self.assertEqual(audioop.avg(packs[4](0x50000000, 0x70000000), 4), + 0x60000000) + self.assertEqual(audioop.avg(packs[4](-0x50000000, -0x70000000), 4), + -0x60000000) def test_avgpp(self): - self.assertEqual(audioop.avgpp(data[0], 1), 0) - self.assertEqual(audioop.avgpp(data[1], 2), 0) - self.assertEqual(audioop.avgpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.avgpp(b'', w), 0) + 
self.assertEqual(audioop.avgpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.avgpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) + self.assertEqual(audioop.avgpp(datas[1], 1), 196) + self.assertEqual(audioop.avgpp(datas[2], 2), 50534) + self.assertEqual(audioop.avgpp(datas[4], 4), 3311897002) def test_rms(self): - self.assertEqual(audioop.rms(data[0], 1), 1) - self.assertEqual(audioop.rms(data[1], 2), 1) - self.assertEqual(audioop.rms(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.rms(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.rms(p(*range(100)), w), 57) + self.assertAlmostEqual(audioop.rms(p(maxvalues[w]) * 5, w), + maxvalues[w], delta=1) + self.assertAlmostEqual(audioop.rms(p(minvalues[w]) * 5, w), + -minvalues[w], delta=1) + self.assertEqual(audioop.rms(datas[1], 1), 77) + self.assertEqual(audioop.rms(datas[2], 2), 20001) + self.assertEqual(audioop.rms(datas[4], 4), 1310854152) def test_cross(self): - self.assertEqual(audioop.cross(data[0], 1), 0) - self.assertEqual(audioop.cross(data[1], 2), 0) - self.assertEqual(audioop.cross(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.cross(b'', w), -1) + p = packs[w] + self.assertEqual(audioop.cross(p(0, 1, 2), w), 0) + self.assertEqual(audioop.cross(p(1, 2, -3, -4), w), 1) + self.assertEqual(audioop.cross(p(-1, -2, 3, 4), w), 1) + self.assertEqual(audioop.cross(p(0, minvalues[w]), w), 1) + self.assertEqual(audioop.cross(p(minvalues[w], maxvalues[w]), w), 1) def test_add(self): - data2 = [] - for d in data: - str = '' - for s in d: - str = str + chr(ord(s)*2) - data2.append(str) - self.assertEqual(audioop.add(data[0], data[0], 1), data2[0]) - self.assertEqual(audioop.add(data[1], data[1], 2), data2[1]) - self.assertEqual(audioop.add(data[2], data[2], 4), data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.add(b'', b'', w), b'') + self.assertEqual(audioop.add(datas[w], b'\0' * len(datas[w]), w), + datas[w]) + self.assertEqual(audioop.add(datas[1], datas[1], 1), + 
b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.add(datas[2], datas[2], 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 0x7fff, -0x8000, -2)) + self.assertEqual(audioop.add(datas[4], datas[4], 4), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def test_bias(self): - # Note: this test assumes that avg() works - d1 = audioop.bias(data[0], 1, 100) - d2 = audioop.bias(data[1], 2, 100) - d4 = audioop.bias(data[2], 4, 100) - self.assertEqual(audioop.avg(d1, 1), 101) - self.assertEqual(audioop.avg(d2, 2), 101) - self.assertEqual(audioop.avg(d4, 4), 101) + for w in 1, 2, 4: + for bias in 0, 1, -1, 127, -128, 0x7fffffff, -0x80000000: + self.assertEqual(audioop.bias(b'', w, bias), b'') + self.assertEqual(audioop.bias(datas[1], 1, 1), + b'\x01\x13\x46\xbc\x80\x81\x00') + self.assertEqual(audioop.bias(datas[1], 1, -1), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, 0x7fffffff), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, -0x80000000), + datas[1]) + self.assertEqual(audioop.bias(datas[2], 2, 1), + packs[2](1, 0x1235, 0x4568, -0x4566, -0x8000, -0x7fff, 0)) + self.assertEqual(audioop.bias(datas[2], 2, -1), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, 0x7fffffff), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, -0x80000000), + datas[2]) + self.assertEqual(audioop.bias(datas[4], 4, 1), + packs[4](1, 0x12345679, 0x456789ac, -0x456789aa, + -0x80000000, -0x7fffffff, 0)) + self.assertEqual(audioop.bias(datas[4], 4, -1), + packs[4](-1, 0x12345677, 0x456789aa, -0x456789ac, + 0x7ffffffe, 0x7fffffff, -2)) + self.assertEqual(audioop.bias(datas[4], 4, 0x7fffffff), + packs[4](0x7fffffff, -0x6dcba989, -0x3a987656, 0x3a987654, + -2, -1, 0x7ffffffe)) + self.assertEqual(audioop.bias(datas[4], 4, -0x80000000), + packs[4](-0x80000000, -0x6dcba988, -0x3a987655, 0x3a987655, 
+ -1, 0, 0x7fffffff)) def test_lin2lin(self): - # too simple: we test only the size - for d1 in data: - for d2 in data: - got = len(d1)//3 - wtd = len(d2)//3 - self.assertEqual(len(audioop.lin2lin(d1, got, wtd)), len(d2)) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2lin(datas[w], w, w), datas[w]) + + self.assertEqual(audioop.lin2lin(datas[1], 1, 2), + packs[2](0, 0x1200, 0x4500, -0x4500, 0x7f00, -0x8000, -0x100)) + self.assertEqual(audioop.lin2lin(datas[1], 1, 4), + packs[4](0, 0x12000000, 0x45000000, -0x45000000, + 0x7f000000, -0x80000000, -0x1000000)) + self.assertEqual(audioop.lin2lin(datas[2], 2, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[2], 2, 4), + packs[4](0, 0x12340000, 0x45670000, -0x45670000, + 0x7fff0000, -0x80000000, -0x10000)) + self.assertEqual(audioop.lin2lin(datas[4], 4, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[4], 4, 2), + packs[2](0, 0x1234, 0x4567, -0x4568, 0x7fff, -0x8000, -1)) def test_adpcm2lin(self): + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 1, None), + (b'\x00\x00\x00\xff\x00\xff', (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 2, None), + (packs[2](0, 0xb, 0x29, -0x16, 0x72, -0xb3), (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 4, None), + (packs[4](0, 0xb0000, 0x290000, -0x160000, 0x720000, + -0xb30000), (-179, 40))) + # Very cursory test - self.assertEqual(audioop.adpcm2lin(b'\0\0', 1, None), (b'\0' * 4, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 2, None), (b'\0' * 8, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 4, None), (b'\0' * 16, (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.adpcm2lin(b'\0' * 5, w, None), + (b'\0' * w * 10, (0, 0))) def test_lin2adpcm(self): + self.assertEqual(audioop.lin2adpcm(datas[1], 1, None), + (b'\x07\x7f\x7f', (-221, 39))) + self.assertEqual(audioop.lin2adpcm(datas[2], 2, None), + (b'\x07\x7f\x7f', (31, 39))) + 
self.assertEqual(audioop.lin2adpcm(datas[4], 4, None), + (b'\x07\x7f\x7f', (31, 39))) + # Very cursory test - self.assertEqual(audioop.lin2adpcm('\0\0\0\0', 1, None), ('\0\0', (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2adpcm(b'\0' * w * 10, w, None), + (b'\0' * 5, (0, 0))) def test_lin2alaw(self): - self.assertEqual(audioop.lin2alaw(data[0], 1), '\xd5\xc5\xf5') - self.assertEqual(audioop.lin2alaw(data[1], 2), '\xd5\xd5\xd5') - self.assertEqual(audioop.lin2alaw(data[2], 4), '\xd5\xd5\xd5') + self.assertEqual(audioop.lin2alaw(datas[1], 1), + b'\xd5\x87\xa4\x24\xaa\x2a\x5a') + self.assertEqual(audioop.lin2alaw(datas[2], 2), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') + self.assertEqual(audioop.lin2alaw(datas[4], 4), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') def test_alaw2lin(self): - # Cursory - d = audioop.lin2alaw(data[0], 1) - self.assertEqual(audioop.alaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x00\x08\x01\x08\x02\x10') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x08\x00\x00\x01\x08\x00\x00\x02\x10\x00\x00') - else: - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x08\x00\x08\x01\x10\x02') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x00\x08\x00\x00\x00\x08\x01\x00\x00\x10\x02') + encoded = b'\x00\x03\x24\x2a\x51\x54\x55\x58\x6b\x71\x7f'\ + b'\x80\x83\xa4\xaa\xd1\xd4\xd5\xd8\xeb\xf1\xff' + src = [-688, -720, -2240, -4032, -9, -3, -1, -27, -244, -82, -106, + 688, 720, 2240, 4032, 9, 3, 1, 27, 244, 82, 106] + for w in 1, 2, 4: + self.assertEqual(audioop.alaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 13 for x in src))) + + encoded = ''.join(chr(x) for x in xrange(256)) + for w in 2, 4: + decoded = audioop.alaw2lin(encoded, w) + self.assertEqual(audioop.lin2alaw(decoded, w), encoded) def test_lin2ulaw(self): - self.assertEqual(audioop.lin2ulaw(data[0], 1), '\xff\xe7\xdb') - self.assertEqual(audioop.lin2ulaw(data[1], 2), '\xff\xff\xff') - self.assertEqual(audioop.lin2ulaw(data[2], 4), '\xff\xff\xff') + 
self.assertEqual(audioop.lin2ulaw(datas[1], 1), + b'\xff\xad\x8e\x0e\x80\x00\x67') + self.assertEqual(audioop.lin2ulaw(datas[2], 2), + b'\xff\xad\x8e\x0e\x80\x00\x7e') + self.assertEqual(audioop.lin2ulaw(datas[4], 4), + b'\xff\xad\x8e\x0e\x80\x00\x7e') def test_ulaw2lin(self): - # Cursory - d = audioop.lin2ulaw(data[0], 1) - self.assertEqual(audioop.ulaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x01\x04\x02\x0c') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x01\x04\x00\x00\x02\x0c\x00\x00') - else: - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x04\x01\x0c\x02') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x00\x00\x04\x01\x00\x00\x0c\x02') + encoded = b'\x00\x0e\x28\x3f\x57\x6a\x76\x7c\x7e\x7f'\ + b'\x80\x8e\xa8\xbf\xd7\xea\xf6\xfc\xfe\xff' + src = [-8031, -4447, -1471, -495, -163, -53, -18, -6, -2, 0, + 8031, 4447, 1471, 495, 163, 53, 18, 6, 2, 0] + for w in 1, 2, 4: + self.assertEqual(audioop.ulaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 14 for x in src))) + + # Current u-law implementation has two codes fo 0: 0x7f and 0xff. 
+ encoded = ''.join(chr(x) for x in range(127) + range(128, 256)) + for w in 2, 4: + decoded = audioop.ulaw2lin(encoded, w) + self.assertEqual(audioop.lin2ulaw(decoded, w), encoded) def test_mul(self): - data2 = [] - for d in data: - str = '' - for s in d: - str = str + chr(ord(s)*2) - data2.append(str) - self.assertEqual(audioop.mul(data[0], 1, 2), data2[0]) - self.assertEqual(audioop.mul(data[1],2, 2), data2[1]) - self.assertEqual(audioop.mul(data[2], 4, 2), data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.mul(b'', w, 2), b'') + self.assertEqual(audioop.mul(datas[w], w, 0), + b'\0' * len(datas[w])) + self.assertEqual(audioop.mul(datas[w], w, 1), + datas[w]) + self.assertEqual(audioop.mul(datas[1], 1, 2), + b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.mul(datas[2], 2, 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 0x7fff, -0x8000, -2)) + self.assertEqual(audioop.mul(datas[4], 4, 2), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def test_ratecv(self): + for w in 1, 2, 4: + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 8000, None), + (b'', (-1, ((0, 0),)))) + self.assertEqual(audioop.ratecv(b'', w, 5, 8000, 8000, None), + (b'', (-1, ((0, 0),) * 5))) + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 16000, None), + (b'', (-2, ((0, 0),)))) + self.assertEqual(audioop.ratecv(datas[w], w, 1, 8000, 8000, None)[0], + datas[w]) state = None - d1, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) - d2, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) - self.assertEqual(d1 + d2, '\000\000\001\001\002\001\000\000\001\001\002') + d1, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) + d2, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) + self.assertEqual(d1 + d2, b'\000\000\001\001\002\001\000\000\001\001\002') + + for w in 1, 2, 4: + d0, state0 = audioop.ratecv(datas[w], w, 1, 8000, 16000, None) + d, state = b'', None + for i in range(0, len(datas[w]), w): + d1, state 
= audioop.ratecv(datas[w][i:i + w], w, 1, + 8000, 16000, state) + d += d1 + self.assertEqual(d, d0) + self.assertEqual(state, state0) def test_reverse(self): - self.assertEqual(audioop.reverse(data[0], 1), '\2\1\0') + for w in 1, 2, 4: + self.assertEqual(audioop.reverse(b'', w), b'') + self.assertEqual(audioop.reverse(packs[w](0, 1, 2), w), + packs[w](2, 1, 0)) def test_tomono(self): - data2 = '' - for d in data[0]: - data2 = data2 + d + d - self.assertEqual(audioop.tomono(data2, 1, 0.5, 0.5), data[0]) + for w in 1, 2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(str(data2), w, 1, 0), data1) + self.assertEqual(audioop.tomono(str(data2), w, 0, 1), b'\0' * len(data1)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(str(data2), w, 0.5, 0.5), data1) def test_tostereo(self): - data2 = '' - for d in data[0]: - data2 = data2 + d + d - self.assertEqual(audioop.tostereo(data[0], 1, 1, 1), data2) + for w in 1, 2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 0), data2) + self.assertEqual(audioop.tostereo(data1, w, 0, 0), b'\0' * len(data2)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 1), data2) def test_findfactor(self): - self.assertEqual(audioop.findfactor(data[1], data[1]), 1.0) + self.assertEqual(audioop.findfactor(datas[2], datas[2]), 1.0) + self.assertEqual(audioop.findfactor(b'\0' * len(datas[2]), datas[2]), + 0.0) def test_findfit(self): - self.assertEqual(audioop.findfit(data[1], data[1]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], datas[2]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], packs[2](1, 2, 0)), + (1, 8038.8)) + self.assertEqual(audioop.findfit(datas[2][:-2] * 5 + datas[2], datas[2]), + (30, 1.0)) def test_findmax(self): - 
self.assertEqual(audioop.findmax(data[1], 1), 2) + self.assertEqual(audioop.findmax(datas[2], 1), 5) def test_getsample(self): - for i in range(3): - self.assertEqual(audioop.getsample(data[0], 1, i), i) - self.assertEqual(audioop.getsample(data[1], 2, i), i) - self.assertEqual(audioop.getsample(data[2], 4, i), i) + for w in 1, 2, 4: + data = packs[w](0, 1, -1, maxvalues[w], minvalues[w]) + self.assertEqual(audioop.getsample(data, w, 0), 0) + self.assertEqual(audioop.getsample(data, w, 1), 1) + self.assertEqual(audioop.getsample(data, w, 2), -1) + self.assertEqual(audioop.getsample(data, w, 3), maxvalues[w]) + self.assertEqual(audioop.getsample(data, w, 4), minvalues[w]) def test_negativelen(self): # from issue 3306, previously it segfaulted @@ -220,9 +379,9 @@ self.assertRaises(audioop.error, audioop.lin2adpcm, data, size, state) def test_wrongsize(self): - data = b'abc' + data = b'abcdefgh' state = None - for size in (-1, 3, 5): + for size in (-1, 0, 3, 5, 1024): self.assertRaises(audioop.error, audioop.ulaw2lin, data, size) self.assertRaises(audioop.error, audioop.alaw2lin, data, size) self.assertRaises(audioop.error, audioop.adpcm2lin, data, size, state) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,12 @@ Library ------- +- Issue #16686: Fixed a lot of bugs in audioop module. Fixed crashes in + avgpp(), maxpp() and ratecv(). Fixed an integer overflow in add(), bias(), + and ratecv(). reverse(), lin2lin() and ratecv() no more lose precision for + 32-bit samples. max() and rms() no more returns a negative result and + various other functions now work correctly with 32-bit sample -0x80000000. + - Issue #17073: Fix some integer overflows in sqlite3 module. 
- Issue #6083: Fix multiple segmentation faults occured when PyArg_ParseTuple diff --git a/Modules/audioop.c b/Modules/audioop.c --- a/Modules/audioop.c +++ b/Modules/audioop.c @@ -24,6 +24,21 @@ #endif #endif +static const int maxvals[] = {0, 0x7F, 0x7FFF, 0x7FFFFF, 0x7FFFFFFF}; +static const int minvals[] = {0, -0x80, -0x8000, -0x800000, -0x80000000}; +static const unsigned int masks[] = {0, 0xFF, 0xFFFF, 0xFFFFFF, 0xFFFFFFFF}; + +static int +fbound(double val, double minval, double maxval) +{ + if (val > maxval) + val = maxval; + else if (val < minval + 1) + val = minval; + return val; +} + + /* Code shamelessly stolen from sox, 12.17.7, g711.c ** (c) Craig Reese, Joe Campbell and Jeff Poskanzer 1989 */ @@ -345,7 +360,7 @@ signed char *cp; int len, size, val = 0; int i; - int max = 0; + unsigned int absval, max = 0; if ( !PyArg_ParseTuple(args, "s#i:max", &cp, &len, &size) ) return 0; @@ -355,10 +370,14 @@ if ( size == 1 ) val = (int)*CHARP(cp, i); else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( val < 0 ) val = (-val); - if ( val > max ) max = val; + if (val < 0) absval = (-val); + else absval = val; + if (absval > max) max = absval; } - return PyInt_FromLong(max); + if (max <= INT_MAX) + return PyInt_FromLong(max); + else + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -367,7 +386,7 @@ signed char *cp; int len, size, val = 0; int i; - int min = 0x7fffffff, max = -0x7fffffff; + int min = 0x7fffffff, max = -0x80000000; if (!PyArg_ParseTuple(args, "s#i:minmax", &cp, &len, &size)) return NULL; @@ -404,7 +423,7 @@ if ( len == 0 ) val = 0; else - val = (int)(avg / (double)(len/size)); + val = (int)floor(avg / (double)(len/size)); return PyInt_FromLong(val); } @@ -414,6 +433,7 @@ signed char *cp; int len, size, val = 0; int i; + unsigned int res; double sum_squares = 0.0; if ( !PyArg_ParseTuple(args, "s#i:rms", &cp, &len, &size) ) @@ -427,10 +447,13 @@ sum_squares += (double)val*(double)val; } if ( 
len == 0 ) - val = 0; + res = 0; else - val = (int)sqrt(sum_squares / (double)(len/size)); - return PyInt_FromLong(val); + res = (unsigned int)sqrt(sum_squares / (double)(len/size)); + if (res <= INT_MAX) + return PyInt_FromLong(res); + else + return PyLong_FromUnsignedLong(res); } static double _sum2(short *a, short *b, int len) @@ -620,52 +643,49 @@ int len, size, val = 0, prevval = 0, prevextremevalid = 0, prevextreme = 0; int i; - double avg = 0.0; - int diff, prevdiff, extremediff, nextreme = 0; + double sum = 0.0; + unsigned int avg; + int diff, prevdiff, nextreme = 0; if ( !PyArg_ParseTuple(args, "s#i:avgpp", &cp, &len, &size) ) return 0; if (!audioop_check_parameters(len, size)) return NULL; - /* Compute first delta value ahead. Also automatically makes us - ** skip the first extreme value - */ + if (len <= size*2) + return PyInt_FromLong(0); if ( size == 1 ) prevval = (int)*CHARP(cp, 0); else if ( size == 2 ) prevval = (int)*SHORTP(cp, 0); else if ( size == 4 ) prevval = (int)*LONGP(cp, 0); - if ( size == 1 ) val = (int)*CHARP(cp, size); - else if ( size == 2 ) val = (int)*SHORTP(cp, size); - else if ( size == 4 ) val = (int)*LONGP(cp, size); - prevdiff = val - prevval; - + prevdiff = 17; /* Anything != 0, 1 */ for ( i=size; i max ) - max = extremediff; + if (val != prevval) { + diff = val < prevval; + if (prevdiff == !diff) { + /* Derivative changed sign. Compute difference to + ** last extreme value and remember. 
+ */ + if (prevextremevalid) { + if (prevval < prevextreme) + extremediff = (unsigned int)prevextreme - + (unsigned int)prevval; + else + extremediff = (unsigned int)prevval - + (unsigned int)prevextreme; + if ( extremediff > max ) + max = extremediff; + } + prevextremevalid = 1; + prevextreme = prevval; } - prevextremevalid = 1; - prevextreme = prevval; + prevval = val; + prevdiff = diff; } - prevval = val; - if ( diff != 0 ) - prevdiff = diff; } - return PyInt_FromLong(max); + if (max <= INT_MAX) + return PyInt_FromLong(max); + else + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -749,7 +771,7 @@ { signed char *cp, *ncp; int len, size, val = 0; - double factor, fval, maxval; + double factor, fval, maxval, minval; PyObject *rv; int i; @@ -758,13 +780,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyString_FromStringAndSize(NULL, len); if ( rv == 0 ) @@ -777,9 +794,7 @@ else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size == 4 ) val = (int)*LONGP(cp, i); fval = (double)val*factor; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val = (int)fval; + val = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i) = (signed char)val; else if ( size == 2 ) *SHORTP(ncp, i) = (short)val; else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)val; @@ -792,7 +807,7 @@ { signed char *cp, *ncp; int len, size, val1 = 0, val2 = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; int i; @@ -806,13 +821,8 @@ return NULL; } - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = 
(double) 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyString_FromStringAndSize(NULL, len/2); if ( rv == 0 ) @@ -828,9 +838,7 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp, i+2); else if ( size == 4 ) val2 = (int)*LONGP(cp, i+4); fval = (double)val1*fac1 + (double)val2*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i/2) = (signed char)val1; else if ( size == 2 ) *SHORTP(ncp, i/2) = (short)val1; else if ( size == 4 ) *LONGP(ncp, i/2)= (Py_Int32)val1; @@ -843,7 +851,7 @@ { signed char *cp, *ncp; int len, size, val1, val2, val = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; int i; @@ -853,13 +861,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; if (len > INT_MAX/2) { PyErr_SetString(PyExc_MemoryError, @@ -879,14 +882,10 @@ else if ( size == 4 ) val = (int)*LONGP(cp, i); fval = (double)val*fac1; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); fval = (double)val*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val2 = (int)fval; + val2 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i*2) = (signed char)val1; else if ( size == 2 ) *SHORTP(ncp, i*2) = (short)val1; @@ -903,7 +902,7 @@ audioop_add(PyObject *self, PyObject *args) { signed char *cp1, *cp2, *ncp; - int len1, len2, 
size, val1 = 0, val2 = 0, maxval, newval; + int len1, len2, size, val1 = 0, val2 = 0, minval, maxval, newval; PyObject *rv; int i; @@ -917,13 +916,8 @@ return 0; } - if ( size == 1 ) maxval = 0x7f; - else if ( size == 2 ) maxval = 0x7fff; - else if ( size == 4 ) maxval = 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = maxvals[size]; + minval = minvals[size]; rv = PyString_FromStringAndSize(NULL, len1); if ( rv == 0 ) @@ -939,12 +933,19 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp2, i); else if ( size == 4 ) val2 = (int)*LONGP(cp2, i); - newval = val1 + val2; - /* truncate in case of overflow */ - if (newval > maxval) newval = maxval; - else if (newval < -maxval) newval = -maxval; - else if (size == 4 && (newval^val1) < 0 && (newval^val2) < 0) - newval = val1 > 0 ? maxval : - maxval; + if (size < 4) { + newval = val1 + val2; + /* truncate in case of overflow */ + if (newval > maxval) + newval = maxval; + else if (newval < minval) + newval = minval; + } + else { + double fval = (double)val1 + (double)val2; + /* truncate in case of overflow */ + newval = (int)floor(fbound(fval, minval, maxval)); + } if ( size == 1 ) *CHARP(ncp, i) = (signed char)newval; else if ( size == 2 ) *SHORTP(ncp, i) = (short)newval; @@ -957,7 +958,8 @@ audioop_bias(PyObject *self, PyObject *args) { signed char *cp, *ncp; - int len, size, val = 0; + int len, size; + unsigned int val = 0, mask; PyObject *rv; int i; int bias; @@ -974,15 +976,20 @@ return 0; ncp = (signed char *)PyString_AsString(rv); + mask = masks[size]; for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = (int)*CHARP(cp, i); - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = (int)*LONGP(cp, i); + if ( size == 1 ) val = (unsigned int)(unsigned char)*CHARP(cp, i); + else if ( size == 2 ) val = (unsigned int)(unsigned short)*SHORTP(cp, i); + else if ( size == 4 ) val = (unsigned int)(Py_UInt32)*LONGP(cp, i); - if ( size == 1 ) 
*CHARP(ncp, i) = (signed char)(val+bias); - else if ( size == 2 ) *SHORTP(ncp, i) = (short)(val+bias); - else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(val+bias); + val += (unsigned int)bias; + /* wrap around in case of overflow */ + val &= mask; + + if ( size == 1 ) *CHARP(ncp, i) = (signed char)(unsigned char)val; + else if ( size == 2 ) *SHORTP(ncp, i) = (short)(unsigned short)val; + else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(Py_UInt32)val; } return rv; } @@ -1009,15 +1016,15 @@ ncp = (unsigned char *)PyString_AsString(rv); for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); j = len - i - size; - if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val); - else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1051,13 +1058,13 @@ ncp = (unsigned char *)PyString_AsString(rv); for ( i=0, j=0; i < len; i += size, j += size2 ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size2 == 2 ) *SHORTP(ncp, j) = (short)(val); - else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( 
size2 == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1120,6 +1127,10 @@ d = gcd(inrate, outrate); inrate /= d; outrate /= d; + /* divide weightA and weightB by their greatest common divisor */ + d = gcd(weightA, weightB); + weightA /= d; + weightB /= d; if ((size_t)nchannels > PY_SIZE_MAX/sizeof(int)) { PyErr_SetString(PyExc_MemoryError, @@ -1159,7 +1170,9 @@ } /* str <- Space for the output buffer. */ - { + if (len == 0) + str = PyString_FromStringAndSize(NULL, 0); + else { /* There are len input frames, so we need (mathematically) ceiling(len*outrate/inrate) output frames, and each frame requires bytes_per_frame bytes. Computing this @@ -1174,12 +1187,11 @@ else str = PyString_FromStringAndSize(NULL, q * outrate * bytes_per_frame); - - if (str == NULL) { - PyErr_SetString(PyExc_MemoryError, - "not enough memory for output buffer"); - goto exit; - } + } + if (str == NULL) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + goto exit; } ncp = PyString_AsString(str); @@ -1214,32 +1226,32 @@ for (chan = 0; chan < nchannels; chan++) { prev_i[chan] = cur_i[chan]; if (size == 1) - cur_i[chan] = ((int)*CHARP(cp, 0)) << 8; + cur_i[chan] = ((int)*CHARP(cp, 0)) << 24; else if (size == 2) - cur_i[chan] = (int)*SHORTP(cp, 0); + cur_i[chan] = ((int)*SHORTP(cp, 0)) << 16; else if (size == 4) - cur_i[chan] = ((int)*LONGP(cp, 0)) >> 16; + cur_i[chan] = (int)*LONGP(cp, 0); cp += size; /* implements a simple digital filter */ - cur_i[chan] = - (weightA * cur_i[chan] + - weightB * prev_i[chan]) / - (weightA + weightB); + cur_i[chan] = (int)( + ((double)weightA * (double)cur_i[chan] + + (double)weightB * (double)prev_i[chan]) / + ((double)weightA + (double)weightB)); } len--; d += outrate; } while (d >= 0) { for (chan = 0; chan < nchannels; chan++) { - cur_o = (prev_i[chan] * d + - cur_i[chan] * (outrate - d)) / - outrate; + cur_o = (int)(((double)prev_i[chan] * (double)d +
(double)cur_i[chan] * (double)(outrate - d)) / + (double)outrate); if (size == 1) - *CHARP(ncp, 0) = (signed char)(cur_o >> 8); + *CHARP(ncp, 0) = (signed char)(cur_o >> 24); else if (size == 2) - *SHORTP(ncp, 0) = (short)(cur_o); + *SHORTP(ncp, 0) = (short)(cur_o >> 16); else if (size == 4) - *LONGP(ncp, 0) = (Py_Int32)(cur_o<<16); + *LONGP(ncp, 0) = (Py_Int32)(cur_o); ncp += size; } d -= inrate; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:17:47 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:17:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2Njg2?= =?utf-8?q?=3A_Fixed_a_lot_of_bugs_in_audioop_module=2E?= Message-ID: <3Z372g38PDzScm@mail.python.org> http://hg.python.org/cpython/rev/104b17f8316b changeset: 82077:104b17f8316b branch: 3.2 parent: 82073:b0d9b273c029 user: Serhiy Storchaka date: Sat Feb 09 11:10:53 2013 +0200 summary: Issue #16686: Fixed a lot of bugs in audioop module. * avgpp() and maxpp() no longer crash on empty and 1-sample input fragments. They now work when peak-peak values are greater than INT_MAX. * ratecv() no longer crashes on an empty input fragment. * Fixed an integer overflow in ratecv(). * Fixed an integer overflow in add() and bias() for 32-bit samples. * reverse(), lin2lin() and ratecv() no longer lose precision for 32-bit samples. * max() and rms() no longer return a negative result for the 32-bit sample -0x80000000. * minmax() now returns the correct max value for the 32-bit sample -0x80000000. * avg(), mul(), tomono() and tostereo() now round negative results down and can return the 32-bit sample -0x80000000. * add() can now return the 32-bit sample -0x80000000.
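The summary above distinguishes two overflow behaviours the patch settles on: add() and mul() now *truncate* (saturate) out-of-range samples, while bias() *wraps around*. A minimal pure-Python sketch of those two semantics (an editorial illustration with hypothetical helper names, not code taken from the patch):

```python
def saturate(val, width):
    """Clamp a sample to the signed range of `width` bytes
    (the truncation semantics described for add() and mul())."""
    maxval = (1 << (8 * width - 1)) - 1
    minval = -(1 << (8 * width - 1))
    return max(minval, min(maxval, val))

def wrap(val, width):
    """Wrap a sample around the signed range of `width` bytes
    (the wrap-around semantics described for bias())."""
    bits = 8 * width
    val &= (1 << bits) - 1        # keep only the low `bits` bits
    if val >= 1 << (bits - 1):    # reinterpret the top bit as the sign
        val -= 1 << bits
    return val

# 16-bit examples: 0x7fff is the largest positive sample.
print(saturate(0x7fff + 1, 2))   # -> 32767 (pinned at 0x7fff)
print(wrap(0x7fff + 1, 2))       # -> -32768 (wraps to -0x8000)
```

The same masking trick is what the patch's new `masks[]` table implements in C for bias(), while `fbound()` plus `floor()` implements the clamping path.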
files: Doc/library/audioop.rst | 6 +- Lib/test/test_audioop.py | 399 ++++++++++++++++++-------- Misc/NEWS | 6 + Modules/audioop.c | 310 ++++++++++---------- 4 files changed, 435 insertions(+), 286 deletions(-) diff --git a/Doc/library/audioop.rst b/Doc/library/audioop.rst --- a/Doc/library/audioop.rst +++ b/Doc/library/audioop.rst @@ -36,7 +36,7 @@ Return a fragment which is the addition of the two samples passed as parameters. *width* is the sample width in bytes, either ``1``, ``2`` or ``4``. Both - fragments should have the same length. + fragments should have the same length. Samples are truncated in case of overflow. .. function:: adpcm2lin(adpcmfragment, width, state) @@ -67,7 +67,7 @@ .. function:: bias(fragment, width, bias) Return a fragment that is the original fragment with a bias added to each - sample. + sample. Samples wrap around in case of overflow. .. function:: cross(fragment, width) @@ -175,7 +175,7 @@ .. function:: mul(fragment, width, factor) Return a fragment that has all samples in the original fragment multiplied by - the floating-point value *factor*. Overflow is silently ignored. + the floating-point value *factor*. Samples are truncated in case of overflow. .. 
function:: ratecv(fragment, width, nchannels, inrate, outrate, state[, weightA[, weightB]]) diff --git a/Lib/test/test_audioop.py b/Lib/test/test_audioop.py --- a/Lib/test/test_audioop.py +++ b/Lib/test/test_audioop.py @@ -1,25 +1,21 @@ import audioop +import sys import unittest from test.support import run_unittest -endian = 'big' if audioop.getsample(b'\0\1', 2, 0) == 1 else 'little' +def pack(width, data): + return b''.join(v.to_bytes(width, sys.byteorder, signed=True) for v in data) -def gendata1(): - return b'\0\1\2' +packs = {w: (lambda *data, width=w: pack(width, data)) for w in (1, 2, 4)} +maxvalues = {w: (1 << (8 * w - 1)) - 1 for w in (1, 2, 4)} +minvalues = {w: -1 << (8 * w - 1) for w in (1, 2, 4)} -def gendata2(): - if endian == 'big': - return b'\0\0\0\1\0\2' - else: - return b'\0\0\1\0\2\0' - -def gendata4(): - if endian == 'big': - return b'\0\0\0\0\0\0\0\1\0\0\0\2' - else: - return b'\0\0\0\0\1\0\0\0\2\0\0\0' - -data = [gendata1(), gendata2(), gendata4()] +datas = { + 1: b'\x00\x12\x45\xbb\x7f\x80\xff', + 2: packs[2](0, 0x1234, 0x4567, -0x4567, 0x7fff, -0x8000, -1), + 4: packs[4](0, 0x12345678, 0x456789ab, -0x456789ab, + 0x7fffffff, -0x80000000, -1), +} INVALID_DATA = [ (b'abc', 0), @@ -31,171 +27,320 @@ class TestAudioop(unittest.TestCase): def test_max(self): - self.assertEqual(audioop.max(data[0], 1), 2) - self.assertEqual(audioop.max(data[1], 2), 2) - self.assertEqual(audioop.max(data[2], 4), 2) + for w in 1, 2, 4: + self.assertEqual(audioop.max(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.max(p(5), w), 5) + self.assertEqual(audioop.max(p(5, -8, -1), w), 8) + self.assertEqual(audioop.max(p(maxvalues[w]), w), maxvalues[w]) + self.assertEqual(audioop.max(p(minvalues[w]), w), -minvalues[w]) + self.assertEqual(audioop.max(datas[w], w), -minvalues[w]) def test_minmax(self): - self.assertEqual(audioop.minmax(data[0], 1), (0, 2)) - self.assertEqual(audioop.minmax(data[1], 2), (0, 2)) - self.assertEqual(audioop.minmax(data[2], 4), (0, 2)) + for 
w in 1, 2, 4: + self.assertEqual(audioop.minmax(b'', w), + (0x7fffffff, -0x80000000)) + p = packs[w] + self.assertEqual(audioop.minmax(p(5), w), (5, 5)) + self.assertEqual(audioop.minmax(p(5, -8, -1), w), (-8, 5)) + self.assertEqual(audioop.minmax(p(maxvalues[w]), w), + (maxvalues[w], maxvalues[w])) + self.assertEqual(audioop.minmax(p(minvalues[w]), w), + (minvalues[w], minvalues[w])) + self.assertEqual(audioop.minmax(datas[w], w), + (minvalues[w], maxvalues[w])) def test_maxpp(self): - self.assertEqual(audioop.maxpp(data[0], 1), 0) - self.assertEqual(audioop.maxpp(data[1], 2), 0) - self.assertEqual(audioop.maxpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.maxpp(b'', w), 0) + self.assertEqual(audioop.maxpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.maxpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) + self.assertEqual(audioop.maxpp(datas[w], w), + maxvalues[w] - minvalues[w]) def test_avg(self): - self.assertEqual(audioop.avg(data[0], 1), 1) - self.assertEqual(audioop.avg(data[1], 2), 1) - self.assertEqual(audioop.avg(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.avg(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.avg(p(5), w), 5) + self.assertEqual(audioop.avg(p(5, 8), w), 6) + self.assertEqual(audioop.avg(p(5, -8), w), -2) + self.assertEqual(audioop.avg(p(maxvalues[w], maxvalues[w]), w), + maxvalues[w]) + self.assertEqual(audioop.avg(p(minvalues[w], minvalues[w]), w), + minvalues[w]) + self.assertEqual(audioop.avg(packs[4](0x50000000, 0x70000000), 4), + 0x60000000) + self.assertEqual(audioop.avg(packs[4](-0x50000000, -0x70000000), 4), + -0x60000000) def test_avgpp(self): - self.assertEqual(audioop.avgpp(data[0], 1), 0) - self.assertEqual(audioop.avgpp(data[1], 2), 0) - self.assertEqual(audioop.avgpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.avgpp(b'', w), 0) + self.assertEqual(audioop.avgpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.avgpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) +
self.assertEqual(audioop.avgpp(datas[1], 1), 196) + self.assertEqual(audioop.avgpp(datas[2], 2), 50534) + self.assertEqual(audioop.avgpp(datas[4], 4), 3311897002) def test_rms(self): - self.assertEqual(audioop.rms(data[0], 1), 1) - self.assertEqual(audioop.rms(data[1], 2), 1) - self.assertEqual(audioop.rms(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.rms(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.rms(p(*range(100)), w), 57) + self.assertAlmostEqual(audioop.rms(p(maxvalues[w]) * 5, w), + maxvalues[w], delta=1) + self.assertAlmostEqual(audioop.rms(p(minvalues[w]) * 5, w), + -minvalues[w], delta=1) + self.assertEqual(audioop.rms(datas[1], 1), 77) + self.assertEqual(audioop.rms(datas[2], 2), 20001) + self.assertEqual(audioop.rms(datas[4], 4), 1310854152) def test_cross(self): - self.assertEqual(audioop.cross(data[0], 1), 0) - self.assertEqual(audioop.cross(data[1], 2), 0) - self.assertEqual(audioop.cross(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.cross(b'', w), -1) + p = packs[w] + self.assertEqual(audioop.cross(p(0, 1, 2), w), 0) + self.assertEqual(audioop.cross(p(1, 2, -3, -4), w), 1) + self.assertEqual(audioop.cross(p(-1, -2, 3, 4), w), 1) + self.assertEqual(audioop.cross(p(0, minvalues[w]), w), 1) + self.assertEqual(audioop.cross(p(minvalues[w], maxvalues[w]), w), 1) def test_add(self): - data2 = [] - for d in data: - str = bytearray(len(d)) - for i,b in enumerate(d): - str[i] = 2*b - data2.append(str) - self.assertEqual(audioop.add(data[0], data[0], 1), data2[0]) - self.assertEqual(audioop.add(data[1], data[1], 2), data2[1]) - self.assertEqual(audioop.add(data[2], data[2], 4), data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.add(b'', b'', w), b'') + self.assertEqual(audioop.add(datas[w], b'\0' * len(datas[w]), w), + datas[w]) + self.assertEqual(audioop.add(datas[1], datas[1], 1), + b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.add(datas[2], datas[2], 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 
0x7fff, -0x8000, -2)) + self.assertEqual(audioop.add(datas[4], datas[4], 4), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def test_bias(self): - # Note: this test assumes that avg() works - d1 = audioop.bias(data[0], 1, 100) - d2 = audioop.bias(data[1], 2, 100) - d4 = audioop.bias(data[2], 4, 100) - self.assertEqual(audioop.avg(d1, 1), 101) - self.assertEqual(audioop.avg(d2, 2), 101) - self.assertEqual(audioop.avg(d4, 4), 101) + for w in 1, 2, 4: + for bias in 0, 1, -1, 127, -128, 0x7fffffff, -0x80000000: + self.assertEqual(audioop.bias(b'', w, bias), b'') + self.assertEqual(audioop.bias(datas[1], 1, 1), + b'\x01\x13\x46\xbc\x80\x81\x00') + self.assertEqual(audioop.bias(datas[1], 1, -1), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, 0x7fffffff), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, -0x80000000), + datas[1]) + self.assertEqual(audioop.bias(datas[2], 2, 1), + packs[2](1, 0x1235, 0x4568, -0x4566, -0x8000, -0x7fff, 0)) + self.assertEqual(audioop.bias(datas[2], 2, -1), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, 0x7fffffff), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, -0x80000000), + datas[2]) + self.assertEqual(audioop.bias(datas[4], 4, 1), + packs[4](1, 0x12345679, 0x456789ac, -0x456789aa, + -0x80000000, -0x7fffffff, 0)) + self.assertEqual(audioop.bias(datas[4], 4, -1), + packs[4](-1, 0x12345677, 0x456789aa, -0x456789ac, + 0x7ffffffe, 0x7fffffff, -2)) + self.assertEqual(audioop.bias(datas[4], 4, 0x7fffffff), + packs[4](0x7fffffff, -0x6dcba989, -0x3a987656, 0x3a987654, + -2, -1, 0x7ffffffe)) + self.assertEqual(audioop.bias(datas[4], 4, -0x80000000), + packs[4](-0x80000000, -0x6dcba988, -0x3a987655, 0x3a987655, + -1, 0, 0x7fffffff)) def test_lin2lin(self): - # too simple: we test only the size - for d1 in data: - for d2 in data: - got 
= len(d1)//3 - wtd = len(d2)//3 - self.assertEqual(len(audioop.lin2lin(d1, got, wtd)), len(d2)) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2lin(datas[w], w, w), datas[w]) + + self.assertEqual(audioop.lin2lin(datas[1], 1, 2), + packs[2](0, 0x1200, 0x4500, -0x4500, 0x7f00, -0x8000, -0x100)) + self.assertEqual(audioop.lin2lin(datas[1], 1, 4), + packs[4](0, 0x12000000, 0x45000000, -0x45000000, + 0x7f000000, -0x80000000, -0x1000000)) + self.assertEqual(audioop.lin2lin(datas[2], 2, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[2], 2, 4), + packs[4](0, 0x12340000, 0x45670000, -0x45670000, + 0x7fff0000, -0x80000000, -0x10000)) + self.assertEqual(audioop.lin2lin(datas[4], 4, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[4], 4, 2), + packs[2](0, 0x1234, 0x4567, -0x4568, 0x7fff, -0x8000, -1)) def test_adpcm2lin(self): + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 1, None), + (b'\x00\x00\x00\xff\x00\xff', (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 2, None), + (packs[2](0, 0xb, 0x29, -0x16, 0x72, -0xb3), (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 4, None), + (packs[4](0, 0xb0000, 0x290000, -0x160000, 0x720000, + -0xb30000), (-179, 40))) + # Very cursory test - self.assertEqual(audioop.adpcm2lin(b'\0\0', 1, None), (b'\0' * 4, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 2, None), (b'\0' * 8, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 4, None), (b'\0' * 16, (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.adpcm2lin(b'\0' * 5, w, None), + (b'\0' * w * 10, (0, 0))) def test_lin2adpcm(self): + self.assertEqual(audioop.lin2adpcm(datas[1], 1, None), + (b'\x07\x7f\x7f', (-221, 39))) + self.assertEqual(audioop.lin2adpcm(datas[2], 2, None), + (b'\x07\x7f\x7f', (31, 39))) + self.assertEqual(audioop.lin2adpcm(datas[4], 4, None), + (b'\x07\x7f\x7f', (31, 39))) + # Very cursory test - self.assertEqual(audioop.lin2adpcm(b'\0\0\0\0', 1, 
None), (b'\0\0', (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2adpcm(b'\0' * w * 10, w, None), + (b'\0' * 5, (0, 0))) def test_lin2alaw(self): - self.assertEqual(audioop.lin2alaw(data[0], 1), b'\xd5\xc5\xf5') - self.assertEqual(audioop.lin2alaw(data[1], 2), b'\xd5\xd5\xd5') - self.assertEqual(audioop.lin2alaw(data[2], 4), b'\xd5\xd5\xd5') + self.assertEqual(audioop.lin2alaw(datas[1], 1), + b'\xd5\x87\xa4\x24\xaa\x2a\x5a') + self.assertEqual(audioop.lin2alaw(datas[2], 2), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') + self.assertEqual(audioop.lin2alaw(datas[4], 4), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') def test_alaw2lin(self): - # Cursory - d = audioop.lin2alaw(data[0], 1) - self.assertEqual(audioop.alaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x00\x08\x01\x08\x02\x10') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x08\x00\x00\x01\x08\x00\x00\x02\x10\x00\x00') - else: - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x08\x00\x08\x01\x10\x02') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x00\x08\x00\x00\x00\x08\x01\x00\x00\x10\x02') + encoded = b'\x00\x03\x24\x2a\x51\x54\x55\x58\x6b\x71\x7f'\ + b'\x80\x83\xa4\xaa\xd1\xd4\xd5\xd8\xeb\xf1\xff' + src = [-688, -720, -2240, -4032, -9, -3, -1, -27, -244, -82, -106, + 688, 720, 2240, 4032, 9, 3, 1, 27, 244, 82, 106] + for w in 1, 2, 4: + self.assertEqual(audioop.alaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 13 for x in src))) + + encoded = bytes(range(256)) + for w in 2, 4: + decoded = audioop.alaw2lin(encoded, w) + self.assertEqual(audioop.lin2alaw(decoded, w), encoded) def test_lin2ulaw(self): - self.assertEqual(audioop.lin2ulaw(data[0], 1), b'\xff\xe7\xdb') - self.assertEqual(audioop.lin2ulaw(data[1], 2), b'\xff\xff\xff') - self.assertEqual(audioop.lin2ulaw(data[2], 4), b'\xff\xff\xff') + self.assertEqual(audioop.lin2ulaw(datas[1], 1), + b'\xff\xad\x8e\x0e\x80\x00\x67') + self.assertEqual(audioop.lin2ulaw(datas[2], 2), + b'\xff\xad\x8e\x0e\x80\x00\x7e') + 
self.assertEqual(audioop.lin2ulaw(datas[4], 4), + b'\xff\xad\x8e\x0e\x80\x00\x7e') def test_ulaw2lin(self): - # Cursory - d = audioop.lin2ulaw(data[0], 1) - self.assertEqual(audioop.ulaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x01\x04\x02\x0c') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x01\x04\x00\x00\x02\x0c\x00\x00') - else: - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x04\x01\x0c\x02') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x00\x00\x04\x01\x00\x00\x0c\x02') + encoded = b'\x00\x0e\x28\x3f\x57\x6a\x76\x7c\x7e\x7f'\ + b'\x80\x8e\xa8\xbf\xd7\xea\xf6\xfc\xfe\xff' + src = [-8031, -4447, -1471, -495, -163, -53, -18, -6, -2, 0, + 8031, 4447, 1471, 495, 163, 53, 18, 6, 2, 0] + for w in 1, 2, 4: + self.assertEqual(audioop.ulaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 14 for x in src))) + + # Current u-law implementation has two codes fo 0: 0x7f and 0xff. + encoded = bytes(range(127)) + bytes(range(128, 256)) + for w in 2, 4: + decoded = audioop.ulaw2lin(encoded, w) + self.assertEqual(audioop.lin2ulaw(decoded, w), encoded) def test_mul(self): - data2 = [] - for d in data: - str = bytearray(len(d)) - for i,b in enumerate(d): - str[i] = 2*b - data2.append(str) - self.assertEqual(audioop.mul(data[0], 1, 2), data2[0]) - self.assertEqual(audioop.mul(data[1],2, 2), data2[1]) - self.assertEqual(audioop.mul(data[2], 4, 2), data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.mul(b'', w, 2), b'') + self.assertEqual(audioop.mul(datas[w], w, 0), + b'\0' * len(datas[w])) + self.assertEqual(audioop.mul(datas[w], w, 1), + datas[w]) + self.assertEqual(audioop.mul(datas[1], 1, 2), + b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.mul(datas[2], 2, 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 0x7fff, -0x8000, -2)) + self.assertEqual(audioop.mul(datas[4], 4, 2), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def 
test_ratecv(self): + for w in 1, 2, 4: + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 8000, None), + (b'', (-1, ((0, 0),)))) + self.assertEqual(audioop.ratecv(b'', w, 5, 8000, 8000, None), + (b'', (-1, ((0, 0),) * 5))) + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 16000, None), + (b'', (-2, ((0, 0),)))) + self.assertEqual(audioop.ratecv(datas[w], w, 1, 8000, 8000, None)[0], + datas[w]) state = None - d1, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) - d2, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) + d1, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) + d2, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) self.assertEqual(d1 + d2, b'\000\000\001\001\002\001\000\000\001\001\002') + for w in 1, 2, 4: + d0, state0 = audioop.ratecv(datas[w], w, 1, 8000, 16000, None) + d, state = b'', None + for i in range(0, len(datas[w]), w): + d1, state = audioop.ratecv(datas[w][i:i + w], w, 1, + 8000, 16000, state) + d += d1 + self.assertEqual(d, d0) + self.assertEqual(state, state0) + def test_reverse(self): - self.assertEqual(audioop.reverse(data[0], 1), b'\2\1\0') + for w in 1, 2, 4: + self.assertEqual(audioop.reverse(b'', w), b'') + self.assertEqual(audioop.reverse(packs[w](0, 1, 2), w), + packs[w](2, 1, 0)) def test_tomono(self): - data2 = bytearray() - for d in data[0]: - data2.append(d) - data2.append(d) - self.assertEqual(audioop.tomono(data2, 1, 0.5, 0.5), data[0]) + for w in 1, 2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(data2, w, 1, 0), data1) + self.assertEqual(audioop.tomono(data2, w, 0, 1), b'\0' * len(data1)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(data2, w, 0.5, 0.5), data1) def test_tostereo(self): - data2 = bytearray() - for d in data[0]: - data2.append(d) - data2.append(d) - self.assertEqual(audioop.tostereo(data[0], 1, 1, 1), data2) + for w in 1, 
2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 0), data2) + self.assertEqual(audioop.tostereo(data1, w, 0, 0), b'\0' * len(data2)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 1), data2) def test_findfactor(self): - self.assertEqual(audioop.findfactor(data[1], data[1]), 1.0) + self.assertEqual(audioop.findfactor(datas[2], datas[2]), 1.0) + self.assertEqual(audioop.findfactor(b'\0' * len(datas[2]), datas[2]), + 0.0) def test_findfit(self): - self.assertEqual(audioop.findfit(data[1], data[1]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], datas[2]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], packs[2](1, 2, 0)), + (1, 8038.8)) + self.assertEqual(audioop.findfit(datas[2][:-2] * 5 + datas[2], datas[2]), + (30, 1.0)) def test_findmax(self): - self.assertEqual(audioop.findmax(data[1], 1), 2) + self.assertEqual(audioop.findmax(datas[2], 1), 5) def test_getsample(self): - for i in range(3): - self.assertEqual(audioop.getsample(data[0], 1, i), i) - self.assertEqual(audioop.getsample(data[1], 2, i), i) - self.assertEqual(audioop.getsample(data[2], 4, i), i) + for w in 1, 2, 4: + data = packs[w](0, 1, -1, maxvalues[w], minvalues[w]) + self.assertEqual(audioop.getsample(data, w, 0), 0) + self.assertEqual(audioop.getsample(data, w, 1), 1) + self.assertEqual(audioop.getsample(data, w, 2), -1) + self.assertEqual(audioop.getsample(data, w, 3), maxvalues[w]) + self.assertEqual(audioop.getsample(data, w, 4), minvalues[w]) def test_negativelen(self): # from issue 3306, previously it segfaulted self.assertRaises(audioop.error, - audioop.findmax, ''.join(chr(x) for x in range(256)), -2392392) + audioop.findmax, bytes(range(256)), -2392392) def test_issue7673(self): state = None @@ -222,9 +367,9 @@ self.assertRaises(audioop.error, audioop.lin2adpcm, data, size, state) def test_wrongsize(self): - 
data = b'abc' + data = b'abcdefgh' state = None - for size in (-1, 3, 5): + for size in (-1, 0, 3, 5, 1024): self.assertRaises(audioop.error, audioop.ulaw2lin, data, size) self.assertRaises(audioop.error, audioop.alaw2lin, data, size) self.assertRaises(audioop.error, audioop.adpcm2lin, data, size, state) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -215,6 +215,12 @@ Library ------- +- Issue #16686: Fixed a lot of bugs in audioop module. Fixed crashes in + avgpp(), maxpp() and ratecv(). Fixed an integer overflow in add(), bias(), + and ratecv(). reverse(), lin2lin() and ratecv() no more lose precision for + 32-bit samples. max() and rms() no more returns a negative result and + various other functions now work correctly with 32-bit sample -0x80000000. + - Issue #17073: Fix some integer overflows in sqlite3 module. - Issue #17114: IDLE now uses non-strict config parser. diff --git a/Modules/audioop.c b/Modules/audioop.c --- a/Modules/audioop.c +++ b/Modules/audioop.c @@ -26,6 +26,21 @@ #endif #endif +static const int maxvals[] = {0, 0x7F, 0x7FFF, 0x7FFFFF, 0x7FFFFFFF}; +static const int minvals[] = {0, -0x80, -0x8000, -0x800000, -0x80000000}; +static const unsigned int masks[] = {0, 0xFF, 0xFFFF, 0xFFFFFF, 0xFFFFFFFF}; + +static int +fbound(double val, double minval, double maxval) +{ + if (val > maxval) + val = maxval; + else if (val < minval + 1) + val = minval; + return val; +} + + /* Code shamelessly stolen from sox, 12.17.7, g711.c ** (c) Craig Reese, Joe Campbell and Jeff Poskanzer 1989 */ @@ -347,7 +362,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; - int max = 0; + unsigned int absval, max = 0; if ( !PyArg_ParseTuple(args, "s#i:max", &cp, &len, &size) ) return 0; @@ -357,10 +372,11 @@ if ( size == 1 ) val = (int)*CHARP(cp, i); else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( val < 0 ) val = (-val); - if ( val > max ) max = val; + if (val < 0) absval = (-val); + else 
absval = val; + if (absval > max) max = absval; } - return PyLong_FromLong(max); + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -369,7 +385,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; - int min = 0x7fffffff, max = -0x7fffffff; + int min = 0x7fffffff, max = -0x80000000; if (!PyArg_ParseTuple(args, "s#i:minmax", &cp, &len, &size)) return NULL; @@ -406,7 +422,7 @@ if ( len == 0 ) val = 0; else - val = (int)(avg / (double)(len/size)); + val = (int)floor(avg / (double)(len/size)); return PyLong_FromLong(val); } @@ -416,6 +432,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; + unsigned int res; double sum_squares = 0.0; if ( !PyArg_ParseTuple(args, "s#i:rms", &cp, &len, &size) ) @@ -429,10 +446,10 @@ sum_squares += (double)val*(double)val; } if ( len == 0 ) - val = 0; + res = 0; else - val = (int)sqrt(sum_squares / (double)(len/size)); - return PyLong_FromLong(val); + res = (unsigned int)sqrt(sum_squares / (double)(len/size)); + return PyLong_FromUnsignedLong(res); } static double _sum2(short *a, short *b, Py_ssize_t len) @@ -624,52 +641,46 @@ Py_ssize_t len, i; int size, val = 0, prevval = 0, prevextremevalid = 0, prevextreme = 0; - double avg = 0.0; - int diff, prevdiff, extremediff, nextreme = 0; + double sum = 0.0; + unsigned int avg; + int diff, prevdiff, nextreme = 0; if ( !PyArg_ParseTuple(args, "s#i:avgpp", &cp, &len, &size) ) return 0; if (!audioop_check_parameters(len, size)) return NULL; - /* Compute first delta value ahead. 
Also automatically makes us - ** skip the first extreme value - */ + if (len <= size) + return PyLong_FromLong(0); if ( size == 1 ) prevval = (int)*CHARP(cp, 0); else if ( size == 2 ) prevval = (int)*SHORTP(cp, 0); else if ( size == 4 ) prevval = (int)*LONGP(cp, 0); - if ( size == 1 ) val = (int)*CHARP(cp, size); - else if ( size == 2 ) val = (int)*SHORTP(cp, size); - else if ( size == 4 ) val = (int)*LONGP(cp, size); - prevdiff = val - prevval; - + prevdiff = 17; /* Anything != 0, 1 */ for ( i=size; i max ) - max = extremediff; + if (val != prevval) { + diff = val < prevval; + if (prevdiff == !diff) { + /* Derivative changed sign. Compute difference to + ** last extreme value and remember. + */ + if (prevextremevalid) { + if (prevval < prevextreme) + extremediff = (unsigned int)prevextreme - + (unsigned int)prevval; + else + extremediff = (unsigned int)prevval - + (unsigned int)prevextreme; + if ( extremediff > max ) + max = extremediff; + } + prevextremevalid = 1; + prevextreme = prevval; } - prevextremevalid = 1; - prevextreme = prevval; + prevval = val; + prevdiff = diff; } - prevval = val; - if ( diff != 0 ) - prevdiff = diff; } - return PyLong_FromLong(max); + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -755,7 +765,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val = 0; - double factor, fval, maxval; + double factor, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#id:mul", &cp, &len, &size, &factor ) ) @@ -763,13 +773,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len); if ( rv == 0 ) @@ -782,9 +787,7 @@ else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size 
== 4 ) val = (int)*LONGP(cp, i); fval = (double)val*factor; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val = (int)fval; + val = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i) = (signed char)val; else if ( size == 2 ) *SHORTP(ncp, i) = (short)val; else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)val; @@ -799,7 +802,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val1 = 0, val2 = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s*idd:tomono", @@ -817,14 +820,8 @@ return NULL; } - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyBuffer_Release(&pcp); - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len/2); if ( rv == 0 ) { @@ -842,9 +839,7 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp, i+2); else if ( size == 4 ) val2 = (int)*LONGP(cp, i+4); fval = (double)val1*fac1 + (double)val2*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i/2) = (signed char)val1; else if ( size == 2 ) *SHORTP(ncp, i/2) = (short)val1; else if ( size == 4 ) *LONGP(ncp, i/2)= (Py_Int32)val1; @@ -859,7 +854,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val1, val2, val = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#idd:tostereo", @@ -868,13 +863,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - 
PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; if (len > PY_SSIZE_T_MAX/2) { PyErr_SetString(PyExc_MemoryError, @@ -894,14 +884,10 @@ else if ( size == 4 ) val = (int)*LONGP(cp, i); fval = (double)val*fac1; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); fval = (double)val*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val2 = (int)fval; + val2 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i*2) = (signed char)val1; else if ( size == 2 ) *SHORTP(ncp, i*2) = (short)val1; @@ -919,7 +905,7 @@ { signed char *cp1, *cp2, *ncp; Py_ssize_t len1, len2, i; - int size, val1 = 0, val2 = 0, maxval, newval; + int size, val1 = 0, val2 = 0, minval, maxval, newval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#s#i:add", @@ -932,13 +918,8 @@ return 0; } - if ( size == 1 ) maxval = 0x7f; - else if ( size == 2 ) maxval = 0x7fff; - else if ( size == 4 ) maxval = 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = maxvals[size]; + minval = minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len1); if ( rv == 0 ) @@ -954,12 +935,19 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp2, i); else if ( size == 4 ) val2 = (int)*LONGP(cp2, i); - newval = val1 + val2; - /* truncate in case of overflow */ - if (newval > maxval) newval = maxval; - else if (newval < -maxval) newval = -maxval; - else if (size == 4 && (newval^val1) < 0 && (newval^val2) < 0) - newval = val1 > 0 ? 
maxval : - maxval; + if (size < 4) { + newval = val1 + val2; + /* truncate in case of overflow */ + if (newval > maxval) + newval = maxval; + else if (newval < minval) + newval = minval; + } + else { + double fval = (double)val1 + (double)val2; + /* truncate in case of overflow */ + newval = (int)floor(fbound(fval, minval, maxval)); + } if ( size == 1 ) *CHARP(ncp, i) = (signed char)newval; else if ( size == 2 ) *SHORTP(ncp, i) = (short)newval; @@ -973,9 +961,9 @@ { signed char *cp, *ncp; Py_ssize_t len, i; - int size, val = 0; + int size, bias; + unsigned int val = 0, mask; PyObject *rv; - int bias; if ( !PyArg_ParseTuple(args, "s#ii:bias", &cp, &len, &size , &bias) ) @@ -989,15 +977,20 @@ return 0; ncp = (signed char *)PyBytes_AsString(rv); + mask = masks[size]; for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = (int)*CHARP(cp, i); - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = (int)*LONGP(cp, i); + if ( size == 1 ) val = (unsigned int)(unsigned char)*CHARP(cp, i); + else if ( size == 2 ) val = (unsigned int)(unsigned short)*SHORTP(cp, i); + else if ( size == 4 ) val = (unsigned int)(Py_UInt32)*LONGP(cp, i); - if ( size == 1 ) *CHARP(ncp, i) = (signed char)(val+bias); - else if ( size == 2 ) *SHORTP(ncp, i) = (short)(val+bias); - else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(val+bias); + val += (unsigned int)bias; + /* wrap around in case of overflow */ + val &= mask; + + if ( size == 1 ) *CHARP(ncp, i) = (signed char)(unsigned char)val; + else if ( size == 2 ) *SHORTP(ncp, i) = (short)(unsigned short)val; + else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(Py_UInt32)val; } return rv; } @@ -1024,15 +1017,15 @@ ncp = (unsigned char *)PyBytes_AsString(rv); for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 
2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); j = len - i - size; - if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val); - else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1066,13 +1059,13 @@ ncp = (unsigned char *)PyBytes_AsString(rv); for ( i=0, j=0; i < len; i += size, j += size2 ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size2 == 2 ) *SHORTP(ncp, j) = (short)(val); - else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( size2 == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1136,6 +1129,10 @@ d = gcd(inrate, outrate); inrate /= d; outrate /= d; + /* divide weightA and weightB by their greatest common divisor */ + d = gcd(weightA, weightB); + weightA /= d; + weightB /= d; if ((size_t)nchannels > PY_SIZE_MAX/sizeof(int)) { PyErr_SetString(PyExc_MemoryError, @@ -1175,7 +1172,9 @@ } /* str <- Space for the output buffer. */ - { + if (len == 0) + str = PyBytes_FromStringAndSize(NULL, 0); + else { /* There are len input frames, so we need (mathematically) ceiling(len*outrate/inrate) output frames, and each frame requires bytes_per_frame bytes.
Computing this @@ -1190,12 +1189,11 @@ else str = PyBytes_FromStringAndSize(NULL, q * outrate * bytes_per_frame); - - if (str == NULL) { - PyErr_SetString(PyExc_MemoryError, - "not enough memory for output buffer"); - goto exit; - } + } + if (str == NULL) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + goto exit; } ncp = PyBytes_AsString(str); @@ -1229,32 +1227,32 @@ for (chan = 0; chan < nchannels; chan++) { prev_i[chan] = cur_i[chan]; if (size == 1) - cur_i[chan] = ((int)*CHARP(cp, 0)) << 8; + cur_i[chan] = ((int)*CHARP(cp, 0)) << 24; else if (size == 2) - cur_i[chan] = (int)*SHORTP(cp, 0); + cur_i[chan] = ((int)*SHORTP(cp, 0)) << 16; else if (size == 4) - cur_i[chan] = ((int)*LONGP(cp, 0)) >> 16; + cur_i[chan] = (int)*LONGP(cp, 0); cp += size; /* implements a simple digital filter */ - cur_i[chan] = - (weightA * cur_i[chan] + - weightB * prev_i[chan]) / - (weightA + weightB); + cur_i[chan] = (int)( + ((double)weightA * (double)cur_i[chan] + + (double)weightB * (double)prev_i[chan]) / + ((double)weightA + (double)weightB)); } len--; d += outrate; } while (d >= 0) { for (chan = 0; chan < nchannels; chan++) { - cur_o = (prev_i[chan] * d + - cur_i[chan] * (outrate - d)) / - outrate; + cur_o = (int)(((double)prev_i[chan] * (double)d + + (double)cur_i[chan] * (double)(outrate - d)) / + (double)outrate); if (size == 1) - *CHARP(ncp, 0) = (signed char)(cur_o >> 8); + *CHARP(ncp, 0) = (signed char)(cur_o >> 24); else if (size == 2) - *SHORTP(ncp, 0) = (short)(cur_o); + *SHORTP(ncp, 0) = (short)(cur_o >> 16); else if (size == 4) - *LONGP(ncp, 0) = (Py_Int32)(cur_o<<16); + *LONGP(ncp, 0) = (Py_Int32)(cur_o); ncp += size; } d -= inrate; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:17:49 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:17:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= 
=?utf-8?q?_Issue_=2316686=3A_Fixed_a_lot_of_bugs_in_audioop_module=2E?= Message-ID: <3Z372j5wMPzSZg@mail.python.org> http://hg.python.org/cpython/rev/63b164708e60 changeset: 82078:63b164708e60 branch: 3.3 parent: 82074:9828c4ffb401 parent: 82077:104b17f8316b user: Serhiy Storchaka date: Sat Feb 09 11:12:36 2013 +0200 summary: Issue #16686: Fixed a lot of bugs in audioop module. * avgpp() and maxpp() no longer crash on empty and 1-sample input fragments. They now work when peak-peak values are greater than INT_MAX. * ratecv() no longer crashes on an empty input fragment. * Fixed an integer overflow in ratecv(). * Fixed an integer overflow in add() and bias() for 32-bit samples. * reverse(), lin2lin() and ratecv() no longer lose precision for 32-bit samples. * max() and rms() no longer return a negative result for 32-bit sample -0x80000000. * minmax() now returns the correct max value for 32-bit sample -0x80000000. * avg(), mul(), tomono() and tostereo() now round negative results down and can return 32-bit sample -0x80000000. * add() now can return 32-bit sample -0x80000000. files: Doc/library/audioop.rst | 6 +- Lib/test/test_audioop.py | 399 ++++++++++++++++++-------- Misc/NEWS | 6 + Modules/audioop.c | 310 ++++++++++---------- 4 files changed, 435 insertions(+), 286 deletions(-) diff --git a/Doc/library/audioop.rst b/Doc/library/audioop.rst --- a/Doc/library/audioop.rst +++ b/Doc/library/audioop.rst @@ -36,7 +36,7 @@ Return a fragment which is the addition of the two samples passed as parameters. *width* is the sample width in bytes, either ``1``, ``2`` or ``4``. Both - fragments should have the same length. + fragments should have the same length. Samples are truncated in case of overflow. .. function:: adpcm2lin(adpcmfragment, width, state) @@ -67,7 +67,7 @@ .. function:: bias(fragment, width, bias) Return a fragment that is the original fragment with a bias added to each - sample. + sample. Samples wrap around in case of overflow. ..
function:: cross(fragment, width) @@ -175,7 +175,7 @@ .. function:: mul(fragment, width, factor) Return a fragment that has all samples in the original fragment multiplied by - the floating-point value *factor*. Overflow is silently ignored. + the floating-point value *factor*. Samples are truncated in case of overflow. .. function:: ratecv(fragment, width, nchannels, inrate, outrate, state[, weightA[, weightB]]) diff --git a/Lib/test/test_audioop.py b/Lib/test/test_audioop.py --- a/Lib/test/test_audioop.py +++ b/Lib/test/test_audioop.py @@ -1,25 +1,21 @@ import audioop +import sys import unittest from test.support import run_unittest -endian = 'big' if audioop.getsample(b'\0\1', 2, 0) == 1 else 'little' +def pack(width, data): + return b''.join(v.to_bytes(width, sys.byteorder, signed=True) for v in data) -def gendata1(): - return b'\0\1\2' +packs = {w: (lambda *data, width=w: pack(width, data)) for w in (1, 2, 4)} +maxvalues = {w: (1 << (8 * w - 1)) - 1 for w in (1, 2, 4)} +minvalues = {w: -1 << (8 * w - 1) for w in (1, 2, 4)} -def gendata2(): - if endian == 'big': - return b'\0\0\0\1\0\2' - else: - return b'\0\0\1\0\2\0' - -def gendata4(): - if endian == 'big': - return b'\0\0\0\0\0\0\0\1\0\0\0\2' - else: - return b'\0\0\0\0\1\0\0\0\2\0\0\0' - -data = [gendata1(), gendata2(), gendata4()] +datas = { + 1: b'\x00\x12\x45\xbb\x7f\x80\xff', + 2: packs[2](0, 0x1234, 0x4567, -0x4567, 0x7fff, -0x8000, -1), + 4: packs[4](0, 0x12345678, 0x456789ab, -0x456789ab, + 0x7fffffff, -0x80000000, -1), +} INVALID_DATA = [ (b'abc', 0), @@ -31,171 +27,320 @@ class TestAudioop(unittest.TestCase): def test_max(self): - self.assertEqual(audioop.max(data[0], 1), 2) - self.assertEqual(audioop.max(data[1], 2), 2) - self.assertEqual(audioop.max(data[2], 4), 2) + for w in 1, 2, 4: + self.assertEqual(audioop.max(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.max(p(5), w), 5) + self.assertEqual(audioop.max(p(5, -8, -1), w), 8) + self.assertEqual(audioop.max(p(maxvalues[w]), w), 
maxvalues[w]) + self.assertEqual(audioop.max(p(minvalues[w]), w), -minvalues[w]) + self.assertEqual(audioop.max(datas[w], w), -minvalues[w]) def test_minmax(self): - self.assertEqual(audioop.minmax(data[0], 1), (0, 2)) - self.assertEqual(audioop.minmax(data[1], 2), (0, 2)) - self.assertEqual(audioop.minmax(data[2], 4), (0, 2)) + for w in 1, 2, 4: + self.assertEqual(audioop.minmax(b'', w), + (0x7fffffff, -0x80000000)) + p = packs[w] + self.assertEqual(audioop.minmax(p(5), w), (5, 5)) + self.assertEqual(audioop.minmax(p(5, -8, -1), w), (-8, 5)) + self.assertEqual(audioop.minmax(p(maxvalues[w]), w), + (maxvalues[w], maxvalues[w])) + self.assertEqual(audioop.minmax(p(minvalues[w]), w), + (minvalues[w], minvalues[w])) + self.assertEqual(audioop.minmax(datas[w], w), + (minvalues[w], maxvalues[w])) def test_maxpp(self): - self.assertEqual(audioop.maxpp(data[0], 1), 0) - self.assertEqual(audioop.maxpp(data[1], 2), 0) - self.assertEqual(audioop.maxpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.maxpp(b'', w), 0) + self.assertEqual(audioop.maxpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.maxpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) + self.assertEqual(audioop.maxpp(datas[w], w), + maxvalues[w] - minvalues[w]) def test_avg(self): - self.assertEqual(audioop.avg(data[0], 1), 1) - self.assertEqual(audioop.avg(data[1], 2), 1) - self.assertEqual(audioop.avg(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.avg(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.avg(p(5), w), 5) + self.assertEqual(audioop.avg(p(5, 8), w), 6) + self.assertEqual(audioop.avg(p(5, -8), w), -2) + self.assertEqual(audioop.avg(p(maxvalues[w], maxvalues[w]), w), + maxvalues[w]) + self.assertEqual(audioop.avg(p(minvalues[w], minvalues[w]), w), + minvalues[w]) + self.assertEqual(audioop.avg(packs[4](0x50000000, 0x70000000), 4), + 0x60000000) + self.assertEqual(audioop.avg(packs[4](-0x50000000, -0x70000000), 4), + -0x60000000) def test_avgpp(self): -
self.assertEqual(audioop.avgpp(data[0], 1), 0) - self.assertEqual(audioop.avgpp(data[1], 2), 0) - self.assertEqual(audioop.avgpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.avgpp(b'', w), 0) + self.assertEqual(audioop.avgpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.avgpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) + self.assertEqual(audioop.avgpp(datas[1], 1), 196) + self.assertEqual(audioop.avgpp(datas[2], 2), 50534) + self.assertEqual(audioop.avgpp(datas[4], 4), 3311897002) def test_rms(self): - self.assertEqual(audioop.rms(data[0], 1), 1) - self.assertEqual(audioop.rms(data[1], 2), 1) - self.assertEqual(audioop.rms(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.rms(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.rms(p(*range(100)), w), 57) + self.assertAlmostEqual(audioop.rms(p(maxvalues[w]) * 5, w), + maxvalues[w], delta=1) + self.assertAlmostEqual(audioop.rms(p(minvalues[w]) * 5, w), + -minvalues[w], delta=1) + self.assertEqual(audioop.rms(datas[1], 1), 77) + self.assertEqual(audioop.rms(datas[2], 2), 20001) + self.assertEqual(audioop.rms(datas[4], 4), 1310854152) def test_cross(self): - self.assertEqual(audioop.cross(data[0], 1), 0) - self.assertEqual(audioop.cross(data[1], 2), 0) - self.assertEqual(audioop.cross(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.cross(b'', w), -1) + p = packs[w] + self.assertEqual(audioop.cross(p(0, 1, 2), w), 0) + self.assertEqual(audioop.cross(p(1, 2, -3, -4), w), 1) + self.assertEqual(audioop.cross(p(-1, -2, 3, 4), w), 1) + self.assertEqual(audioop.cross(p(0, minvalues[w]), w), 1) + self.assertEqual(audioop.cross(p(minvalues[w], maxvalues[w]), w), 1) def test_add(self): - data2 = [] - for d in data: - str = bytearray(len(d)) - for i,b in enumerate(d): - str[i] = 2*b - data2.append(str) - self.assertEqual(audioop.add(data[0], data[0], 1), data2[0]) - self.assertEqual(audioop.add(data[1], data[1], 2), data2[1]) - self.assertEqual(audioop.add(data[2], data[2], 4), 
data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.add(b'', b'', w), b'') + self.assertEqual(audioop.add(datas[w], b'\0' * len(datas[w]), w), + datas[w]) + self.assertEqual(audioop.add(datas[1], datas[1], 1), + b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.add(datas[2], datas[2], 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 0x7fff, -0x8000, -2)) + self.assertEqual(audioop.add(datas[4], datas[4], 4), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def test_bias(self): - # Note: this test assumes that avg() works - d1 = audioop.bias(data[0], 1, 100) - d2 = audioop.bias(data[1], 2, 100) - d4 = audioop.bias(data[2], 4, 100) - self.assertEqual(audioop.avg(d1, 1), 101) - self.assertEqual(audioop.avg(d2, 2), 101) - self.assertEqual(audioop.avg(d4, 4), 101) + for w in 1, 2, 4: + for bias in 0, 1, -1, 127, -128, 0x7fffffff, -0x80000000: + self.assertEqual(audioop.bias(b'', w, bias), b'') + self.assertEqual(audioop.bias(datas[1], 1, 1), + b'\x01\x13\x46\xbc\x80\x81\x00') + self.assertEqual(audioop.bias(datas[1], 1, -1), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, 0x7fffffff), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, -0x80000000), + datas[1]) + self.assertEqual(audioop.bias(datas[2], 2, 1), + packs[2](1, 0x1235, 0x4568, -0x4566, -0x8000, -0x7fff, 0)) + self.assertEqual(audioop.bias(datas[2], 2, -1), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, 0x7fffffff), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, -0x80000000), + datas[2]) + self.assertEqual(audioop.bias(datas[4], 4, 1), + packs[4](1, 0x12345679, 0x456789ac, -0x456789aa, + -0x80000000, -0x7fffffff, 0)) + self.assertEqual(audioop.bias(datas[4], 4, -1), + packs[4](-1, 0x12345677, 0x456789aa, -0x456789ac, + 0x7ffffffe, 0x7fffffff, -2)) + self.assertEqual(audioop.bias(datas[4], 4, 
0x7fffffff), + packs[4](0x7fffffff, -0x6dcba989, -0x3a987656, 0x3a987654, + -2, -1, 0x7ffffffe)) + self.assertEqual(audioop.bias(datas[4], 4, -0x80000000), + packs[4](-0x80000000, -0x6dcba988, -0x3a987655, 0x3a987655, + -1, 0, 0x7fffffff)) def test_lin2lin(self): - # too simple: we test only the size - for d1 in data: - for d2 in data: - got = len(d1)//3 - wtd = len(d2)//3 - self.assertEqual(len(audioop.lin2lin(d1, got, wtd)), len(d2)) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2lin(datas[w], w, w), datas[w]) + + self.assertEqual(audioop.lin2lin(datas[1], 1, 2), + packs[2](0, 0x1200, 0x4500, -0x4500, 0x7f00, -0x8000, -0x100)) + self.assertEqual(audioop.lin2lin(datas[1], 1, 4), + packs[4](0, 0x12000000, 0x45000000, -0x45000000, + 0x7f000000, -0x80000000, -0x1000000)) + self.assertEqual(audioop.lin2lin(datas[2], 2, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[2], 2, 4), + packs[4](0, 0x12340000, 0x45670000, -0x45670000, + 0x7fff0000, -0x80000000, -0x10000)) + self.assertEqual(audioop.lin2lin(datas[4], 4, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[4], 4, 2), + packs[2](0, 0x1234, 0x4567, -0x4568, 0x7fff, -0x8000, -1)) def test_adpcm2lin(self): + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 1, None), + (b'\x00\x00\x00\xff\x00\xff', (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 2, None), + (packs[2](0, 0xb, 0x29, -0x16, 0x72, -0xb3), (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 4, None), + (packs[4](0, 0xb0000, 0x290000, -0x160000, 0x720000, + -0xb30000), (-179, 40))) + # Very cursory test - self.assertEqual(audioop.adpcm2lin(b'\0\0', 1, None), (b'\0' * 4, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 2, None), (b'\0' * 8, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 4, None), (b'\0' * 16, (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.adpcm2lin(b'\0' * 5, w, None), + (b'\0' * w * 10, (0, 0))) def test_lin2adpcm(self): 
+ self.assertEqual(audioop.lin2adpcm(datas[1], 1, None), + (b'\x07\x7f\x7f', (-221, 39))) + self.assertEqual(audioop.lin2adpcm(datas[2], 2, None), + (b'\x07\x7f\x7f', (31, 39))) + self.assertEqual(audioop.lin2adpcm(datas[4], 4, None), + (b'\x07\x7f\x7f', (31, 39))) + # Very cursory test - self.assertEqual(audioop.lin2adpcm(b'\0\0\0\0', 1, None), (b'\0\0', (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2adpcm(b'\0' * w * 10, w, None), + (b'\0' * 5, (0, 0))) def test_lin2alaw(self): - self.assertEqual(audioop.lin2alaw(data[0], 1), b'\xd5\xc5\xf5') - self.assertEqual(audioop.lin2alaw(data[1], 2), b'\xd5\xd5\xd5') - self.assertEqual(audioop.lin2alaw(data[2], 4), b'\xd5\xd5\xd5') + self.assertEqual(audioop.lin2alaw(datas[1], 1), + b'\xd5\x87\xa4\x24\xaa\x2a\x5a') + self.assertEqual(audioop.lin2alaw(datas[2], 2), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') + self.assertEqual(audioop.lin2alaw(datas[4], 4), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') def test_alaw2lin(self): - # Cursory - d = audioop.lin2alaw(data[0], 1) - self.assertEqual(audioop.alaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x00\x08\x01\x08\x02\x10') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x08\x00\x00\x01\x08\x00\x00\x02\x10\x00\x00') - else: - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x08\x00\x08\x01\x10\x02') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x00\x08\x00\x00\x00\x08\x01\x00\x00\x10\x02') + encoded = b'\x00\x03\x24\x2a\x51\x54\x55\x58\x6b\x71\x7f'\ + b'\x80\x83\xa4\xaa\xd1\xd4\xd5\xd8\xeb\xf1\xff' + src = [-688, -720, -2240, -4032, -9, -3, -1, -27, -244, -82, -106, + 688, 720, 2240, 4032, 9, 3, 1, 27, 244, 82, 106] + for w in 1, 2, 4: + self.assertEqual(audioop.alaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 13 for x in src))) + + encoded = bytes(range(256)) + for w in 2, 4: + decoded = audioop.alaw2lin(encoded, w) + self.assertEqual(audioop.lin2alaw(decoded, w), encoded) def test_lin2ulaw(self): - 
self.assertEqual(audioop.lin2ulaw(data[0], 1), b'\xff\xe7\xdb') - self.assertEqual(audioop.lin2ulaw(data[1], 2), b'\xff\xff\xff') - self.assertEqual(audioop.lin2ulaw(data[2], 4), b'\xff\xff\xff') + self.assertEqual(audioop.lin2ulaw(datas[1], 1), + b'\xff\xad\x8e\x0e\x80\x00\x67') + self.assertEqual(audioop.lin2ulaw(datas[2], 2), + b'\xff\xad\x8e\x0e\x80\x00\x7e') + self.assertEqual(audioop.lin2ulaw(datas[4], 4), + b'\xff\xad\x8e\x0e\x80\x00\x7e') def test_ulaw2lin(self): - # Cursory - d = audioop.lin2ulaw(data[0], 1) - self.assertEqual(audioop.ulaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x01\x04\x02\x0c') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x01\x04\x00\x00\x02\x0c\x00\x00') - else: - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x04\x01\x0c\x02') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x00\x00\x04\x01\x00\x00\x0c\x02') + encoded = b'\x00\x0e\x28\x3f\x57\x6a\x76\x7c\x7e\x7f'\ + b'\x80\x8e\xa8\xbf\xd7\xea\xf6\xfc\xfe\xff' + src = [-8031, -4447, -1471, -495, -163, -53, -18, -6, -2, 0, + 8031, 4447, 1471, 495, 163, 53, 18, 6, 2, 0] + for w in 1, 2, 4: + self.assertEqual(audioop.ulaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 14 for x in src))) + + # Current u-law implementation has two codes for 0: 0x7f and 0xff.
+ encoded = bytes(range(127)) + bytes(range(128, 256)) + for w in 2, 4: + decoded = audioop.ulaw2lin(encoded, w) + self.assertEqual(audioop.lin2ulaw(decoded, w), encoded) def test_mul(self): - data2 = [] - for d in data: - str = bytearray(len(d)) - for i,b in enumerate(d): - str[i] = 2*b - data2.append(str) - self.assertEqual(audioop.mul(data[0], 1, 2), data2[0]) - self.assertEqual(audioop.mul(data[1], 2, 2), data2[1]) - self.assertEqual(audioop.mul(data[2], 4, 2), data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.mul(b'', w, 2), b'') + self.assertEqual(audioop.mul(datas[w], w, 0), + b'\0' * len(datas[w])) + self.assertEqual(audioop.mul(datas[w], w, 1), + datas[w]) + self.assertEqual(audioop.mul(datas[1], 1, 2), + b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.mul(datas[2], 2, 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 0x7fff, -0x8000, -2)) + self.assertEqual(audioop.mul(datas[4], 4, 2), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def test_ratecv(self): + for w in 1, 2, 4: + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 8000, None), + (b'', (-1, ((0, 0),)))) + self.assertEqual(audioop.ratecv(b'', w, 5, 8000, 8000, None), + (b'', (-1, ((0, 0),) * 5))) + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 16000, None), + (b'', (-2, ((0, 0),)))) + self.assertEqual(audioop.ratecv(datas[w], w, 1, 8000, 8000, None)[0], + datas[w]) state = None - d1, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) - d2, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) + d1, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) + d2, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) self.assertEqual(d1 + d2, b'\000\000\001\001\002\001\000\000\001\001\002') + for w in 1, 2, 4: + d0, state0 = audioop.ratecv(datas[w], w, 1, 8000, 16000, None) + d, state = b'', None + for i in range(0, len(datas[w]), w): + d1, state = audioop.ratecv(datas[w][i:i + w], w, 1, + 8000, 16000, state) + d += d1 +
self.assertEqual(d, d0) + self.assertEqual(state, state0) + def test_reverse(self): - self.assertEqual(audioop.reverse(data[0], 1), b'\2\1\0') + for w in 1, 2, 4: + self.assertEqual(audioop.reverse(b'', w), b'') + self.assertEqual(audioop.reverse(packs[w](0, 1, 2), w), + packs[w](2, 1, 0)) def test_tomono(self): - data2 = bytearray() - for d in data[0]: - data2.append(d) - data2.append(d) - self.assertEqual(audioop.tomono(data2, 1, 0.5, 0.5), data[0]) + for w in 1, 2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(data2, w, 1, 0), data1) + self.assertEqual(audioop.tomono(data2, w, 0, 1), b'\0' * len(data1)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(data2, w, 0.5, 0.5), data1) def test_tostereo(self): - data2 = bytearray() - for d in data[0]: - data2.append(d) - data2.append(d) - self.assertEqual(audioop.tostereo(data[0], 1, 1, 1), data2) + for w in 1, 2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 0), data2) + self.assertEqual(audioop.tostereo(data1, w, 0, 0), b'\0' * len(data2)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 1), data2) def test_findfactor(self): - self.assertEqual(audioop.findfactor(data[1], data[1]), 1.0) + self.assertEqual(audioop.findfactor(datas[2], datas[2]), 1.0) + self.assertEqual(audioop.findfactor(b'\0' * len(datas[2]), datas[2]), + 0.0) def test_findfit(self): - self.assertEqual(audioop.findfit(data[1], data[1]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], datas[2]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], packs[2](1, 2, 0)), + (1, 8038.8)) + self.assertEqual(audioop.findfit(datas[2][:-2] * 5 + datas[2], datas[2]), + (30, 1.0)) def test_findmax(self): - self.assertEqual(audioop.findmax(data[1], 1), 2) + 
self.assertEqual(audioop.findmax(datas[2], 1), 5) def test_getsample(self): - for i in range(3): - self.assertEqual(audioop.getsample(data[0], 1, i), i) - self.assertEqual(audioop.getsample(data[1], 2, i), i) - self.assertEqual(audioop.getsample(data[2], 4, i), i) + for w in 1, 2, 4: + data = packs[w](0, 1, -1, maxvalues[w], minvalues[w]) + self.assertEqual(audioop.getsample(data, w, 0), 0) + self.assertEqual(audioop.getsample(data, w, 1), 1) + self.assertEqual(audioop.getsample(data, w, 2), -1) + self.assertEqual(audioop.getsample(data, w, 3), maxvalues[w]) + self.assertEqual(audioop.getsample(data, w, 4), minvalues[w]) def test_negativelen(self): # from issue 3306, previously it segfaulted self.assertRaises(audioop.error, - audioop.findmax, ''.join(chr(x) for x in range(256)), -2392392) + audioop.findmax, bytes(range(256)), -2392392) def test_issue7673(self): state = None @@ -222,9 +367,9 @@ self.assertRaises(audioop.error, audioop.lin2adpcm, data, size, state) def test_wrongsize(self): - data = b'abc' + data = b'abcdefgh' state = None - for size in (-1, 3, 5): + for size in (-1, 0, 3, 5, 1024): self.assertRaises(audioop.error, audioop.ulaw2lin, data, size) self.assertRaises(audioop.error, audioop.alaw2lin, data, size) self.assertRaises(audioop.error, audioop.adpcm2lin, data, size, state) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -169,6 +169,12 @@ Library ------- +- Issue #16686: Fixed a lot of bugs in audioop module. Fixed crashes in + avgpp(), maxpp() and ratecv(). Fixed an integer overflow in add(), bias(), + and ratecv(). reverse(), lin2lin() and ratecv() no longer lose precision for + 32-bit samples. max() and rms() no longer return a negative result, and + various other functions now work correctly with 32-bit sample -0x80000000. + - Issue #17073: Fix some integer overflows in sqlite3 module. - Issue #17114: IDLE now uses non-strict config parser.
diff --git a/Modules/audioop.c b/Modules/audioop.c --- a/Modules/audioop.c +++ b/Modules/audioop.c @@ -26,6 +26,21 @@ #endif #endif +static const int maxvals[] = {0, 0x7F, 0x7FFF, 0x7FFFFF, 0x7FFFFFFF}; +static const int minvals[] = {0, -0x80, -0x8000, -0x800000, -0x80000000}; +static const unsigned int masks[] = {0, 0xFF, 0xFFFF, 0xFFFFFF, 0xFFFFFFFF}; + +static double +fbound(double val, double minval, double maxval) +{ + if (val > maxval) + val = maxval; + else if (val < minval + 1) + val = minval; + return val; +} + + /* Code shamelessly stolen from sox, 12.17.7, g711.c ** (c) Craig Reese, Joe Campbell and Jeff Poskanzer 1989 */ @@ -347,7 +362,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; - int max = 0; + unsigned int absval, max = 0; if ( !PyArg_ParseTuple(args, "s#i:max", &cp, &len, &size) ) return 0; @@ -357,10 +372,11 @@ if ( size == 1 ) val = (int)*CHARP(cp, i); else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( val < 0 ) val = (-val); - if ( val > max ) max = val; + if (val < 0) absval = (-val); + else absval = val; + if (absval > max) max = absval; } - return PyLong_FromLong(max); + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -369,7 +385,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; - int min = 0x7fffffff, max = -0x7fffffff; + int min = 0x7fffffff, max = -0x80000000; if (!PyArg_ParseTuple(args, "s#i:minmax", &cp, &len, &size)) return NULL; @@ -406,7 +422,7 @@ if ( len == 0 ) val = 0; else - val = (int)(avg / (double)(len/size)); + val = (int)floor(avg / (double)(len/size)); return PyLong_FromLong(val); } @@ -416,6 +432,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; + unsigned int res; double sum_squares = 0.0; if ( !PyArg_ParseTuple(args, "s#i:rms", &cp, &len, &size) ) return 0; @@ -429,10 +446,10 @@ sum_squares += (double)val*(double)val; } if ( len == 0 ) - val = 0; + res = 0; else - val = (int)sqrt(sum_squares / (double)(len/size)); - return 
PyLong_FromLong(val); + res = (unsigned int)sqrt(sum_squares / (double)(len/size)); + return PyLong_FromUnsignedLong(res); } static double _sum2(short *a, short *b, Py_ssize_t len) @@ -622,52 +639,46 @@ Py_ssize_t len, i; int size, val = 0, prevval = 0, prevextremevalid = 0, prevextreme = 0; - double avg = 0.0; - int diff, prevdiff, extremediff, nextreme = 0; + double sum = 0.0; + unsigned int avg; + int diff, prevdiff, nextreme = 0; if ( !PyArg_ParseTuple(args, "s#i:avgpp", &cp, &len, &size) ) return 0; if (!audioop_check_parameters(len, size)) return NULL; - /* Compute first delta value ahead. Also automatically makes us - ** skip the first extreme value - */ + if (len <= size) + return PyLong_FromLong(0); if ( size == 1 ) prevval = (int)*CHARP(cp, 0); else if ( size == 2 ) prevval = (int)*SHORTP(cp, 0); else if ( size == 4 ) prevval = (int)*LONGP(cp, 0); - if ( size == 1 ) val = (int)*CHARP(cp, size); - else if ( size == 2 ) val = (int)*SHORTP(cp, size); - else if ( size == 4 ) val = (int)*LONGP(cp, size); - prevdiff = val - prevval; - + prevdiff = 17; /* Anything != 0, 1 */ for ( i=size; i max ) - max = extremediff; + if (val != prevval) { + diff = val < prevval; + if (prevdiff == !diff) { + /* Derivative changed sign. Compute difference to + ** last extreme value and remember. 
+ */ + if (prevextremevalid) { + if (prevval < prevextreme) + extremediff = (unsigned int)prevextreme - + (unsigned int)prevval; + else + extremediff = (unsigned int)prevval - + (unsigned int)prevextreme; + if ( extremediff > max ) + max = extremediff; + } + prevextremevalid = 1; + prevextreme = prevval; } - prevextremevalid = 1; - prevextreme = prevval; + prevval = val; + prevdiff = diff; } - prevval = val; - if ( diff != 0 ) - prevdiff = diff; } - return PyLong_FromLong(max); + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -753,7 +763,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val = 0; - double factor, fval, maxval; + double factor, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#id:mul", &cp, &len, &size, &factor ) ) @@ -761,13 +771,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len); if ( rv == 0 ) @@ -780,9 +785,7 @@ else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size == 4 ) val = (int)*LONGP(cp, i); fval = (double)val*factor; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val = (int)fval; + val = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i) = (signed char)val; else if ( size == 2 ) *SHORTP(ncp, i) = (short)val; else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)val; @@ -797,7 +800,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val1 = 0, val2 = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s*idd:tomono", @@ -815,14 +818,8 @@ return NULL; } - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 
) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyBuffer_Release(&pcp); - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len/2); if ( rv == 0 ) { @@ -840,9 +837,7 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp, i+2); else if ( size == 4 ) val2 = (int)*LONGP(cp, i+4); fval = (double)val1*fac1 + (double)val2*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i/2) = (signed char)val1; else if ( size == 2 ) *SHORTP(ncp, i/2) = (short)val1; else if ( size == 4 ) *LONGP(ncp, i/2)= (Py_Int32)val1; @@ -857,7 +852,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val1, val2, val = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#idd:tostereo", @@ -866,13 +861,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; if (len > PY_SSIZE_T_MAX/2) { PyErr_SetString(PyExc_MemoryError, @@ -892,14 +882,10 @@ else if ( size == 4 ) val = (int)*LONGP(cp, i); fval = (double)val*fac1; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); fval = (double)val*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val2 = (int)fval; + val2 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i*2) = (signed char)val1; else if ( size == 2 ) 
*SHORTP(ncp, i*2) = (short)val1; @@ -917,7 +903,7 @@ { signed char *cp1, *cp2, *ncp; Py_ssize_t len1, len2, i; - int size, val1 = 0, val2 = 0, maxval, newval; + int size, val1 = 0, val2 = 0, minval, maxval, newval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#s#i:add", @@ -930,13 +916,8 @@ return 0; } - if ( size == 1 ) maxval = 0x7f; - else if ( size == 2 ) maxval = 0x7fff; - else if ( size == 4 ) maxval = 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = maxvals[size]; + minval = minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len1); if ( rv == 0 ) @@ -952,12 +933,19 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp2, i); else if ( size == 4 ) val2 = (int)*LONGP(cp2, i); - newval = val1 + val2; - /* truncate in case of overflow */ - if (newval > maxval) newval = maxval; - else if (newval < -maxval) newval = -maxval; - else if (size == 4 && (newval^val1) < 0 && (newval^val2) < 0) - newval = val1 > 0 ? maxval : - maxval; + if (size < 4) { + newval = val1 + val2; + /* truncate in case of overflow */ + if (newval > maxval) + newval = maxval; + else if (newval < minval) + newval = minval; + } + else { + double fval = (double)val1 + (double)val2; + /* truncate in case of overflow */ + newval = (int)floor(fbound(fval, minval, maxval)); + } if ( size == 1 ) *CHARP(ncp, i) = (signed char)newval; else if ( size == 2 ) *SHORTP(ncp, i) = (short)newval; @@ -971,9 +959,9 @@ { signed char *cp, *ncp; Py_ssize_t len, i; - int size, val = 0; + int size, bias; + unsigned int val = 0, mask; PyObject *rv; - int bias; if ( !PyArg_ParseTuple(args, "s#ii:bias", &cp, &len, &size , &bias) ) @@ -987,15 +975,20 @@ return 0; ncp = (signed char *)PyBytes_AsString(rv); + mask = masks[size]; for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = (int)*CHARP(cp, i); - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = (int)*LONGP(cp, i); + if ( size == 1 ) val = (unsigned int)(unsigned char)*CHARP(cp, 
i); + else if ( size == 2 ) val = (unsigned int)(unsigned short)*SHORTP(cp, i); + else if ( size == 4 ) val = (unsigned int)(Py_UInt32)*LONGP(cp, i); - if ( size == 1 ) *CHARP(ncp, i) = (signed char)(val+bias); - else if ( size == 2 ) *SHORTP(ncp, i) = (short)(val+bias); - else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(val+bias); + val += (unsigned int)bias; + /* wrap around in case of overflow */ + val &= mask; + + if ( size == 1 ) *CHARP(ncp, i) = (signed char)(unsigned char)val; + else if ( size == 2 ) *SHORTP(ncp, i) = (short)(unsigned short)val; + else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(Py_UInt32)val; } return rv; } @@ -1022,15 +1015,15 @@ ncp = (unsigned char *)PyBytes_AsString(rv); for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); j = len - i - size; - if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val); - else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1064,13 +1057,13 @@ ncp = (unsigned char *)PyBytes_AsString(rv); for ( i=0, j=0; i < len; i += size, j += size2 ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size2 == 2 ) 
*SHORTP(ncp, j) = (short)(val); - else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( size2 == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1134,6 +1127,10 @@ d = gcd(inrate, outrate); inrate /= d; outrate /= d; + /* divide weightA and weightB by their greatest common divisor */ + d = gcd(weightA, weightB); + weightA /= d; + weightB /= d; if ((size_t)nchannels > PY_SIZE_MAX/sizeof(int)) { PyErr_SetString(PyExc_MemoryError, @@ -1173,7 +1170,9 @@ } /* str <- Space for the output buffer. */ - { + if (len == 0) + str = PyBytes_FromStringAndSize(NULL, 0); + else { /* There are len input frames, so we need (mathematically) ceiling(len*outrate/inrate) output frames, and each frame requires bytes_per_frame bytes. Computing this @@ -1188,12 +1187,11 @@ else str = PyBytes_FromStringAndSize(NULL, q * outrate * bytes_per_frame); - - if (str == NULL) { - PyErr_SetString(PyExc_MemoryError, - "not enough memory for output buffer"); - goto exit; - } + } + if (str == NULL) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + goto exit; } ncp = PyBytes_AsString(str); @@ -1227,32 +1225,32 @@ for (chan = 0; chan < nchannels; chan++) { prev_i[chan] = cur_i[chan]; if (size == 1) - cur_i[chan] = ((int)*CHARP(cp, 0)) << 8; + cur_i[chan] = ((int)*CHARP(cp, 0)) << 24; else if (size == 2) - cur_i[chan] = (int)*SHORTP(cp, 0); + cur_i[chan] = ((int)*SHORTP(cp, 0)) << 16; else if (size == 4) - cur_i[chan] = ((int)*LONGP(cp, 0)) >> 16; + cur_i[chan] = (int)*LONGP(cp, 0); cp += size; /* implements a simple digital filter */ - cur_i[chan] = - (weightA * cur_i[chan] + - weightB * prev_i[chan]) / - (weightA + weightB); + cur_i[chan] = (int)( + ((double)weightA * (double)cur_i[chan] + + (double)weightB * (double)prev_i[chan]) / + ((double)weightA + (double)weightB)); } len--; d += outrate; } while (d >= 0) { for 
(chan = 0; chan < nchannels; chan++) { - cur_o = (prev_i[chan] * d + - cur_i[chan] * (outrate - d)) / - outrate; + cur_o = (int)(((double)prev_i[chan] * (double)d + + (double)cur_i[chan] * (double)(outrate - d)) / + (double)outrate); if (size == 1) - *CHARP(ncp, 0) = (signed char)(cur_o >> 8); + *CHARP(ncp, 0) = (signed char)(cur_o >> 24); else if (size == 2) - *SHORTP(ncp, 0) = (short)(cur_o); + *SHORTP(ncp, 0) = (short)(cur_o >> 16); else if (size == 4) - *LONGP(ncp, 0) = (Py_Int32)(cur_o<<16); + *LONGP(ncp, 0) = (Py_Int32)(cur_o); ncp += size; } d -= inrate; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:17:51 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:17:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316686=3A_Fixed_a_lot_of_bugs_in_audioop_module?= =?utf-8?q?=2E?= Message-ID: <3Z372l5X0RzScp@mail.python.org> http://hg.python.org/cpython/rev/48747ef5f65b changeset: 82079:48747ef5f65b parent: 82075:5e874b2a0469 parent: 82078:63b164708e60 user: Serhiy Storchaka date: Sat Feb 09 11:13:46 2013 +0200 summary: Issue #16686: Fixed a lot of bugs in the audioop module. * avgpp() and maxpp() no longer crash on empty and 1-sample input fragments. They now work when peak-peak values are greater than INT_MAX. * ratecv() no longer crashes on an empty input fragment. * Fixed an integer overflow in ratecv(). * Fixed an integer overflow in add() and bias() for 32-bit samples. * reverse(), lin2lin() and ratecv() no longer lose precision for 32-bit samples. * max() and rms() no longer return a negative result for 32-bit sample -0x80000000. * minmax() now returns the correct max value for 32-bit sample -0x80000000. * avg(), mul(), tomono() and tostereo() now round negative results down and can return 32-bit sample -0x80000000. * add() now can return 32-bit sample -0x80000000. 
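The overflow rules this changeset settles on can be sketched in pure Python: add() and mul() saturate (truncate to the widest representable sample), while bias() wraps around modulo the sample width. This is an illustrative model of the behavior the new tests check, not the C implementation; the helper names `saturate` and `wrap` are mine.

```python
def saturate(val, width):
    # add()/mul() semantics: clamp to the signed range of a width-byte sample
    maxval = (1 << (8 * width - 1)) - 1
    minval = -1 << (8 * width - 1)
    return max(minval, min(maxval, val))

def wrap(val, width):
    # bias() semantics: wrap around modulo 2**(8*width), reinterpreted as signed
    mask = (1 << (8 * width)) - 1
    val &= mask
    if val > (mask >> 1):
        val -= mask + 1
    return val
```

For example, adding 1 to the largest 16-bit sample saturates to 0x7fff under the add() rule but wraps to -0x8000 under the bias() rule, which is exactly the distinction the updated Doc/library/audioop.rst text draws.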
files: Doc/library/audioop.rst | 6 +- Lib/test/test_audioop.py | 399 ++++++++++++++++++-------- Misc/NEWS | 6 + Modules/audioop.c | 310 ++++++++++---------- 4 files changed, 435 insertions(+), 286 deletions(-) diff --git a/Doc/library/audioop.rst b/Doc/library/audioop.rst --- a/Doc/library/audioop.rst +++ b/Doc/library/audioop.rst @@ -36,7 +36,7 @@ Return a fragment which is the addition of the two samples passed as parameters. *width* is the sample width in bytes, either ``1``, ``2`` or ``4``. Both - fragments should have the same length. + fragments should have the same length. Samples are truncated in case of overflow. .. function:: adpcm2lin(adpcmfragment, width, state) @@ -67,7 +67,7 @@ .. function:: bias(fragment, width, bias) Return a fragment that is the original fragment with a bias added to each - sample. + sample. Samples wrap around in case of overflow. .. function:: cross(fragment, width) @@ -175,7 +175,7 @@ .. function:: mul(fragment, width, factor) Return a fragment that has all samples in the original fragment multiplied by - the floating-point value *factor*. Overflow is silently ignored. + the floating-point value *factor*. Samples are truncated in case of overflow. .. 
function:: ratecv(fragment, width, nchannels, inrate, outrate, state[, weightA[, weightB]]) diff --git a/Lib/test/test_audioop.py b/Lib/test/test_audioop.py --- a/Lib/test/test_audioop.py +++ b/Lib/test/test_audioop.py @@ -1,25 +1,21 @@ import audioop +import sys import unittest from test.support import run_unittest -endian = 'big' if audioop.getsample(b'\0\1', 2, 0) == 1 else 'little' +def pack(width, data): + return b''.join(v.to_bytes(width, sys.byteorder, signed=True) for v in data) -def gendata1(): - return b'\0\1\2' +packs = {w: (lambda *data, width=w: pack(width, data)) for w in (1, 2, 4)} +maxvalues = {w: (1 << (8 * w - 1)) - 1 for w in (1, 2, 4)} +minvalues = {w: -1 << (8 * w - 1) for w in (1, 2, 4)} -def gendata2(): - if endian == 'big': - return b'\0\0\0\1\0\2' - else: - return b'\0\0\1\0\2\0' - -def gendata4(): - if endian == 'big': - return b'\0\0\0\0\0\0\0\1\0\0\0\2' - else: - return b'\0\0\0\0\1\0\0\0\2\0\0\0' - -data = [gendata1(), gendata2(), gendata4()] +datas = { + 1: b'\x00\x12\x45\xbb\x7f\x80\xff', + 2: packs[2](0, 0x1234, 0x4567, -0x4567, 0x7fff, -0x8000, -1), + 4: packs[4](0, 0x12345678, 0x456789ab, -0x456789ab, + 0x7fffffff, -0x80000000, -1), +} INVALID_DATA = [ (b'abc', 0), @@ -31,171 +27,320 @@ class TestAudioop(unittest.TestCase): def test_max(self): - self.assertEqual(audioop.max(data[0], 1), 2) - self.assertEqual(audioop.max(data[1], 2), 2) - self.assertEqual(audioop.max(data[2], 4), 2) + for w in 1, 2, 4: + self.assertEqual(audioop.max(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.max(p(5), w), 5) + self.assertEqual(audioop.max(p(5, -8, -1), w), 8) + self.assertEqual(audioop.max(p(maxvalues[w]), w), maxvalues[w]) + self.assertEqual(audioop.max(p(minvalues[w]), w), -minvalues[w]) + self.assertEqual(audioop.max(datas[w], w), -minvalues[w]) def test_minmax(self): - self.assertEqual(audioop.minmax(data[0], 1), (0, 2)) - self.assertEqual(audioop.minmax(data[1], 2), (0, 2)) - self.assertEqual(audioop.minmax(data[2], 4), (0, 2)) + for 
w in 1, 2, 4: + self.assertEqual(audioop.minmax(b'', w), + (0x7fffffff, -0x80000000)) + p = packs[w] + self.assertEqual(audioop.minmax(p(5), w), (5, 5)) + self.assertEqual(audioop.minmax(p(5, -8, -1), w), (-8, 5)) + self.assertEqual(audioop.minmax(p(maxvalues[w]), w), + (maxvalues[w], maxvalues[w])) + self.assertEqual(audioop.minmax(p(minvalues[w]), w), + (minvalues[w], minvalues[w])) + self.assertEqual(audioop.minmax(datas[w], w), + (minvalues[w], maxvalues[w])) def test_maxpp(self): - self.assertEqual(audioop.maxpp(data[0], 1), 0) - self.assertEqual(audioop.maxpp(data[1], 2), 0) - self.assertEqual(audioop.maxpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.maxpp(b'', w), 0) + self.assertEqual(audioop.maxpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.maxpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) + self.assertEqual(audioop.maxpp(datas[w], w), + maxvalues[w] - minvalues[w]) def test_avg(self): - self.assertEqual(audioop.avg(data[0], 1), 1) - self.assertEqual(audioop.avg(data[1], 2), 1) - self.assertEqual(audioop.avg(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.avg(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.avg(p(5), w), 5) + self.assertEqual(audioop.avg(p(5, 8), w), 6) + self.assertEqual(audioop.avg(p(5, -8), w), -2) + self.assertEqual(audioop.avg(p(maxvalues[w], maxvalues[w]), w), + maxvalues[w]) + self.assertEqual(audioop.avg(p(minvalues[w], minvalues[w]), w), + minvalues[w]) + self.assertEqual(audioop.avg(packs[4](0x50000000, 0x70000000), 4), + 0x60000000) + self.assertEqual(audioop.avg(packs[4](-0x50000000, -0x70000000), 4), + -0x60000000) def test_avgpp(self): - self.assertEqual(audioop.avgpp(data[0], 1), 0) - self.assertEqual(audioop.avgpp(data[1], 2), 0) - self.assertEqual(audioop.avgpp(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.avgpp(b'', w), 0) + self.assertEqual(audioop.avgpp(packs[w](*range(100)), w), 0) + self.assertEqual(audioop.avgpp(packs[w](9, 10, 5, 5, 0, 1), w), 10) + 
self.assertEqual(audioop.avgpp(datas[1], 1), 196) + self.assertEqual(audioop.avgpp(datas[2], 2), 50534) + self.assertEqual(audioop.avgpp(datas[4], 4), 3311897002) def test_rms(self): - self.assertEqual(audioop.rms(data[0], 1), 1) - self.assertEqual(audioop.rms(data[1], 2), 1) - self.assertEqual(audioop.rms(data[2], 4), 1) + for w in 1, 2, 4: + self.assertEqual(audioop.rms(b'', w), 0) + p = packs[w] + self.assertEqual(audioop.rms(p(*range(100)), w), 57) + self.assertAlmostEqual(audioop.rms(p(maxvalues[w]) * 5, w), + maxvalues[w], delta=1) + self.assertAlmostEqual(audioop.rms(p(minvalues[w]) * 5, w), + -minvalues[w], delta=1) + self.assertEqual(audioop.rms(datas[1], 1), 77) + self.assertEqual(audioop.rms(datas[2], 2), 20001) + self.assertEqual(audioop.rms(datas[4], 4), 1310854152) def test_cross(self): - self.assertEqual(audioop.cross(data[0], 1), 0) - self.assertEqual(audioop.cross(data[1], 2), 0) - self.assertEqual(audioop.cross(data[2], 4), 0) + for w in 1, 2, 4: + self.assertEqual(audioop.cross(b'', w), -1) + p = packs[w] + self.assertEqual(audioop.cross(p(0, 1, 2), w), 0) + self.assertEqual(audioop.cross(p(1, 2, -3, -4), w), 1) + self.assertEqual(audioop.cross(p(-1, -2, 3, 4), w), 1) + self.assertEqual(audioop.cross(p(0, minvalues[w]), w), 1) + self.assertEqual(audioop.cross(p(minvalues[w], maxvalues[w]), w), 1) def test_add(self): - data2 = [] - for d in data: - str = bytearray(len(d)) - for i,b in enumerate(d): - str[i] = 2*b - data2.append(str) - self.assertEqual(audioop.add(data[0], data[0], 1), data2[0]) - self.assertEqual(audioop.add(data[1], data[1], 2), data2[1]) - self.assertEqual(audioop.add(data[2], data[2], 4), data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.add(b'', b'', w), b'') + self.assertEqual(audioop.add(datas[w], b'\0' * len(datas[w]), w), + datas[w]) + self.assertEqual(audioop.add(datas[1], datas[1], 1), + b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.add(datas[2], datas[2], 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 
0x7fff, -0x8000, -2)) + self.assertEqual(audioop.add(datas[4], datas[4], 4), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def test_bias(self): - # Note: this test assumes that avg() works - d1 = audioop.bias(data[0], 1, 100) - d2 = audioop.bias(data[1], 2, 100) - d4 = audioop.bias(data[2], 4, 100) - self.assertEqual(audioop.avg(d1, 1), 101) - self.assertEqual(audioop.avg(d2, 2), 101) - self.assertEqual(audioop.avg(d4, 4), 101) + for w in 1, 2, 4: + for bias in 0, 1, -1, 127, -128, 0x7fffffff, -0x80000000: + self.assertEqual(audioop.bias(b'', w, bias), b'') + self.assertEqual(audioop.bias(datas[1], 1, 1), + b'\x01\x13\x46\xbc\x80\x81\x00') + self.assertEqual(audioop.bias(datas[1], 1, -1), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, 0x7fffffff), + b'\xff\x11\x44\xba\x7e\x7f\xfe') + self.assertEqual(audioop.bias(datas[1], 1, -0x80000000), + datas[1]) + self.assertEqual(audioop.bias(datas[2], 2, 1), + packs[2](1, 0x1235, 0x4568, -0x4566, -0x8000, -0x7fff, 0)) + self.assertEqual(audioop.bias(datas[2], 2, -1), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, 0x7fffffff), + packs[2](-1, 0x1233, 0x4566, -0x4568, 0x7ffe, 0x7fff, -2)) + self.assertEqual(audioop.bias(datas[2], 2, -0x80000000), + datas[2]) + self.assertEqual(audioop.bias(datas[4], 4, 1), + packs[4](1, 0x12345679, 0x456789ac, -0x456789aa, + -0x80000000, -0x7fffffff, 0)) + self.assertEqual(audioop.bias(datas[4], 4, -1), + packs[4](-1, 0x12345677, 0x456789aa, -0x456789ac, + 0x7ffffffe, 0x7fffffff, -2)) + self.assertEqual(audioop.bias(datas[4], 4, 0x7fffffff), + packs[4](0x7fffffff, -0x6dcba989, -0x3a987656, 0x3a987654, + -2, -1, 0x7ffffffe)) + self.assertEqual(audioop.bias(datas[4], 4, -0x80000000), + packs[4](-0x80000000, -0x6dcba988, -0x3a987655, 0x3a987655, + -1, 0, 0x7fffffff)) def test_lin2lin(self): - # too simple: we test only the size - for d1 in data: - for d2 in data: - got 
= len(d1)//3 - wtd = len(d2)//3 - self.assertEqual(len(audioop.lin2lin(d1, got, wtd)), len(d2)) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2lin(datas[w], w, w), datas[w]) + + self.assertEqual(audioop.lin2lin(datas[1], 1, 2), + packs[2](0, 0x1200, 0x4500, -0x4500, 0x7f00, -0x8000, -0x100)) + self.assertEqual(audioop.lin2lin(datas[1], 1, 4), + packs[4](0, 0x12000000, 0x45000000, -0x45000000, + 0x7f000000, -0x80000000, -0x1000000)) + self.assertEqual(audioop.lin2lin(datas[2], 2, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[2], 2, 4), + packs[4](0, 0x12340000, 0x45670000, -0x45670000, + 0x7fff0000, -0x80000000, -0x10000)) + self.assertEqual(audioop.lin2lin(datas[4], 4, 1), + b'\x00\x12\x45\xba\x7f\x80\xff') + self.assertEqual(audioop.lin2lin(datas[4], 4, 2), + packs[2](0, 0x1234, 0x4567, -0x4568, 0x7fff, -0x8000, -1)) def test_adpcm2lin(self): + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 1, None), + (b'\x00\x00\x00\xff\x00\xff', (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 2, None), + (packs[2](0, 0xb, 0x29, -0x16, 0x72, -0xb3), (-179, 40))) + self.assertEqual(audioop.adpcm2lin(b'\x07\x7f\x7f', 4, None), + (packs[4](0, 0xb0000, 0x290000, -0x160000, 0x720000, + -0xb30000), (-179, 40))) + # Very cursory test - self.assertEqual(audioop.adpcm2lin(b'\0\0', 1, None), (b'\0' * 4, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 2, None), (b'\0' * 8, (0,0))) - self.assertEqual(audioop.adpcm2lin(b'\0\0', 4, None), (b'\0' * 16, (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.adpcm2lin(b'\0' * 5, w, None), + (b'\0' * w * 10, (0, 0))) def test_lin2adpcm(self): + self.assertEqual(audioop.lin2adpcm(datas[1], 1, None), + (b'\x07\x7f\x7f', (-221, 39))) + self.assertEqual(audioop.lin2adpcm(datas[2], 2, None), + (b'\x07\x7f\x7f', (31, 39))) + self.assertEqual(audioop.lin2adpcm(datas[4], 4, None), + (b'\x07\x7f\x7f', (31, 39))) + # Very cursory test - self.assertEqual(audioop.lin2adpcm(b'\0\0\0\0', 1, 
None), (b'\0\0', (0,0))) + for w in 1, 2, 4: + self.assertEqual(audioop.lin2adpcm(b'\0' * w * 10, w, None), + (b'\0' * 5, (0, 0))) def test_lin2alaw(self): - self.assertEqual(audioop.lin2alaw(data[0], 1), b'\xd5\xc5\xf5') - self.assertEqual(audioop.lin2alaw(data[1], 2), b'\xd5\xd5\xd5') - self.assertEqual(audioop.lin2alaw(data[2], 4), b'\xd5\xd5\xd5') + self.assertEqual(audioop.lin2alaw(datas[1], 1), + b'\xd5\x87\xa4\x24\xaa\x2a\x5a') + self.assertEqual(audioop.lin2alaw(datas[2], 2), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') + self.assertEqual(audioop.lin2alaw(datas[4], 4), + b'\xd5\x87\xa4\x24\xaa\x2a\x55') def test_alaw2lin(self): - # Cursory - d = audioop.lin2alaw(data[0], 1) - self.assertEqual(audioop.alaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x00\x08\x01\x08\x02\x10') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x08\x00\x00\x01\x08\x00\x00\x02\x10\x00\x00') - else: - self.assertEqual(audioop.alaw2lin(d, 2), - b'\x08\x00\x08\x01\x10\x02') - self.assertEqual(audioop.alaw2lin(d, 4), - b'\x00\x00\x08\x00\x00\x00\x08\x01\x00\x00\x10\x02') + encoded = b'\x00\x03\x24\x2a\x51\x54\x55\x58\x6b\x71\x7f'\ + b'\x80\x83\xa4\xaa\xd1\xd4\xd5\xd8\xeb\xf1\xff' + src = [-688, -720, -2240, -4032, -9, -3, -1, -27, -244, -82, -106, + 688, 720, 2240, 4032, 9, 3, 1, 27, 244, 82, 106] + for w in 1, 2, 4: + self.assertEqual(audioop.alaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 13 for x in src))) + + encoded = bytes(range(256)) + for w in 2, 4: + decoded = audioop.alaw2lin(encoded, w) + self.assertEqual(audioop.lin2alaw(decoded, w), encoded) def test_lin2ulaw(self): - self.assertEqual(audioop.lin2ulaw(data[0], 1), b'\xff\xe7\xdb') - self.assertEqual(audioop.lin2ulaw(data[1], 2), b'\xff\xff\xff') - self.assertEqual(audioop.lin2ulaw(data[2], 4), b'\xff\xff\xff') + self.assertEqual(audioop.lin2ulaw(datas[1], 1), + b'\xff\xad\x8e\x0e\x80\x00\x67') + self.assertEqual(audioop.lin2ulaw(datas[2], 2), + b'\xff\xad\x8e\x0e\x80\x00\x7e') + 
self.assertEqual(audioop.lin2ulaw(datas[4], 4), + b'\xff\xad\x8e\x0e\x80\x00\x7e') def test_ulaw2lin(self): - # Cursory - d = audioop.lin2ulaw(data[0], 1) - self.assertEqual(audioop.ulaw2lin(d, 1), data[0]) - if endian == 'big': - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x01\x04\x02\x0c') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x01\x04\x00\x00\x02\x0c\x00\x00') - else: - self.assertEqual(audioop.ulaw2lin(d, 2), - b'\x00\x00\x04\x01\x0c\x02') - self.assertEqual(audioop.ulaw2lin(d, 4), - b'\x00\x00\x00\x00\x00\x00\x04\x01\x00\x00\x0c\x02') + encoded = b'\x00\x0e\x28\x3f\x57\x6a\x76\x7c\x7e\x7f'\ + b'\x80\x8e\xa8\xbf\xd7\xea\xf6\xfc\xfe\xff' + src = [-8031, -4447, -1471, -495, -163, -53, -18, -6, -2, 0, + 8031, 4447, 1471, 495, 163, 53, 18, 6, 2, 0] + for w in 1, 2, 4: + self.assertEqual(audioop.ulaw2lin(encoded, w), + packs[w](*(x << (w * 8) >> 14 for x in src))) + + # Current u-law implementation has two codes for 0: 0x7f and 0xff. + encoded = bytes(range(127)) + bytes(range(128, 256)) + for w in 2, 4: + decoded = audioop.ulaw2lin(encoded, w) + self.assertEqual(audioop.lin2ulaw(decoded, w), encoded) def test_mul(self): - data2 = [] - for d in data: - str = bytearray(len(d)) - for i,b in enumerate(d): - str[i] = 2*b - data2.append(str) - self.assertEqual(audioop.mul(data[0], 1, 2), data2[0]) - self.assertEqual(audioop.mul(data[1],2, 2), data2[1]) - self.assertEqual(audioop.mul(data[2], 4, 2), data2[2]) + for w in 1, 2, 4: + self.assertEqual(audioop.mul(b'', w, 2), b'') + self.assertEqual(audioop.mul(datas[w], w, 0), + b'\0' * len(datas[w])) + self.assertEqual(audioop.mul(datas[w], w, 1), + datas[w]) + self.assertEqual(audioop.mul(datas[1], 1, 2), + b'\x00\x24\x7f\x80\x7f\x80\xfe') + self.assertEqual(audioop.mul(datas[2], 2, 2), + packs[2](0, 0x2468, 0x7fff, -0x8000, 0x7fff, -0x8000, -2)) + self.assertEqual(audioop.mul(datas[4], 4, 2), + packs[4](0, 0x2468acf0, 0x7fffffff, -0x80000000, + 0x7fffffff, -0x80000000, -2)) def 
test_ratecv(self): + for w in 1, 2, 4: + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 8000, None), + (b'', (-1, ((0, 0),)))) + self.assertEqual(audioop.ratecv(b'', w, 5, 8000, 8000, None), + (b'', (-1, ((0, 0),) * 5))) + self.assertEqual(audioop.ratecv(b'', w, 1, 8000, 16000, None), + (b'', (-2, ((0, 0),)))) + self.assertEqual(audioop.ratecv(datas[w], w, 1, 8000, 8000, None)[0], + datas[w]) state = None - d1, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) - d2, state = audioop.ratecv(data[0], 1, 1, 8000, 16000, state) + d1, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) + d2, state = audioop.ratecv(b'\x00\x01\x02', 1, 1, 8000, 16000, state) self.assertEqual(d1 + d2, b'\000\000\001\001\002\001\000\000\001\001\002') + for w in 1, 2, 4: + d0, state0 = audioop.ratecv(datas[w], w, 1, 8000, 16000, None) + d, state = b'', None + for i in range(0, len(datas[w]), w): + d1, state = audioop.ratecv(datas[w][i:i + w], w, 1, + 8000, 16000, state) + d += d1 + self.assertEqual(d, d0) + self.assertEqual(state, state0) + def test_reverse(self): - self.assertEqual(audioop.reverse(data[0], 1), b'\2\1\0') + for w in 1, 2, 4: + self.assertEqual(audioop.reverse(b'', w), b'') + self.assertEqual(audioop.reverse(packs[w](0, 1, 2), w), + packs[w](2, 1, 0)) def test_tomono(self): - data2 = bytearray() - for d in data[0]: - data2.append(d) - data2.append(d) - self.assertEqual(audioop.tomono(data2, 1, 0.5, 0.5), data[0]) + for w in 1, 2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(data2, w, 1, 0), data1) + self.assertEqual(audioop.tomono(data2, w, 0, 1), b'\0' * len(data1)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tomono(data2, w, 0.5, 0.5), data1) def test_tostereo(self): - data2 = bytearray() - for d in data[0]: - data2.append(d) - data2.append(d) - self.assertEqual(audioop.tostereo(data[0], 1, 1, 1), data2) + for w in 1, 
2, 4: + data1 = datas[w] + data2 = bytearray(2 * len(data1)) + for k in range(w): + data2[k::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 0), data2) + self.assertEqual(audioop.tostereo(data1, w, 0, 0), b'\0' * len(data2)) + for k in range(w): + data2[k+w::2*w] = data1[k::w] + self.assertEqual(audioop.tostereo(data1, w, 1, 1), data2) def test_findfactor(self): - self.assertEqual(audioop.findfactor(data[1], data[1]), 1.0) + self.assertEqual(audioop.findfactor(datas[2], datas[2]), 1.0) + self.assertEqual(audioop.findfactor(b'\0' * len(datas[2]), datas[2]), + 0.0) def test_findfit(self): - self.assertEqual(audioop.findfit(data[1], data[1]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], datas[2]), (0, 1.0)) + self.assertEqual(audioop.findfit(datas[2], packs[2](1, 2, 0)), + (1, 8038.8)) + self.assertEqual(audioop.findfit(datas[2][:-2] * 5 + datas[2], datas[2]), + (30, 1.0)) def test_findmax(self): - self.assertEqual(audioop.findmax(data[1], 1), 2) + self.assertEqual(audioop.findmax(datas[2], 1), 5) def test_getsample(self): - for i in range(3): - self.assertEqual(audioop.getsample(data[0], 1, i), i) - self.assertEqual(audioop.getsample(data[1], 2, i), i) - self.assertEqual(audioop.getsample(data[2], 4, i), i) + for w in 1, 2, 4: + data = packs[w](0, 1, -1, maxvalues[w], minvalues[w]) + self.assertEqual(audioop.getsample(data, w, 0), 0) + self.assertEqual(audioop.getsample(data, w, 1), 1) + self.assertEqual(audioop.getsample(data, w, 2), -1) + self.assertEqual(audioop.getsample(data, w, 3), maxvalues[w]) + self.assertEqual(audioop.getsample(data, w, 4), minvalues[w]) def test_negativelen(self): # from issue 3306, previously it segfaulted self.assertRaises(audioop.error, - audioop.findmax, ''.join(chr(x) for x in range(256)), -2392392) + audioop.findmax, bytes(range(256)), -2392392) def test_issue7673(self): state = None @@ -222,9 +367,9 @@ self.assertRaises(audioop.error, audioop.lin2adpcm, data, size, state) def test_wrongsize(self): - 
data = b'abc' + data = b'abcdefgh' state = None - for size in (-1, 3, 5): + for size in (-1, 0, 3, 5, 1024): self.assertRaises(audioop.error, audioop.ulaw2lin, data, size) self.assertRaises(audioop.error, audioop.alaw2lin, data, size) self.assertRaises(audioop.error, audioop.adpcm2lin, data, size, state) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -241,6 +241,12 @@ Library ------- +- Issue #16686: Fixed a lot of bugs in audioop module. Fixed crashes in + avgpp(), maxpp() and ratecv(). Fixed an integer overflow in add(), bias(), + and ratecv(). reverse(), lin2lin() and ratecv() no more lose precision for + 32-bit samples. max() and rms() no more returns a negative result and + various other functions now work correctly with 32-bit sample -0x80000000. + - Issue #17073: Fix some integer overflows in sqlite3 module. - Issue #17114: IDLE now uses non-strict config parser. diff --git a/Modules/audioop.c b/Modules/audioop.c --- a/Modules/audioop.c +++ b/Modules/audioop.c @@ -26,6 +26,21 @@ #endif #endif +static const int maxvals[] = {0, 0x7F, 0x7FFF, 0x7FFFFF, 0x7FFFFFFF}; +static const int minvals[] = {0, -0x80, -0x8000, -0x800000, -0x80000000}; +static const unsigned int masks[] = {0, 0xFF, 0xFFFF, 0xFFFFFF, 0xFFFFFFFF}; + +static int +fbound(double val, double minval, double maxval) +{ + if (val > maxval) + val = maxval; + else if (val < minval + 1) + val = minval; + return val; +} + + /* Code shamelessly stolen from sox, 12.17.7, g711.c ** (c) Craig Reese, Joe Campbell and Jeff Poskanzer 1989 */ @@ -347,7 +362,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; - int max = 0; + unsigned int absval, max = 0; if ( !PyArg_ParseTuple(args, "s#i:max", &cp, &len, &size) ) return 0; @@ -357,10 +372,11 @@ if ( size == 1 ) val = (int)*CHARP(cp, i); else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( val < 0 ) val = (-val); - if ( val > max ) max = val; + if (val < 0) absval = (-val); + else 
absval = val; + if (absval > max) max = absval; } - return PyLong_FromLong(max); + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -369,7 +385,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; - int min = 0x7fffffff, max = -0x7fffffff; + int min = 0x7fffffff, max = -0x80000000; if (!PyArg_ParseTuple(args, "s#i:minmax", &cp, &len, &size)) return NULL; @@ -406,7 +422,7 @@ if ( len == 0 ) val = 0; else - val = (int)(avg / (double)(len/size)); + val = (int)floor(avg / (double)(len/size)); return PyLong_FromLong(val); } @@ -416,6 +432,7 @@ signed char *cp; Py_ssize_t len, i; int size, val = 0; + unsigned int res; double sum_squares = 0.0; if ( !PyArg_ParseTuple(args, "s#i:rms", &cp, &len, &size) ) @@ -429,10 +446,10 @@ sum_squares += (double)val*(double)val; } if ( len == 0 ) - val = 0; + res = 0; else - val = (int)sqrt(sum_squares / (double)(len/size)); - return PyLong_FromLong(val); + res = (unsigned int)sqrt(sum_squares / (double)(len/size)); + return PyLong_FromUnsignedLong(res); } static double _sum2(short *a, short *b, Py_ssize_t len) @@ -622,52 +639,46 @@ Py_ssize_t len, i; int size, val = 0, prevval = 0, prevextremevalid = 0, prevextreme = 0; - double avg = 0.0; - int diff, prevdiff, extremediff, nextreme = 0; + double sum = 0.0; + unsigned int avg; + int diff, prevdiff, nextreme = 0; if ( !PyArg_ParseTuple(args, "s#i:avgpp", &cp, &len, &size) ) return 0; if (!audioop_check_parameters(len, size)) return NULL; - /* Compute first delta value ahead. 
Also automatically makes us - ** skip the first extreme value - */ + if (len <= size) + return PyLong_FromLong(0); if ( size == 1 ) prevval = (int)*CHARP(cp, 0); else if ( size == 2 ) prevval = (int)*SHORTP(cp, 0); else if ( size == 4 ) prevval = (int)*LONGP(cp, 0); - if ( size == 1 ) val = (int)*CHARP(cp, size); - else if ( size == 2 ) val = (int)*SHORTP(cp, size); - else if ( size == 4 ) val = (int)*LONGP(cp, size); - prevdiff = val - prevval; - + prevdiff = 17; /* Anything != 0, 1 */ for ( i=size; i max ) - max = extremediff; + if (val != prevval) { + diff = val < prevval; + if (prevdiff == !diff) { + /* Derivative changed sign. Compute difference to + ** last extreme value and remember. + */ + if (prevextremevalid) { + if (prevval < prevextreme) + extremediff = (unsigned int)prevextreme - + (unsigned int)prevval; + else + extremediff = (unsigned int)prevval - + (unsigned int)prevextreme; + if ( extremediff > max ) + max = extremediff; + } + prevextremevalid = 1; + prevextreme = prevval; } - prevextremevalid = 1; - prevextreme = prevval; + prevval = val; + prevdiff = diff; } - prevval = val; - if ( diff != 0 ) - prevdiff = diff; } - return PyLong_FromLong(max); + return PyLong_FromUnsignedLong(max); } static PyObject * @@ -753,7 +763,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val = 0; - double factor, fval, maxval; + double factor, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#id:mul", &cp, &len, &size, &factor ) ) @@ -761,13 +771,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len); if ( rv == 0 ) @@ -780,9 +785,7 @@ else if ( size == 2 ) val = (int)*SHORTP(cp, i); else if ( size 
== 4 ) val = (int)*LONGP(cp, i); fval = (double)val*factor; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val = (int)fval; + val = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i) = (signed char)val; else if ( size == 2 ) *SHORTP(ncp, i) = (short)val; else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)val; @@ -797,7 +800,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val1 = 0, val2 = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s*idd:tomono", @@ -815,14 +818,8 @@ return NULL; } - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - PyBuffer_Release(&pcp); - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len/2); if ( rv == 0 ) { @@ -840,9 +837,7 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp, i+2); else if ( size == 4 ) val2 = (int)*LONGP(cp, i+4); fval = (double)val1*fac1 + (double)val2*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i/2) = (signed char)val1; else if ( size == 2 ) *SHORTP(ncp, i/2) = (short)val1; else if ( size == 4 ) *LONGP(ncp, i/2)= (Py_Int32)val1; @@ -857,7 +852,7 @@ signed char *cp, *ncp; Py_ssize_t len, i; int size, val1, val2, val = 0; - double fac1, fac2, fval, maxval; + double fac1, fac2, fval, maxval, minval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#idd:tostereo", @@ -866,13 +861,8 @@ if (!audioop_check_parameters(len, size)) return NULL; - if ( size == 1 ) maxval = (double) 0x7f; - else if ( size == 2 ) maxval = (double) 0x7fff; - else if ( size == 4 ) maxval = (double) 0x7fffffff; - else { - 
PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = (double) maxvals[size]; + minval = (double) minvals[size]; if (len > PY_SSIZE_T_MAX/2) { PyErr_SetString(PyExc_MemoryError, @@ -892,14 +882,10 @@ else if ( size == 4 ) val = (int)*LONGP(cp, i); fval = (double)val*fac1; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val1 = (int)fval; + val1 = (int)floor(fbound(fval, minval, maxval)); fval = (double)val*fac2; - if ( fval > maxval ) fval = maxval; - else if ( fval < -maxval ) fval = -maxval; - val2 = (int)fval; + val2 = (int)floor(fbound(fval, minval, maxval)); if ( size == 1 ) *CHARP(ncp, i*2) = (signed char)val1; else if ( size == 2 ) *SHORTP(ncp, i*2) = (short)val1; @@ -917,7 +903,7 @@ { signed char *cp1, *cp2, *ncp; Py_ssize_t len1, len2, i; - int size, val1 = 0, val2 = 0, maxval, newval; + int size, val1 = 0, val2 = 0, minval, maxval, newval; PyObject *rv; if ( !PyArg_ParseTuple(args, "s#s#i:add", @@ -930,13 +916,8 @@ return 0; } - if ( size == 1 ) maxval = 0x7f; - else if ( size == 2 ) maxval = 0x7fff; - else if ( size == 4 ) maxval = 0x7fffffff; - else { - PyErr_SetString(AudioopError, "Size should be 1, 2 or 4"); - return 0; - } + maxval = maxvals[size]; + minval = minvals[size]; rv = PyBytes_FromStringAndSize(NULL, len1); if ( rv == 0 ) @@ -952,12 +933,19 @@ else if ( size == 2 ) val2 = (int)*SHORTP(cp2, i); else if ( size == 4 ) val2 = (int)*LONGP(cp2, i); - newval = val1 + val2; - /* truncate in case of overflow */ - if (newval > maxval) newval = maxval; - else if (newval < -maxval) newval = -maxval; - else if (size == 4 && (newval^val1) < 0 && (newval^val2) < 0) - newval = val1 > 0 ? 
maxval : - maxval; + if (size < 4) { + newval = val1 + val2; + /* truncate in case of overflow */ + if (newval > maxval) + newval = maxval; + else if (newval < minval) + newval = minval; + } + else { + double fval = (double)val1 + (double)val2; + /* truncate in case of overflow */ + newval = (int)floor(fbound(fval, minval, maxval)); + } if ( size == 1 ) *CHARP(ncp, i) = (signed char)newval; else if ( size == 2 ) *SHORTP(ncp, i) = (short)newval; @@ -971,9 +959,9 @@ { signed char *cp, *ncp; Py_ssize_t len, i; - int size, val = 0; + int size, bias; + unsigned int val = 0, mask; PyObject *rv; - int bias; if ( !PyArg_ParseTuple(args, "s#ii:bias", &cp, &len, &size , &bias) ) @@ -987,15 +975,20 @@ return 0; ncp = (signed char *)PyBytes_AsString(rv); + mask = masks[size]; for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = (int)*CHARP(cp, i); - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = (int)*LONGP(cp, i); + if ( size == 1 ) val = (unsigned int)(unsigned char)*CHARP(cp, i); + else if ( size == 2 ) val = (unsigned int)(unsigned short)*SHORTP(cp, i); + else if ( size == 4 ) val = (unsigned int)(Py_UInt32)*LONGP(cp, i); - if ( size == 1 ) *CHARP(ncp, i) = (signed char)(val+bias); - else if ( size == 2 ) *SHORTP(ncp, i) = (short)(val+bias); - else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(val+bias); + val += (unsigned int)bias; + /* wrap around in case of overflow */ + val &= mask; + + if ( size == 1 ) *CHARP(ncp, i) = (signed char)(unsigned char)val; + else if ( size == 2 ) *SHORTP(ncp, i) = (short)(unsigned short)val; + else if ( size == 4 ) *LONGP(ncp, i) = (Py_Int32)(Py_UInt32)val; } return rv; } @@ -1022,15 +1015,15 @@ ncp = (unsigned char *)PyBytes_AsString(rv); for ( i=0; i < len; i += size ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 
2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); j = len - i - size; - if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val); - else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( size == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1064,13 +1057,13 @@ ncp = (unsigned char *)PyBytes_AsString(rv); for ( i=0, j=0; i < len; i += size, j += size2 ) { - if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 8; - else if ( size == 2 ) val = (int)*SHORTP(cp, i); - else if ( size == 4 ) val = ((int)*LONGP(cp, i)) >> 16; + if ( size == 1 ) val = ((int)*CHARP(cp, i)) << 24; + else if ( size == 2 ) val = ((int)*SHORTP(cp, i)) << 16; + else if ( size == 4 ) val = (int)*LONGP(cp, i); - if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 8); - else if ( size2 == 2 ) *SHORTP(ncp, j) = (short)(val); - else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)(val<<16); + if ( size2 == 1 ) *CHARP(ncp, j) = (signed char)(val >> 24); + else if ( size2 == 2 ) *SHORTP(ncp, j) = (short)(val >> 16); + else if ( size2 == 4 ) *LONGP(ncp, j) = (Py_Int32)val; } return rv; } @@ -1134,6 +1127,10 @@ d = gcd(inrate, outrate); inrate /= d; outrate /= d; + /* divide weightA and weightB by their greatest common divisor */ + d = gcd(weightA, weightB); + weightA /= d; + weightB /= d; if ((size_t)nchannels > PY_SIZE_MAX/sizeof(int)) { PyErr_SetString(PyExc_MemoryError, @@ -1173,7 +1170,9 @@ } /* str <- Space for the output buffer. */ - { + if (len == 0) + str = PyBytes_FromStringAndSize(NULL, 0); + else { /* There are len input frames, so we need (mathematically) ceiling(len*outrate/inrate) output frames, and each frame requires bytes_per_frame bytes. Computing this
Computing this @@ -1188,12 +1187,11 @@ else str = PyBytes_FromStringAndSize(NULL, q * outrate * bytes_per_frame); - - if (str == NULL) { - PyErr_SetString(PyExc_MemoryError, - "not enough memory for output buffer"); - goto exit; - } + } + if (str == NULL) { + PyErr_SetString(PyExc_MemoryError, + "not enough memory for output buffer"); + goto exit; } ncp = PyBytes_AsString(str); @@ -1227,32 +1225,32 @@ for (chan = 0; chan < nchannels; chan++) { prev_i[chan] = cur_i[chan]; if (size == 1) - cur_i[chan] = ((int)*CHARP(cp, 0)) << 8; + cur_i[chan] = ((int)*CHARP(cp, 0)) << 24; else if (size == 2) - cur_i[chan] = (int)*SHORTP(cp, 0); + cur_i[chan] = ((int)*SHORTP(cp, 0)) << 16; else if (size == 4) - cur_i[chan] = ((int)*LONGP(cp, 0)) >> 16; + cur_i[chan] = (int)*LONGP(cp, 0); cp += size; /* implements a simple digital filter */ - cur_i[chan] = - (weightA * cur_i[chan] + - weightB * prev_i[chan]) / - (weightA + weightB); + cur_i[chan] = (int)( + ((double)weightA * (double)cur_i[chan] + + (double)weightB * (double)prev_i[chan]) / + ((double)weightA + (double)weightB)); } len--; d += outrate; } while (d >= 0) { for (chan = 0; chan < nchannels; chan++) { - cur_o = (prev_i[chan] * d + - cur_i[chan] * (outrate - d)) / - outrate; + cur_o = (int)(((double)prev_i[chan] * (double)d + + (double)cur_i[chan] * (double)(outrate - d)) / + (double)outrate); if (size == 1) - *CHARP(ncp, 0) = (signed char)(cur_o >> 8); + *CHARP(ncp, 0) = (signed char)(cur_o >> 24); else if (size == 2) - *SHORTP(ncp, 0) = (short)(cur_o); + *SHORTP(ncp, 0) = (short)(cur_o >> 16); else if (size == 4) - *LONGP(ncp, 0) = (Py_Int32)(cur_o<<16); + *LONGP(ncp, 0) = (Py_Int32)(cur_o); ncp += size; } d -= inrate; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:55:53 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:55:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MTQ3?= 
=?utf-8?q?=2E_Mention_BytesIO_in_SpooledTemporaryFile_documentation=2E?= Message-ID: <3Z37td5ZPzzSYs@mail.python.org> http://hg.python.org/cpython/rev/fb4ed16f35bd changeset: 82080:fb4ed16f35bd branch: 3.2 parent: 82077:104b17f8316b user: Serhiy Storchaka date: Sat Feb 09 11:46:42 2013 +0200 summary: Issue #17147. Mention BytesIO in SpooledTemporaryFile documentation. files: Doc/library/tempfile.rst | 8 +++++--- Lib/tempfile.py | 4 ++-- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -83,9 +83,11 @@ causes the file to roll over to an on-disk file regardless of its size. The returned object is a file-like object whose :attr:`_file` attribute - is either a :class:`StringIO` object or a true file object, depending on - whether :func:`rollover` has been called. This file-like object can be - used in a :keyword:`with` statement, just like a normal file. + is either a :class:`BytesIO` or :class:`StringIO` object (depending on + whether specifies binary or text *mode* was specified) or a true file + object, depending on whether :func:`rollover` has been called. This + file-like object can be used in a :keyword:`with` statement, just like + a normal file. .. function:: TemporaryDirectory(suffix='', prefix='tmp', dir=None) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -479,8 +479,8 @@ raise class SpooledTemporaryFile: - """Temporary file wrapper, specialized to switch from - StringIO to a real file when it exceeds a certain size or + """Temporary file wrapper, specialized to switch from BytesIO + or StringIO to a real file when it exceeds a certain size or when a fileno is needed. 
""" _rolled = False -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:55:55 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:55:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317147=2E_Mention_BytesIO_in_SpooledTemporaryFile_docu?= =?utf-8?q?mentation=2E?= Message-ID: <3Z37tg13KnzSZG@mail.python.org> http://hg.python.org/cpython/rev/8f772825029f changeset: 82081:8f772825029f branch: 3.3 parent: 82078:63b164708e60 parent: 82080:fb4ed16f35bd user: Serhiy Storchaka date: Sat Feb 09 11:47:20 2013 +0200 summary: Issue #17147. Mention BytesIO in SpooledTemporaryFile documentation. files: Doc/library/tempfile.rst | 8 +++++--- Lib/tempfile.py | 4 ++-- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -82,9 +82,11 @@ causes the file to roll over to an on-disk file regardless of its size. The returned object is a file-like object whose :attr:`_file` attribute - is either a :class:`StringIO` object or a true file object, depending on - whether :func:`rollover` has been called. This file-like object can be - used in a :keyword:`with` statement, just like a normal file. + is either a :class:`BytesIO` or :class:`StringIO` object (depending on + whether specifies binary or text *mode* was specified) or a true file + object, depending on whether :func:`rollover` has been called. This + file-like object can be used in a :keyword:`with` statement, just like + a normal file. .. versionchanged:: 3.3 the truncate method now accepts a ``size`` argument. 
diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -470,8 +470,8 @@ raise class SpooledTemporaryFile: - """Temporary file wrapper, specialized to switch from - StringIO to a real file when it exceeds a certain size or + """Temporary file wrapper, specialized to switch from BytesIO + or StringIO to a real file when it exceeds a certain size or when a fileno is needed. """ _rolled = False -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:55:56 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:55:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Minor_fix_of_p?= =?utf-8?q?revious_commit=2E?= Message-ID: <3Z37th3ZdjzSff@mail.python.org> http://hg.python.org/cpython/rev/dde4e035b4aa changeset: 82082:dde4e035b4aa branch: 3.2 parent: 82080:fb4ed16f35bd user: Serhiy Storchaka date: Sat Feb 09 11:51:21 2013 +0200 summary: Minor fix of previous commit. files: Doc/library/tempfile.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -84,7 +84,7 @@ The returned object is a file-like object whose :attr:`_file` attribute is either a :class:`BytesIO` or :class:`StringIO` object (depending on - whether specifies binary or text *mode* was specified) or a true file + whether binary or text *mode* was specified) or a true file object, depending on whether :func:`rollover` has been called. This file-like object can be used in a :keyword:`with` statement, just like a normal file. 
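The documentation change above can be checked from the outside. A minimal sketch, relying on the private `_file` and `_rolled` attributes of CPython's implementation (an implementation detail, not a stable API): in binary mode the spooled buffer is a `BytesIO` object until `rollover()` (or exceeding `max_size`) switches to a real on-disk file.

```python
import io
import tempfile

# Sketch: in binary mode the spooled buffer is a BytesIO until rollover()
# switches it to a real on-disk file.  _file and _rolled are CPython
# implementation details used here purely for illustration.
f = tempfile.SpooledTemporaryFile(max_size=10)
f.write(b'x' * 10)                        # still within max_size
buffered_in_memory = isinstance(f._file, io.BytesIO)
f.rollover()                              # force the switch to disk
rolled = f._rolled
f.close()
print(buffered_in_memory, rolled)         # expected: True True
```

In text mode the buffer is a text object rather than a `BytesIO`, which is why the corrected documentation distinguishes the two cases by *mode*.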
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:55:57 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:55:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Minor_fix_of_previous_commit=2E?= Message-ID: <3Z37tj68ByzSfr@mail.python.org> http://hg.python.org/cpython/rev/e16f331689e6 changeset: 82083:e16f331689e6 branch: 3.3 parent: 82081:8f772825029f parent: 82082:dde4e035b4aa user: Serhiy Storchaka date: Sat Feb 09 11:51:48 2013 +0200 summary: Minor fix of previous commit. files: Doc/library/tempfile.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -83,7 +83,7 @@ The returned object is a file-like object whose :attr:`_file` attribute is either a :class:`BytesIO` or :class:`StringIO` object (depending on - whether specifies binary or text *mode* was specified) or a true file + whether binary or text *mode* was specified) or a true file object, depending on whether :func:`rollover` has been called. This file-like object can be used in a :keyword:`with` statement, just like a normal file. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 10:55:59 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 10:55:59 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317147=2E_Mention_BytesIO_in_SpooledTemporaryFil?= =?utf-8?q?e_documentation=2E?= Message-ID: <3Z37tl1dXdzSg6@mail.python.org> http://hg.python.org/cpython/rev/c75d065a6bc2 changeset: 82084:c75d065a6bc2 parent: 82079:48747ef5f65b parent: 82083:e16f331689e6 user: Serhiy Storchaka date: Sat Feb 09 11:53:09 2013 +0200 summary: Issue #17147. Mention BytesIO in SpooledTemporaryFile documentation. 
files: Doc/library/tempfile.rst | 8 +++++--- Lib/tempfile.py | 4 ++-- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -82,9 +82,11 @@ causes the file to roll over to an on-disk file regardless of its size. The returned object is a file-like object whose :attr:`_file` attribute - is either a :class:`StringIO` object or a true file object, depending on - whether :func:`rollover` has been called. This file-like object can be - used in a :keyword:`with` statement, just like a normal file. + is either a :class:`BytesIO` or :class:`StringIO` object (depending on + whether binary or text *mode* was specified) or a true file + object, depending on whether :func:`rollover` has been called. This + file-like object can be used in a :keyword:`with` statement, just like + a normal file. .. versionchanged:: 3.3 the truncate method now accepts a ``size`` argument. diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -472,8 +472,8 @@ raise class SpooledTemporaryFile: - """Temporary file wrapper, specialized to switch from - StringIO to a real file when it exceeds a certain size or + """Temporary file wrapper, specialized to switch from BytesIO + or StringIO to a real file when it exceeds a certain size or when a fileno is needed. 
""" _rolled = False -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 11:26:56 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 11:26:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzEwMzU1?= =?utf-8?q?=3A_SpooledTemporaryFile_properties_and_xreadline_method_now_wo?= =?utf-8?q?rk_for?= Message-ID: <3Z38ZS4XKszRTD@mail.python.org> http://hg.python.org/cpython/rev/5c2ff6e64c47 changeset: 82085:5c2ff6e64c47 branch: 2.7 parent: 82076:6add6ac6a802 user: Serhiy Storchaka date: Sat Feb 09 12:20:18 2013 +0200 summary: Issue #10355: SpooledTemporaryFile properties and xreadline method now work for unrolled files. files: Lib/tempfile.py | 23 ++++++++++--------- Lib/test/test_tempfile.py | 31 +++++++++++++++++++++++++++ Misc/NEWS | 5 ++++ 3 files changed, 48 insertions(+), 11 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -546,10 +546,6 @@ def closed(self): return self._file.closed - @property - def encoding(self): - return self._file.encoding - def fileno(self): self.rollover() return self._file.fileno() @@ -562,15 +558,17 @@ @property def mode(self): - return self._file.mode + try: + return self._file.mode + except AttributeError: + return self._TemporaryFileArgs[0] @property def name(self): - return self._file.name - - @property - def newlines(self): - return self._file.newlines + try: + return self._file.name + except AttributeError: + return None def next(self): return self._file.next @@ -610,4 +608,7 @@ return rv def xreadlines(self, *args): - return self._file.xreadlines(*args) + try: + return self._file.xreadlines(*args) + except AttributeError: + return iter(self._file.readlines(*args)) diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -738,6 +738,17 @@ f.write(b'x') self.assertTrue(f._rolled) + def 
test_xreadlines(self): + f = self.do_create(max_size=20) + f.write(b'abc\n' * 5) + f.seek(0) + self.assertFalse(f._rolled) + self.assertEqual(list(f.xreadlines()), [b'abc\n'] * 5) + f.write(b'x\ny') + self.assertTrue(f._rolled) + f.seek(0) + self.assertEqual(list(f.xreadlines()), [b'abc\n'] * 5 + [b'x\n', b'y']) + def test_sparse(self): # A SpooledTemporaryFile that is written late in the file will extend # when that occurs @@ -793,6 +804,26 @@ seek(0, 0) self.assertTrue(read(70) == 'a'*35 + 'b'*35) + def test_properties(self): + f = tempfile.SpooledTemporaryFile(max_size=10) + f.write(b'x' * 10) + self.assertFalse(f._rolled) + self.assertEqual(f.mode, 'w+b') + self.assertIsNone(f.name) + with self.assertRaises(AttributeError): + f.newlines + with self.assertRaises(AttributeError): + f.encoding + + f.write(b'x') + self.assertTrue(f._rolled) + self.assertEqual(f.mode, 'w+b') + self.assertIsNotNone(f.name) + with self.assertRaises(AttributeError): + f.newlines + with self.assertRaises(AttributeError): + f.encoding + def test_context_manager_before_rollover(self): # A SpooledTemporaryFile can be used as a context manager with tempfile.SpooledTemporaryFile(max_size=1) as f: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,11 @@ Library ------- +- Issue #10355: In SpooledTemporaryFile class mode and name properties and + xreadlines method now work for unrolled files. encoding and newlines + properties now removed as they have no sense and always produced + AttributeError. + - Issue #16686: Fixed a lot of bugs in audioop module. Fixed crashes in avgpp(), maxpp() and ratecv(). Fixed an integer overflow in add(), bias(), and ratecv(). 
reverse(), lin2lin() and ratecv() no more lose precision for -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 11:26:58 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 11:26:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzEwMzU1?= =?utf-8?q?=3A_SpooledTemporaryFile_properties_now_work_for_unrolled_files?= =?utf-8?q?=2E?= Message-ID: <3Z38ZV1L0LzSf3@mail.python.org> http://hg.python.org/cpython/rev/dfc6902b63d7 changeset: 82086:dfc6902b63d7 branch: 3.2 parent: 82082:dde4e035b4aa user: Serhiy Storchaka date: Sat Feb 09 12:21:14 2013 +0200 summary: Issue #10355: SpooledTemporaryFile properties now work for unrolled files. Remove obsoleted xreadline method. files: Lib/tempfile.py | 30 +++++++++++++------ Lib/test/test_tempfile.py | 39 +++++++++++++++++++++++++++ Misc/NEWS | 4 ++ 3 files changed, 63 insertions(+), 10 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -546,7 +546,12 @@ @property def encoding(self): - return self._file.encoding + try: + return self._file.encoding + except AttributeError: + if 'b' in self._TemporaryFileArgs['mode']: + raise + return self._TemporaryFileArgs['encoding'] def fileno(self): self.rollover() @@ -560,18 +565,26 @@ @property def mode(self): - return self._file.mode + try: + return self._file.mode + except AttributeError: + return self._TemporaryFileArgs['mode'] @property def name(self): - return self._file.name + try: + return self._file.name + except AttributeError: + return None @property def newlines(self): - return self._file.newlines - - def next(self): - return self._file.next + try: + return self._file.newlines + except AttributeError: + if 'b' in self._TemporaryFileArgs['mode']: + raise + return self._TemporaryFileArgs['newline'] def read(self, *args): return self._file.read(*args) @@ -607,9 +620,6 @@ self._check(file) return rv - def xreadlines(self, *args): 
- return self._file.xreadlines(*args) - class TemporaryDirectory(object): """Create and return a temporary directory. This has the same diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -808,6 +808,26 @@ seek(0, 0) self.assertEqual(read(70), b'a'*35 + b'b'*35) + def test_properties(self): + f = tempfile.SpooledTemporaryFile(max_size=10) + f.write(b'x' * 10) + self.assertFalse(f._rolled) + self.assertEqual(f.mode, 'w+b') + self.assertIsNone(f.name) + with self.assertRaises(AttributeError): + f.newlines + with self.assertRaises(AttributeError): + f.encoding + + f.write(b'x') + self.assertTrue(f._rolled) + self.assertEqual(f.mode, 'rb+') + self.assertIsNotNone(f.name) + with self.assertRaises(AttributeError): + f.newlines + with self.assertRaises(AttributeError): + f.encoding + def test_text_mode(self): # Creating a SpooledTemporaryFile with a text mode should produce # a file object reading and writing (Unicode) text strings. 
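The property fallbacks this patch introduces can be observed directly: before rollover there is no real file, so `mode` falls back to the saved constructor arguments and `name` is `None`. A sketch (again using the private `_rolled` attribute, a CPython implementation detail):

```python
import tempfile

# Before rollover, the in-memory buffer has no .mode or .name attribute,
# so the patched properties fall back to the constructor arguments
# ('w+b' is the default mode) and to None for the name.
f = tempfile.SpooledTemporaryFile(max_size=10)
f.write(b'x' * 10)
mode_before, name_before = f.mode, f.name
f.write(b'x')            # exceeds max_size -> rolls over to a real file
rolled = f._rolled
f.close()
print(mode_before, name_before, rolled)   # expected: w+b None True
```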
@@ -818,6 +838,12 @@ f.write("def\n") f.seek(0) self.assertEqual(f.read(), "abc\ndef\n") + self.assertFalse(f._rolled) + self.assertEqual(f.mode, 'w+') + self.assertIsNone(f.name) + self.assertIsNone(f.newlines) + self.assertIsNone(f.encoding) + f.write("xyzzy\n") f.seek(0) self.assertEqual(f.read(), "abc\ndef\nxyzzy\n") @@ -825,6 +851,11 @@ f.write("foo\x1abar\n") f.seek(0) self.assertEqual(f.read(), "abc\ndef\nxyzzy\nfoo\x1abar\n") + self.assertTrue(f._rolled) + self.assertEqual(f.mode, 'w+') + self.assertIsNotNone(f.name) + self.assertEqual(f.newlines, '\n') + self.assertIsNotNone(f.encoding) def test_text_newline_and_encoding(self): f = tempfile.SpooledTemporaryFile(mode='w+', max_size=10, @@ -833,11 +864,19 @@ f.seek(0) self.assertEqual(f.read(), "\u039B\r\n") self.assertFalse(f._rolled) + self.assertEqual(f.mode, 'w+') + self.assertIsNone(f.name) + self.assertIsNone(f.newlines) + self.assertIsNone(f.encoding) f.write("\u039B" * 20 + "\r\n") f.seek(0) self.assertEqual(f.read(), "\u039B\r\n" + ("\u039B" * 20) + "\r\n") self.assertTrue(f._rolled) + self.assertEqual(f.mode, 'w+') + self.assertIsNotNone(f.name) + self.assertIsNotNone(f.newlines) + self.assertEqual(f.encoding, 'utf-8') def test_context_manager_before_rollover(self): # A SpooledTemporaryFile can be used as a context manager diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -215,6 +215,10 @@ Library ------- +- Issue #10355: In SpooledTemporaryFile class mode, name, encoding and + newlines properties now work for unrolled files. Obsoleted and never + working on Python 3 xreadline method now removed. + - Issue #16686: Fixed a lot of bugs in audioop module. Fixed crashes in avgpp(), maxpp() and ratecv(). Fixed an integer overflow in add(), bias(), and ratecv(). 
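The patch above applies one pattern throughout: prefer the wrapped file's own attribute, and fall back to the arguments stored at construction time while the data still lives in the in-memory buffer. A minimal sketch of that pattern (``SpooledProxy`` and ``_args`` are illustrative names, not part of the stdlib):

```python
# Sketch of the fallback used by the patch: delegate to the underlying
# file object, and fall back to stored constructor arguments when the
# in-memory buffer does not provide the attribute.
class SpooledProxy:
    def __init__(self, mode):
        self._args = {'mode': mode}
        self._file = object()  # stand-in for an in-memory buffer that
                               # has no 'mode' attribute before rollover

    @property
    def mode(self):
        try:
            return self._file.mode       # real file after rollover
        except AttributeError:
            return self._args['mode']    # unrolled: report requested mode
```

With this pattern, `SpooledProxy('w+b').mode` yields `'w+b'` even though the buffer object itself has no `mode` attribute, which is exactly the behaviour the new `test_properties` test checks before rollover.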
reverse(), lin2lin() and ratecv() no more lose precision for

--
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sat Feb  9 11:26:59 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sat, 9 Feb 2013 11:26:59 +0100 (CET)
Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?=
 =?utf-8?q?_Issue_=2310355=3A_SpooledTemporaryFile_properties_now_work_for?=
 =?utf-8?q?_unrolled_files=2E?=
Message-ID: <3Z38ZW5CJBzSfY@mail.python.org>

http://hg.python.org/cpython/rev/f36d8ba4eeef
changeset:   82087:f36d8ba4eeef
branch:      3.3
parent:      82083:e16f331689e6
parent:      82086:dfc6902b63d7
user:        Serhiy Storchaka
date:        Sat Feb 09 12:21:52 2013 +0200
summary:
  Issue #10355: SpooledTemporaryFile properties now work for unrolled files.
  Remove obsoleted xreadline method.

files:
  Lib/tempfile.py           |  30 +++++++++++++------
  Lib/test/test_tempfile.py |  39 +++++++++++++++++++++++++++
  Misc/NEWS                 |   4 ++
  3 files changed, 63 insertions(+), 10 deletions(-)


diff --git a/Lib/tempfile.py b/Lib/tempfile.py
--- a/Lib/tempfile.py
+++ b/Lib/tempfile.py
@@ -537,7 +537,12 @@
 
     @property
     def encoding(self):
-        return self._file.encoding
+        try:
+            return self._file.encoding
+        except AttributeError:
+            if 'b' in self._TemporaryFileArgs['mode']:
+                raise
+            return self._TemporaryFileArgs['encoding']
 
     def fileno(self):
         self.rollover()
@@ -551,18 +556,26 @@
 
     @property
     def mode(self):
-        return self._file.mode
+        try:
+            return self._file.mode
+        except AttributeError:
+            return self._TemporaryFileArgs['mode']
 
     @property
     def name(self):
-        return self._file.name
+        try:
+            return self._file.name
+        except AttributeError:
+            return None
 
     @property
     def newlines(self):
-        return self._file.newlines
-
-    def next(self):
-        return self._file.next
+        try:
+            return self._file.newlines
+        except AttributeError:
+            if 'b' in self._TemporaryFileArgs['mode']:
+                raise
+            return self._TemporaryFileArgs['newline']
 
     def read(self, *args):
         return self._file.read(*args)
@@ -603,9 +616,6 @@
         self._check(file)
         return rv
 
-    def xreadlines(self, *args):
-        return self._file.xreadlines(*args)
-
 
 class TemporaryDirectory(object):
     """Create and return a temporary directory.  This has the same
diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py
--- a/Lib/test/test_tempfile.py
+++ b/Lib/test/test_tempfile.py
@@ -745,6 +745,26 @@
         seek(0, 0)
         self.assertEqual(read(70), b'a'*35 + b'b'*35)
 
+    def test_properties(self):
+        f = tempfile.SpooledTemporaryFile(max_size=10)
+        f.write(b'x' * 10)
+        self.assertFalse(f._rolled)
+        self.assertEqual(f.mode, 'w+b')
+        self.assertIsNone(f.name)
+        with self.assertRaises(AttributeError):
+            f.newlines
+        with self.assertRaises(AttributeError):
+            f.encoding
+
+        f.write(b'x')
+        self.assertTrue(f._rolled)
+        self.assertEqual(f.mode, 'rb+')
+        self.assertIsNotNone(f.name)
+        with self.assertRaises(AttributeError):
+            f.newlines
+        with self.assertRaises(AttributeError):
+            f.encoding
+
     def test_text_mode(self):
         # Creating a SpooledTemporaryFile with a text mode should produce
         # a file object reading and writing (Unicode) text strings.
@@ -755,6 +775,12 @@
         f.write("def\n")
         f.seek(0)
         self.assertEqual(f.read(), "abc\ndef\n")
+        self.assertFalse(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNone(f.name)
+        self.assertIsNone(f.newlines)
+        self.assertIsNone(f.encoding)
+
         f.write("xyzzy\n")
         f.seek(0)
         self.assertEqual(f.read(), "abc\ndef\nxyzzy\n")
@@ -762,6 +788,11 @@
         f.write("foo\x1abar\n")
         f.seek(0)
         self.assertEqual(f.read(), "abc\ndef\nxyzzy\nfoo\x1abar\n")
+        self.assertTrue(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNotNone(f.name)
+        self.assertEqual(f.newlines, '\n')
+        self.assertIsNotNone(f.encoding)
 
     def test_text_newline_and_encoding(self):
         f = tempfile.SpooledTemporaryFile(mode='w+', max_size=10,
@@ -770,11 +801,19 @@
         f.seek(0)
         self.assertEqual(f.read(), "\u039B\r\n")
         self.assertFalse(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNone(f.name)
+        self.assertIsNone(f.newlines)
+        self.assertIsNone(f.encoding)
 
         f.write("\u039B" * 20 + "\r\n")
         f.seek(0)
         self.assertEqual(f.read(), "\u039B\r\n" + ("\u039B" * 20) + "\r\n")
         self.assertTrue(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNotNone(f.name)
+        self.assertIsNotNone(f.newlines)
+        self.assertEqual(f.encoding, 'utf-8')
 
     def test_context_manager_before_rollover(self):
         # A SpooledTemporaryFile can be used as a context manager
diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -169,6 +169,10 @@
 Library
 -------
 
+- Issue #10355: In SpooledTemporaryFile class mode, name, encoding and
+  newlines properties now work for unrolled files.  Obsoleted and never
+  working on Python 3 xreadline method now removed.
+
 - Issue #16686: Fixed a lot of bugs in audioop module.  Fixed crashes in
   avgpp(), maxpp() and ratecv().  Fixed an integer overflow in add(),
   bias(), and ratecv().
reverse(), lin2lin() and ratecv() no more lose precision for

--
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sat Feb  9 11:27:01 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sat, 9 Feb 2013 11:27:01 +0100 (CET)
Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?=
 =?utf-8?q?=29=3A_Issue_=2310355=3A_SpooledTemporaryFile_properties_now_wo?=
 =?utf-8?q?rk_for_unrolled_files=2E?=
Message-ID: <3Z38ZY22jZzSfr@mail.python.org>

http://hg.python.org/cpython/rev/f1a13191f0c8
changeset:   82088:f1a13191f0c8
parent:      82084:c75d065a6bc2
parent:      82087:f36d8ba4eeef
user:        Serhiy Storchaka
date:        Sat Feb 09 12:22:29 2013 +0200
summary:
  Issue #10355: SpooledTemporaryFile properties now work for unrolled files.
  Remove obsoleted xreadline method.

files:
  Lib/tempfile.py           |  30 +++++++++++++------
  Lib/test/test_tempfile.py |  39 +++++++++++++++++++++++++++
  Misc/NEWS                 |   4 ++
  3 files changed, 63 insertions(+), 10 deletions(-)


diff --git a/Lib/tempfile.py b/Lib/tempfile.py
--- a/Lib/tempfile.py
+++ b/Lib/tempfile.py
@@ -539,7 +539,12 @@
 
     @property
     def encoding(self):
-        return self._file.encoding
+        try:
+            return self._file.encoding
+        except AttributeError:
+            if 'b' in self._TemporaryFileArgs['mode']:
+                raise
+            return self._TemporaryFileArgs['encoding']
 
     def fileno(self):
         self.rollover()
@@ -553,18 +558,26 @@
 
     @property
     def mode(self):
-        return self._file.mode
+        try:
+            return self._file.mode
+        except AttributeError:
+            return self._TemporaryFileArgs['mode']
 
     @property
     def name(self):
-        return self._file.name
+        try:
+            return self._file.name
+        except AttributeError:
+            return None
 
     @property
     def newlines(self):
-        return self._file.newlines
-
-    def next(self):
-        return self._file.next
+        try:
+            return self._file.newlines
+        except AttributeError:
+            if 'b' in self._TemporaryFileArgs['mode']:
+                raise
+            return self._TemporaryFileArgs['newline']
 
     def read(self, *args):
         return self._file.read(*args)
@@ -605,9 +618,6 @@
         self._check(file)
         return rv
 
-    def xreadlines(self, *args):
-        return self._file.xreadlines(*args)
-
 
 class TemporaryDirectory(object):
     """Create and return a temporary directory.  This has the same
diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py
--- a/Lib/test/test_tempfile.py
+++ b/Lib/test/test_tempfile.py
@@ -745,6 +745,26 @@
         seek(0, 0)
         self.assertEqual(read(70), b'a'*35 + b'b'*35)
 
+    def test_properties(self):
+        f = tempfile.SpooledTemporaryFile(max_size=10)
+        f.write(b'x' * 10)
+        self.assertFalse(f._rolled)
+        self.assertEqual(f.mode, 'w+b')
+        self.assertIsNone(f.name)
+        with self.assertRaises(AttributeError):
+            f.newlines
+        with self.assertRaises(AttributeError):
+            f.encoding
+
+        f.write(b'x')
+        self.assertTrue(f._rolled)
+        self.assertEqual(f.mode, 'rb+')
+        self.assertIsNotNone(f.name)
+        with self.assertRaises(AttributeError):
+            f.newlines
+        with self.assertRaises(AttributeError):
+            f.encoding
+
     def test_text_mode(self):
         # Creating a SpooledTemporaryFile with a text mode should produce
         # a file object reading and writing (Unicode) text strings.
@@ -755,6 +775,12 @@
         f.write("def\n")
         f.seek(0)
         self.assertEqual(f.read(), "abc\ndef\n")
+        self.assertFalse(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNone(f.name)
+        self.assertIsNone(f.newlines)
+        self.assertIsNone(f.encoding)
+
         f.write("xyzzy\n")
         f.seek(0)
         self.assertEqual(f.read(), "abc\ndef\nxyzzy\n")
@@ -762,6 +788,11 @@
         f.write("foo\x1abar\n")
         f.seek(0)
         self.assertEqual(f.read(), "abc\ndef\nxyzzy\nfoo\x1abar\n")
+        self.assertTrue(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNotNone(f.name)
+        self.assertEqual(f.newlines, '\n')
+        self.assertIsNotNone(f.encoding)
 
     def test_text_newline_and_encoding(self):
         f = tempfile.SpooledTemporaryFile(mode='w+', max_size=10,
@@ -770,11 +801,19 @@
         f.seek(0)
         self.assertEqual(f.read(), "\u039B\r\n")
         self.assertFalse(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNone(f.name)
+        self.assertIsNone(f.newlines)
+        self.assertIsNone(f.encoding)
 
         f.write("\u039B" * 20 + "\r\n")
         f.seek(0)
         self.assertEqual(f.read(), "\u039B\r\n" + ("\u039B" * 20) + "\r\n")
         self.assertTrue(f._rolled)
+        self.assertEqual(f.mode, 'w+')
+        self.assertIsNotNone(f.name)
+        self.assertIsNotNone(f.newlines)
+        self.assertEqual(f.encoding, 'utf-8')
 
     def test_context_manager_before_rollover(self):
         # A SpooledTemporaryFile can be used as a context manager
diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -241,6 +241,10 @@
 Library
 -------
 
+- Issue #10355: In SpooledTemporaryFile class mode, name, encoding and
+  newlines properties now work for unrolled files.  Obsoleted and never
+  working on Python 3 xreadline method now removed.
+
 - Issue #16686: Fixed a lot of bugs in audioop module.  Fixed crashes in
   avgpp(), maxpp() and ratecv().  Fixed an integer overflow in add(),
   bias(), and ratecv().
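The rollover behaviour these tests exercise can be observed directly on a current Python 3 (the exact values of ``mode`` and ``name`` vary between versions and platforms, but the in-memory-to-disk transition itself is stable):

```python
import tempfile

# Spool up to 10 bytes in memory; exceeding max_size forces rollover
# to a real temporary file on disk.
f = tempfile.SpooledTemporaryFile(max_size=10)
f.write(b'x' * 10)
print(f._rolled)            # False: data still in the in-memory buffer
f.write(b'x')               # 11th byte exceeds max_size
print(f._rolled)            # True: rolled over to a real temp file
print(f.name is not None)   # True: the on-disk file has a real name
f.close()
```

Note that rollover is triggered only when the spooled size *exceeds* `max_size`, not when it reaches it, which is why the tests write exactly `max_size` bytes first and then one more.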
reverse(), lin2lin() and ratecv() no more lose precision for

--
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sat Feb  9 12:48:38 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sat, 9 Feb 2013 12:48:38 +0100 (CET)
Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzczNTg6?=
 =?utf-8?q?_cStringIO=2EStringIO_now_supports_writing_to_and_reading_from?=
Message-ID: <3Z3BNk19WXzSbG@mail.python.org>

http://hg.python.org/cpython/rev/a025b04332fe
changeset:   82089:a025b04332fe
branch:      2.7
parent:      82085:5c2ff6e64c47
user:        Serhiy Storchaka
date:        Sat Feb 09 13:47:43 2013 +0200
summary:
  Issue #7358: cStringIO.StringIO now supports writing to and reading from
  a stream larger than 2 GiB on 64-bit systems.

files:
  Lib/test/test_StringIO.py |  40 ++++++++++++
  Misc/NEWS                 |   3 +
  Modules/cStringIO.c       |  85 ++++++++++++++++----------
  3 files changed, 96 insertions(+), 32 deletions(-)


diff --git a/Lib/test/test_StringIO.py b/Lib/test/test_StringIO.py
--- a/Lib/test/test_StringIO.py
+++ b/Lib/test/test_StringIO.py
@@ -5,6 +5,7 @@
 import cStringIO
 import types
 import array
+import sys
 
 from test import test_support
@@ -105,6 +106,45 @@
         self._fp.close()
         self.assertRaises(ValueError, self._fp.getvalue)
 
+    @test_support.bigmemtest(test_support._2G + 2**26, memuse=2.001)
+    def test_reads_from_large_stream(self, size):
+        linesize = 2**26 # 64 MiB
+        lines = ['x' * (linesize - 1) + '\n'] * (size // linesize) + \
+                ['y' * (size % linesize)]
+        f = self.MODULE.StringIO(''.join(lines))
+        for i, expected in enumerate(lines):
+            line = f.read(len(expected))
+            self.assertEqual(len(line), len(expected))
+            self.assertEqual(line, expected)
+        self.assertEqual(f.read(), '')
+        f.seek(0)
+        for i, expected in enumerate(lines):
+            line = f.readline()
+            self.assertEqual(len(line), len(expected))
+            self.assertEqual(line, expected)
+        self.assertEqual(f.readline(), '')
+        f.seek(0)
+        self.assertEqual(f.readlines(), lines)
+        self.assertEqual(f.readlines(), [])
+        f.seek(0)
+        self.assertEqual(f.readlines(size), lines)
+        self.assertEqual(f.readlines(), [])
+
+    # In worst case cStringIO requires 2 + 1 + 1/2 + 1/2**2 + ... = 4
+    # bytes per input character.
+    @test_support.bigmemtest(test_support._2G, memuse=4)
+    def test_writes_to_large_stream(self, size):
+        s = 'x' * 2**26 # 64 MiB
+        f = self.MODULE.StringIO()
+        n = size
+        while n > len(s):
+            f.write(s)
+            n -= len(s)
+        s = None
+        f.write('x' * n)
+        self.assertEqual(len(f.getvalue()), size)
+
+
 class TestStringIO(TestGenericStringIO):
     MODULE = StringIO
diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -202,6 +202,9 @@
 Library
 -------
 
+- Issue #7358: cStringIO.StringIO now supports writing to and reading from
+  a stream larger than 2 GiB on 64-bit systems.
+
 - Issue #10355: In SpooledTemporaryFile class mode and name properties and
   xreadlines method now work for unrolled files.  encoding and newlines
   properties now removed as they have no sense and always produced
diff --git a/Modules/cStringIO.c b/Modules/cStringIO.c
--- a/Modules/cStringIO.c
+++ b/Modules/cStringIO.c
@@ -170,10 +170,15 @@
         n = l;
         if (n < 0) n=0;
     }
+    if (n > INT_MAX) {
+        PyErr_SetString(PyExc_OverflowError,
+                        "length too large");
+        return -1;
+    }
 
     *output=((IOobject*)self)->buf + ((IOobject*)self)->pos;
     ((IOobject*)self)->pos += n;
-    return n;
+    return (int)n;
 }
 
 static PyObject *
@@ -192,26 +197,33 @@
 static int
 IO_creadline(PyObject *self, char **output) {
-    char *n, *s;
-    Py_ssize_t l;
+    char *n, *start, *end;
+    Py_ssize_t len;
 
     if (!IO__opencheck(IOOOBJECT(self))) return -1;
 
-    for (n = ((IOobject*)self)->buf + ((IOobject*)self)->pos,
-           s = ((IOobject*)self)->buf + ((IOobject*)self)->string_size;
-         n < s && *n != '\n'; n++);
+    n = start = ((IOobject*)self)->buf + ((IOobject*)self)->pos;
+    end = ((IOobject*)self)->buf + ((IOobject*)self)->string_size;
+    while (n < end && *n != '\n')
+        n++;
 
-    if (n < s) n++;
+    if (n < end) n++;
 
-    *output=((IOobject*)self)->buf + ((IOobject*)self)->pos;
-    l = n - ((IOobject*)self)->buf - ((IOobject*)self)->pos;
+    len = n - start;
+    if (len > INT_MAX) {
+        PyErr_SetString(PyExc_OverflowError,
+                        "length too large");
+        return -1;
+    }
 
-    assert(IOOOBJECT(self)->pos <= PY_SSIZE_T_MAX - l);
+    *output=start;
+
+    assert(IOOOBJECT(self)->pos <= PY_SSIZE_T_MAX - len);
     assert(IOOOBJECT(self)->pos >= 0);
     assert(IOOOBJECT(self)->string_size >= 0);
-    ((IOobject*)self)->pos += l;
-    return (int)l;
+    ((IOobject*)self)->pos += len;
+    return (int)len;
 }
 
 static PyObject *
@@ -239,9 +251,9 @@
     int n;
     char *output;
     PyObject *result, *line;
-    int hint = 0, length = 0;
+    Py_ssize_t hint = 0, length = 0;
 
-    if (!PyArg_ParseTuple(args, "|i:readlines", &hint)) return NULL;
+    if (!PyArg_ParseTuple(args, "|n:readlines", &hint)) return NULL;
 
     result = PyList_New(0);
     if (!result)
@@ -377,31 +389,41 @@
 
 static int
-O_cwrite(PyObject *self, const char *c, Py_ssize_t l) {
-    Py_ssize_t newl;
+O_cwrite(PyObject *self, const char *c, Py_ssize_t len) {
+    Py_ssize_t newpos;
     Oobject *oself;
     char *newbuf;
 
     if (!IO__opencheck(IOOOBJECT(self))) return -1;
     oself = (Oobject *)self;
 
-    newl = oself->pos+l;
-    if (newl >= oself->buf_size) {
-        oself->buf_size *= 2;
-        if (oself->buf_size <= newl) {
-            assert(newl + 1 < INT_MAX);
-            oself->buf_size = (int)(newl+1);
+    if (len > INT_MAX) {
+        PyErr_SetString(PyExc_OverflowError,
+                        "length too large");
+        return -1;
+    }
+    assert(len >= 0);
+    if (oself->pos >= PY_SSIZE_T_MAX - len) {
+        PyErr_SetString(PyExc_OverflowError,
+                        "new position too large");
+        return -1;
+    }
+    newpos = oself->pos + len;
+    if (newpos >= oself->buf_size) {
+        size_t newsize = oself->buf_size;
+        newsize *= 2;
+        if (newsize <= (size_t)newpos || newsize > PY_SSIZE_T_MAX) {
+            assert(newpos < PY_SSIZE_T_MAX - 1);
+            newsize = newpos + 1;
         }
-        newbuf = (char*)realloc(oself->buf, oself->buf_size);
+        newbuf = (char*)realloc(oself->buf, newsize);
         if (!newbuf) {
            PyErr_SetString(PyExc_MemoryError,"out of memory");
-           free(oself->buf);
-           oself->buf = 0;
-           oself->buf_size = oself->pos = 0;
            return -1;
-          }
+        }
+        oself->buf_size = (Py_ssize_t)newsize;
        oself->buf = newbuf;
-      }
+    }
 
     if (oself->string_size < oself->pos) {
         /* In case of overseek, pad with null bytes the buffer region
            between
@@ -416,16 +438,15 @@
                (oself->pos - oself->string_size) * sizeof(char));
     }
 
-    memcpy(oself->buf+oself->pos,c,l);
+    memcpy(oself->buf + oself->pos, c, len);
 
-    assert(oself->pos + l < INT_MAX);
-    oself->pos += (int)l;
+    oself->pos = newpos;
 
     if (oself->string_size < oself->pos) {
         oself->string_size = oself->pos;
     }
 
-    return (int)l;
+    return (int)len;
 }
 
 static PyObject *

--
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sat Feb  9 13:42:26 2013
From: python-checkins at python.org (nick.coghlan)
Date: Sat, 9 Feb 2013 13:42:26 +0100 (CET)
Subject: [Python-checkins] =?utf-8?q?peps=3A_Explain_the_version_scheme_in?=
 =?utf-8?q?_PEP_426?=
Message-ID: <3Z3CZp1XRRzSbT@mail.python.org>

http://hg.python.org/peps/rev/415f9d9a0014
changeset:   4722:415f9d9a0014
user:        Nick Coghlan
date:        Sat Feb 09 22:40:18 2013 +1000
summary:
  Explain the version scheme in PEP 426

files:
  pep-0426.txt |  517 +++++++++++++++++++++++++++++++-------
  1 files changed, 414 insertions(+), 103 deletions(-)


diff --git a/pep-0426.txt b/pep-0426.txt
--- a/pep-0426.txt
+++ b/pep-0426.txt
@@ -29,7 +29,7 @@
 extension mechanism. It also adds support for optional features of
 distributions and allows the description to be placed into a payload
 section. Finally, this version addresses several issues with the
-previous iteration of the standard version numbering scheme.
+previous iteration of the standard version identification scheme.
 
 
 Metadata files
@@ -101,7 +101,7 @@
 Version
 -------
 
-A string containing the distribution's version number. See `Version scheme`_
+A string containing the distribution's version identifier. See `Version scheme`_
 below.
 
 Example::
@@ -300,7 +300,7 @@
 projects to depend only on having at least one of them installed.
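The "memuse=4" figure in the comment on ``test_writes_to_large_stream`` above can be checked with a small model of ``O_cwrite``'s doubling strategy. This is a sketch under the stated assumptions: ``peak_memuse``, its parameters, and the initial buffer size are illustrative, and "peak" counts the old and new buffers that briefly coexist during ``realloc`` plus the input data itself:

```python
def peak_memuse(total, chunk=1024, init=128):
    """Model the doubling growth in O_cwrite and return peak transient
    memory as a multiple of the number of bytes written.

    Counts the input data plus the old and new buffers that both exist
    for a moment while realloc() copies the contents across.
    """
    buf, pos, peak = init, 0, 0
    while pos < total:
        newpos = pos + min(chunk, total - pos)
        if newpos >= buf:          # buffer full: grow it
            new = buf * 2          # doubling, as in O_cwrite
            if new <= newpos:      # doubling not enough for this write
                new = newpos + 1
            peak = max(peak, buf + new)  # old + new buffers during realloc
            buf = new
        pos = newpos
    return (peak + total) / total
```

The worst case occurs when the written size just exceeds a buffer capacity, forcing one last doubling to roughly twice the data size: old buffer (~1x) plus new buffer (~2x) plus the input data (1x) approaches, but never exceeds, 4 bytes per input byte, which matches the `memuse=4` annotation.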
 A version declaration may be supplied and must follow the rules described
-in `Version scheme`_. The distribution's version number will be implied
+in `Version scheme`_. The distribution's version identifier will be implied
 if none is specified.
 
 Examples::
@@ -446,7 +446,7 @@
 dependency, optionally followed by a version declaration within
 parentheses.
 
-Because they refer to non-Python software releases, version numbers
+Because they refer to non-Python software releases, version identifiers
 for this field are **not** required to conform to the format described
 in `Version scheme`_: they should correspond to the
 version scheme used by the external dependency.
@@ -549,31 +549,310 @@
 Version scheme
 ==============
 
-Version numbers must comply with the following scheme::
+Version identifiers must comply with the following scheme::
 
-    N.N[.N]+[{a|b|c|rc}N][.postN][.devN]
+    N[.N]+[{a|b|c|rc}N][.postN][.devN]
 
-Version numbers which do not comply with this scheme are an
-error. Projects which wish to use non-compliant version numbers may
-be heuristically normalized to this scheme and are less likely to sort
-correctly.
+Version identifiers which do not comply with this scheme are an error.
+Projects which wish to use non-compliant version identifiers must restrict
+themselves to metadata v1.1 (PEP 314) or earlier, as those specifications
+do not constrain the versioning scheme.
 
-Suffixes and ordering
----------------------
+Any given version will be a "release", "pre-release", "post-release" or
+"developmental release" as defined in the following sections.
 
-The following suffixes are the only ones allowed at the given level of the
-version hierarchy and they are ordered as listed.
+.. note::
 
-Within a numeric release (``1.0``, ``2.7.3``)::
+   Some hard to read version identifiers are permitted by this scheme
+   in order to better accommodate the wide range of versioning practices
+   across existing public and private Python projects.
+
+   Accordingly, some of the versioning practices which are technically
+   permitted by the PEP are strongly discouraged for new projects. Where
+   this is the case, the relevant details are noted in the following
+   sections.
+
+
+Releases
+--------
+
+A release number is a version identifier that consists solely of one or
+more non-negative integer values, separated by dots::
+
+    N[.N]+
+
+Releases within a project must be numbered in a consistently increasing
+fashion. Ordering considers the numeric value of each component
+in turn, with "component does not exist" sorted ahead of all numeric
+values.
+
+While any number of additional components after the first are permitted
+under this scheme, the most common variants are to use two components
+("major.minor") or three components ("major.minor.micro").
+
+For example::
+
+    0.9
+    0.9.1
+    0.9.2
+    ...
+    0.9.10
+    0.9.11
+    1.0
+    1.0.1
+    1.1
+    2.0
+    2.0.1
+
+A release series is any set of release numbers that start with a common
+prefix. For example, ``3.3.1``, ``3.3.5`` and ``3.3.9.45`` are all
+part of the ``3.3`` release series.
+
+.. note::
+
+   Using both ``X.Y`` and ``X.Y.0`` as distinct release numbers within the
+   scope of a single release series is strongly discouraged, as it makes the
+   version ordering ambiguous for human readers. Automated tools should
+   either treat this case as an error, or else interpret an ``X.Y.0``
+   release as coming *after* the corresponding ``X.Y`` release.
+
+   The recommended practice is to always use release numbers of a consistent
+   length (that is, always include the trailing ``.0``). An acceptable
+   alternative is to consistently omit the trailing ``.0``. The example
+   above shows both styles, always including the ``.0`` at the second
+   level and consistently omitting it at the third level.
+
+.. note::
+
+   While date based release numbers, using the forms ``year.month`` or
+   ``year.month.day``, are technically compliant with this scheme, their use
+   is strongly discouraged as they can hinder automatic translation to
+   other versioning schemes. In particular, they are completely
+   incompatible with semantic versioning.
+
+
+Semantic versioning
+-------------------
+
+`Semantic versioning`_ is a popular version identification scheme that is
+more prescriptive than this PEP regarding the significance of different
+elements of a release number. Even if a project chooses not to abide by
+the details of semantic versioning, the scheme is worth understanding as
+it covers many of the issues that can arise when depending on other
+distributions, and when publishing a distribution that others rely on.
+
+The "Major.Minor.Patch" (described in this PEP as "major.minor.micro")
+aspects of semantic versioning (clauses 1-9 in the 2.0.0-rc-1 specification)
+are fully compatible with the version scheme defined in this PEP, and abiding
+by these aspects is encouraged.
+
+Semantic versions containing a hyphen (pre-releases - clause 10) or a
+plus sign (builds - clause 11) are *not* compatible with this PEP
+and are not permitted in compliant metadata. Use this PEP's deliberately
+more restricted pre-release and developmental release notation instead.
+
+.. _Semantic versioning: http://semver.org/
+
+
+Pre-releases
+------------
+
+Some projects use an "alpha, beta, release candidate" pre-release cycle to
+support testing by their users prior to a full release.
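The numeric ordering rule in the Releases section above (compare components numerically, with a missing component sorting ahead of any value) falls out of Python's own tuple comparison, exactly as the PEP later suggests with ``tuple(map(int, ...))``. A short sketch (``release_key`` is an illustrative name):

```python
def release_key(version):
    # "0.9.10" -> (0, 9, 10); tuples compare element-wise, and a shorter
    # tuple sorts first when it is a prefix of a longer one, matching
    # "component does not exist" sorting ahead of all numeric values.
    return tuple(map(int, version.split(".")))

releases = ["0.9.2", "1.0", "0.9", "0.9.10", "2.0.1", "0.9.1"]
print(sorted(releases, key=release_key))
# Numeric, not lexicographic: "0.9.2" sorts before "0.9.10",
# and "0.9" sorts before "0.9.1".
```

A plain string sort would put `"0.9.10"` before `"0.9.2"`, which is precisely the failure mode the numeric rule avoids.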
+
+If used as part of a project's development cycle, these pre-releases are
+indicated by a suffix appended directly to the last component of the
+release number::
+
+    X.YaN   # Alpha release
+    X.YbN   # Beta release
+    X.YcN   # Release candidate (alternative notation: X.YrcN)
+    X.Y     # Full release
+
+The pre-release suffix consists of an alphabetical identifier for the
+pre-release phase, along with a non-negative integer value. Pre-releases for
+a given release are ordered first by phase (alpha, beta, release candidate)
+and then by the numerical component within that phase.
+
+.. note::
+
+   Using both ``c`` and ``rc`` to identify release candidates within
+   the scope of a single release is strongly discouraged, as it makes the
+   version ordering ambiguous for human readers. Automated tools should
+   either treat this case as an error, or else interpret all ``rc`` versions
+   as coming after all ``c`` versions (that is, ``rc1`` indicates a later
+   version than ``c2``).
+
+
+Post-releases
+-------------
+
+Some projects use post-releases to address minor errors in a release that
+do not affect the distributed software (for example, correcting an error
+in the release notes).
+
+If used as part of a project's development cycle, these post-releases are
+indicated by a suffix appended directly to the last component of the
+release number::
+
+    X.Y.postN   # Post-release
+
+The post-release suffix consists of the string ``.post``, followed by a
+non-negative integer value. Post-releases are ordered by their
+numerical component, immediately following the corresponding release,
+and ahead of any subsequent release.
+
+.. note::
+
+   The use of post-releases to publish maintenance releases containing
+   actual bug fixes is strongly discouraged. In general, it is better
+   to use a longer release number and increment the final component
+   for each maintenance release.
+
+Post-releases are also permitted for pre-releases::
+
+    X.YaN.postM   # Post-release of an alpha release
+    X.YbN.postM   # Post-release of a beta release
+    X.YcN.postM   # Post-release of a release candidate
+
+.. note::
+
+   Creating post-releases of pre-releases is strongly discouraged, as
+   it makes the version identifier difficult to parse for human readers.
+   In general, it is substantially clearer to simply create a new
+   pre-release by incrementing the numeric component.
+
+
+Developmental releases
+----------------------
+
+Some projects make regular developmental releases, and system packagers
+(especially for Linux distributions) may wish to create early releases
+which do not conflict with later project releases.
+
+If used as part of a project's development cycle, these developmental
+releases are indicated by a suffix appended directly to the last
+component of the release number::
+
+    X.Y.devN   # Developmental release
+
+The developmental release suffix consists of the string ``.dev``,
+followed by a non-negative integer value. Developmental releases are ordered
+by their numerical component, immediately before the corresponding release
+(and before any pre-releases), and following any previous release.
+
+Developmental releases are also permitted for pre-releases and
+post-releases::
+
+    X.YaN.devM   # Developmental release of an alpha release
+    X.YbN.devM   # Developmental release of a beta release
+    X.YcN.devM   # Developmental release of a release candidate
+    X.Y.postN.devM   # Developmental release of a post-release
+
+.. note::
+
+   Creating developmental releases of pre-releases is strongly
+   discouraged, as it makes the version identifier difficult to parse for
+   human readers. In general, it is substantially clearer to simply create
+   additional pre-releases by incrementing the numeric component.
+
+   Developmental releases of post-releases are also generally discouraged,
+   but they may be appropriate for projects which use the post-release
+   notation for full maintenance releases which may include code changes.
+
+
+Examples of compliant version schemes
+-------------------------------------
+
+The standard version scheme is designed to encompass a wide range of
+identification practices across public and private Python projects. In
+practice, a single project attempting to use the full flexibility offered
+by the scheme would create a situation where human users had difficulty
+figuring out the relative order of versions, even though the rules above
+ensure all compliant tools will order them consistently.
+
+The following examples illustrate a small selection of the different
+approaches projects may choose to identify their releases, while still
+ensuring that the "latest release" and the "latest stable release" can
+be easily determined, both by human users and automated tools.
+
+Simple "major.minor" versioning::
+
+    0.1
+    0.2
+    0.3
+    1.0
+    1.1
+    ...
+
+Simple "major.minor.micro" versioning::
+
+    1.1.0
+    1.1.1
+    1.1.2
+    1.2.0
+    ...
+
+"major.minor" versioning with alpha, beta and release candidate
+pre-releases::
+
+    0.9
+    1.0a1
+    1.0a2
+    1.0b1
+    1.0c1
+    1.0
+    1.1a1
+    ...
+
+"major.minor" versioning with developmental releases, release candidates
+and post-releases for minor corrections::
+
+    0.9
+    1.0.dev1
+    1.0.dev2
+    1.0.dev3
+    1.0.dev4
+    1.0rc1
+    1.0rc2
+    1.0
+    1.0.post1
+    1.1.dev1
+    ...
+
+
+Summary of permitted suffixes and relative ordering
+---------------------------------------------------
+
+.. note::
+
+   This section is intended primarily for authors of tools that
+   automatically process distribution metadata, rather than authors
+   of Python distributions deciding on a versioning scheme.
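The relative ordering rules from the sections above can be condensed into a single sort key. This is a sketch, not the reference implementation (``version_key``, ``_PHASES``, and the sentinel rank values are illustrative choices): releases compare as integer tuples, pre-release phases rank a < b < c < rc, a bare ``.devN`` sorts before any pre-release of the same release, post-releases sort after the final release, and a ``.devN`` suffix sorts immediately before whatever it modifies.

```python
import re

_VERSION_RE = re.compile(
    r"^(?P<release>\d+(?:\.\d+)*)"
    r"(?:(?P<phase>a|b|rc|c)(?P<phasenum>\d+))?"
    r"(?:\.post(?P<post>\d+))?"
    r"(?:\.dev(?P<dev>\d+))?$"
)
_PHASES = {"a": 0, "b": 1, "c": 2, "rc": 3}  # rc sorts after c, as noted

def version_key(v):
    m = _VERSION_RE.match(v)
    if m is None:
        raise ValueError("non-compliant version: %r" % (v,))
    release = tuple(map(int, m.group("release").split(".")))
    if m.group("phase"):
        pre = (_PHASES[m.group("phase")], int(m.group("phasenum")))
    elif m.group("post") is None and m.group("dev") is not None:
        pre = (-1, 0)   # X.Y.devN sorts before any pre-release of X.Y
    else:
        pre = (4, 0)    # final release: after every pre-release phase
    post = -1 if m.group("post") is None else int(m.group("post"))
    dev = (1, 0) if m.group("dev") is None else (0, int(m.group("dev")))
    return (release, pre, post, dev)

# Reproduces the ordering of the PEP's combined example:
ordered = ["1.0.dev456", "1.0a1", "1.0a2.dev456", "1.0a12.dev456", "1.0a12",
           "1.0b1.dev456", "1.0b2", "1.0b2.post345.dev456", "1.0b2.post345",
           "1.0c1.dev456", "1.0c1", "1.0", "1.0.post456.dev34",
           "1.0.post456", "1.1.dev1"]
assert ordered == sorted(ordered, key=version_key)
```

The `(-1, 0)` and `(4, 0)` sentinels encode the two irregularities of the scheme: a developmental release of a final version precedes that version's pre-releases, while the final version itself follows all of them.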
+
+The numeric release component of version identifiers should be sorted in
+the same order as Python's tuple sorting when the release number is
+parsed as follows::
+
+    tuple(map(int, release_number.split(".")))
+
+Within a numeric release (``1.0``, ``2.7.3``), the following suffixes
+are permitted and are ordered as shown::
 
     .devN, aN, bN, cN, rcN, , .postN
 
+Note that `rc` will always sort after `c` (regardless of the numeric
+component) although they are semantically equivalent. Tools are free to
+reject this case as ambiguous and remain in compliance with the PEP.
+
 Within an alpha (``1.0a1``), beta (``1.0b1``), or release candidate
-(``1.0c1``, ``1.0rc1``)::
+(``1.0c1``, ``1.0rc1``), the following suffixes are permitted and are
+ordered as shown::
 
     .devN, , .postN
 
-Within a post release (``1.0.post1``)::
+Within a post-release (``1.0.post1``), the following suffixes are permitted
+and are ordered as shown::
 
     .devN,
 
@@ -583,16 +862,7 @@
 Within a given suffix, ordering is by the value of the numeric component.
 
-Note that `rc` will always sort after `c` (regardless of the numeric
-component) although they are semantically equivalent. It is suggested
-that within a particular project you do not mix `c` and `rc`, especially
-within the same numeric version.
-
-
-Example version order
----------------------
-
-::
+The following example covers many of the possible combinations::
 
     1.0.dev456
     1.0a1
@@ -601,6 +871,7 @@
     1.0a12
     1.0b1.dev456
     1.0b2
+    1.0b2.post345.dev456
     1.0b2.post345
     1.0c1.dev456
     1.0c1
@@ -609,81 +880,85 @@
     1.0.post456
     1.1.dev1
 
-Recommended subset
-------------------
-
-The PEP authors recommend using a subset of the allowed version scheme,
-similar to http://semver.org/ but without hyphenated versions.
-
-* Version numbers are always three positive digits ``X.Y.Z`` (Major.Minor.Patch)
-* The patch version is incremented for backwards-compatible bug fixes.
-* The minor version is incremented for backwards-compatible API additions.
-  When the minor version is incremented the patch version resets to 0.
-* The major version is incremented for backwards-incompatible API changes.
-  When the major version is incremented the minor and patch versions
-  reset to 0.
-* Pre-release versions ending in ``a``, ``b``, and ``c`` may be used.
-* Dev- and post-release versions are discouraged. Increment the patch number
-  instead of issuing a post-release.
-
-When the major version is 0, the API is not considered stable, may change at
-any time, and the rules about when to increment the minor and patch version
-numbers are relaxed.
-
-Ordering across different metadata versions
--------------------------------------------
+Version ordering across different metadata versions
+---------------------------------------------------
 
 Metadata v1.0 (PEP 241) and metadata v1.1 (PEP 314) do not
-specify a standard version numbering or sorting scheme. This PEP does
+specify a standard version identification or ordering scheme. This PEP does
 not mandate any particular approach to handling such versions, but
-acknowledges that the de facto standard for sorting such versions is
+acknowledges that the de facto standard for ordering them is
 the scheme used by the ``pkg_resources`` component of ``setuptools``.
 
-For metadata v1.2 (PEP 345), the recommended sort order is defined in
-PEP 386.
+Software that automatically processes distribution metadata may either
+treat non-compliant version identifiers as an error, or attempt to normalize
+them to the standard scheme. This means that projects using non-compliant
+version identifiers may not be handled consistently across different tools,
+even when correctly publishing the earlier metadata versions.
 
-The best way for a publisher to get predictable ordering is to excuse
-non-compliant versions from sorting by hiding them on PyPI or by removing
-them from any private index that is being used. Otherwise a client
-may be restricted to using exact versions to get the correct or latest
-version of your project.
+Package developers can help ensure consistent automated handling by
+marking non-compliant versions as "hidden" on the Python Package Index
+(removing them is generally undesirable, as users may be depending on
+those specific versions being available).
+
+Package users may also wish to remove non-compliant versions from any
+private package indexes they control.
+
+For metadata v1.2 (PEP 345), the version ordering described in this PEP
+should be used in preference to the one defined in PEP 386.
+
 
 Version specifiers
 ==================
 
 A version specifier consists of a series of version clauses, separated by
 commas. Each version clause consists of an optional comparison operator
-followed by a version number. For example::
+followed by a version identifier. For example::
 
     0.9, >= 1.0, != 1.3.4, < 2.0
 
 Each version identifier must be in the standard format described in
 `Version scheme`_.
 
-Comparison operators must be one of ``<``, ``>``, ``<=``, ``>=``, ``==``
-``!=``, and ``~>``.
-
-When no comparison operator is provided, it is equivalent to using ``==``.
-
-The ``~>`` operator, "equal or greater in the last digit" is equivalent
-to a pair of version clauses::
-
-    ~> 2.3.3
-
-is equivalent to::
-
-    >= 2.3.3, < 2.4.0
-
 The comma (",") is equivalent to a logical **and** operator.
 
-Whitespace between a conditional operator and the following version number
-is optional, as is the whitespace around the commas.
+Comparison operators must be one of ``<``, ``>``, ``<=``, ``>=``, ``==``
+or ``!=``.
 
-Pre-releases of any kind (indicated by the presence of ``dev``, ``a``,
-``b``, ``c`` or ``rc`` in the version number) are implicitly excluded
-from all version specifiers, *unless* a pre-release version is explicitly
-mentioned in one of the clauses.
For example, this specifier implicitly -excludes all pre-releases of later versions:: +The ``==`` and ``!=`` operators are strict - in order to match, the +version supplied must exactly match the specified version, with no +additional trailing suffix. + +However, when no comparison operator is provided along with a version +identifier ``V``, it is equivalent to using the following pair of version +clauses:: + + >= V, < V+1 + +where ``V+1`` is the next version after ``V``, as determined by +incrementing the last numeric component in ``V`` (for example, if +``V == 1.0a3``, then ``V+1 == 1.0a4``, while if ``V == 1.0``, then +``V+1 == 1.1``). + +This approach makes it easy to depend on a particular release series +simply by naming it in a version specifier, without requiring any +additional annotation. For example, the following pairs of version +specifiers are equivalent:: + + 2 + >= 2, < 3 + + 3.3 + >= 3.3, < 3.4 + +Whitespace between a conditional operator and the following version +identifier is optional, as is the whitespace around the commas. + +Pre-releases of any kind, including developmental releases, are implicitly +excluded from all version specifiers, *unless* a pre-release or +developmental release is explicitly mentioned in one of the clauses.
For +example, this specifier implicitly excludes all pre-releases and development +releases of later versions:: >= 1.0 @@ -694,13 +969,13 @@ >= 1.0, != 1.0b2 >= 1.0, < 2.0.dev123 -Dependency resolution tools should use the above rules by default, but may -also allow users to request the following alternative behaviours: +Dependency resolution tools should use the above rules by default, but +should also allow users to request the following alternative behaviours: * accept already installed pre-releases for all version specifiers * retrieve and install available pre-releases for all version specifiers -Post releases and purely numeric releases receive no special treatment - +Post-releases and purely numeric releases receive no special treatment - they are always included unless explicitly excluded. Given the above rules, projects which include the ``.0`` suffix for the @@ -711,29 +986,29 @@ ``2.5.0``, will need to use an explicit clause like ``>= 2.5, < 2.5.1`` to refer specifically to that initial release. -Some Examples: +Some examples: -- ``Requires-Dist: zope.interface (3.1)``: any version that starts with 3.1, +* ``Requires-Dist: zope.interface (3.1)``: any version that starts with 3.1, excluding pre-releases. -- ``Requires-Dist: zope.interface (==3.1)``: equivalent to ``Requires-Dist: +* ``Requires-Dist: zope.interface (==3.1)``: equivalent to ``Requires-Dist: zope.interface (3.1)``. -- ``Requires-Dist: zope.interface (3.1.0)``: any version that starts with +* ``Requires-Dist: zope.interface (3.1.0)``: any version that starts with 3.1.0, excluding pre-releases. Since that particular project doesn't use more than 3 digits, it also means "only the 3.1.0 release". -- ``Requires-Python: 3``: Any Python 3 version, excluding pre-releases. -- ``Requires-Python: >=2.6,<3``: Any version of Python 2.6 or 2.7, including - post releases (if they were used for Python). It excludes pre releases of +* ``Requires-Python: 3``: Any Python 3 version, excluding pre-releases. 
+* ``Requires-Python: >=2.6,<3``: Any version of Python 2.6 or 2.7, including + post-releases (if they were used for Python). It excludes pre releases of Python 3. -- ``Requires-Python: 2.6.2``: Equivalent to ">=2.6.2,<2.6.3". So this includes +* ``Requires-Python: 2.6.2``: Equivalent to ">=2.6.2,<2.6.3". So this includes only Python 2.6.2. Of course, if Python was numbered with 4 digits, it would include all versions of the 2.6.2 series, excluding pre-releases. -- ``Requires-Python: 2.5``: Equivalent to ">=2.5,<2.6". -- ``Requires-Dist: zope.interface (3.1,!=3.1.3)``: any version that starts with - 3.1, excluding pre-releases of 3.1 *and* excluding any version that +* ``Requires-Python: 2.5``: Equivalent to ">=2.5,<2.6". +* ``Requires-Dist: zope.interface (3.1,!=3.1.3)``: any version that starts + with 3.1, excluding pre-releases of 3.1 *and* excluding any version that starts with "3.1.3". For this particular project, this means: "any version of the 3.1 series but not 3.1.3". This is equivalent to: ">=3.1,!=3.1.3,<3.2". -- ``Requires-Python: >=3.3a1``: Any version of Python 3.3+, including +* ``Requires-Python: >=3.3a1``: Any version of Python 3.3+, including pre-releases like 3.4a1. @@ -910,13 +1185,14 @@ The rationale for major changes is given in the following sections. + Standard encoding and other format clarifications ------------------------------------------------- Several aspects of the file format, including the expected file encoding, were underspecified in previous versions of the metadata standard. To -simplify the process of developing interoperable tools, these details are -now explicitly specified. +make it easier to develop interoperable tools, these details are now +explicitly specified. Changing the version scheme @@ -939,21 +1215,56 @@ Making this change should make it easier for affected existing projects to migrate to the latest version of the metadata standard. 
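The bare-specifier expansion described in the hunk above (``V`` is treated as ``>= V, < V+1``, with the last numeric component incremented) can be sketched as follows. This is only an illustrative model: it assumes plain release numbers and ignores pre/post/dev suffixes, which the full PEP grammar also handles.

```python
def next_version(v):
    # Increment the last numeric component: "3.3" -> "3.4", "2" -> "3".
    # Assumption: v is a plain dotted release number with no suffixes.
    parts = v.split(".")
    parts[-1] = str(int(parts[-1]) + 1)
    return ".".join(parts)

def expand_bare_specifier(v):
    # A bare version clause "V" is equivalent to ">= V, < V+1".
    return ">= {}, < {}".format(v, next_version(v))

print(expand_bare_specifier("3.3"))  # -> ">= 3.3, < 3.4"
print(expand_bare_specifier("2"))    # -> ">= 2, < 3"
```

This matches the equivalences given in the PEP text, e.g. ``3.3`` naming the whole 3.3 release series.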
-Furthermore, as the version scheme in use is dependent on the metadata +Another change to the version scheme is to allow single number +versions, similar to those used by non-Python projects like Mozilla +Firefox, Google Chrome and the Fedora Linux distribution. This is actually +expected to be more useful for version specifiers (allowing things like +the simple ``Requires-Python: 3`` rather than the more convoluted +``Requires-Python: >= 3.0, < 4``), but it is easier to allow it for both +version specifiers and release numbers, rather than splitting the +two definitions. + +Finally, as the version scheme in use is dependent on the metadata version, it was deemed simpler to merge the scheme definition directly into this PEP rather than continuing to maintain it as a separate PEP. This will also allow all of the distutils-specific elements of PEP 386 to finally be formally rejected. +A more opinionated description of the versioning scheme +----------------------------------------------------- + +As in PEP 386, the primary focus is on codifying existing practices to make +them more amenable to automation, rather than demanding that existing +projects make non-trivial changes to their workflow. However, the +standard scheme allows significantly more flexibility than is needed +for the vast majority of simple Python packages (which often don't even +need maintenance releases - many users are happy with needing to upgrade to a +new feature release to get bug fixes). + +For the benefit of novice developers, and for experienced developers +wishing to better understand the various use cases, the specification +now goes into much greater detail on the components of the defined +version scheme, including examples of how each component may be used +in practice. 
+ +The PEP also explicitly guides developers in the direction of +semantic versioning (without requiring it), and discourages the use of +several aspects of the full versioning scheme that have largely been +included in order to cover esoteric corner cases in the practices of +existing projects and in repackaging software for Linux distributions. + + Changing the interpretation of version specifiers ------------------------------------------------- The previous interpretation of version specifiers made it very easy to accidentally download a pre-release version of a dependency. This in turn made it difficult for developers to publish pre-release versions -of software to the Python Package Index, as such an action would lead -to users inadvertently downloaded pre-release software. +of software to the Python Package Index, as leaving the package set as +public would lead to users inadvertently downloading pre-release software, +while hiding it would defeat the purpose of publishing it for user +testing. The previous interpretation also excluded post-releases from some version specifiers for no adequately justified reason. @@ -963,8 +1274,8 @@ pre-release versions to be explicitly requested when needed. 
-Packaging, build and installation dependencies ----------------------------------------------- +Packaging and build and installation dependencies +------------------------------------------------- The new ``Setup-Requires-Dist`` field allows a distribution to indicate when a dependency is needed to package, build or install the distribution, rather -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 14:00:56 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 9 Feb 2013 14:00:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Clarify_a_behaviour_guideline?= =?utf-8?q?_in_PEP_426?= Message-ID: <3Z3D080tj8zRdH@mail.python.org> http://hg.python.org/peps/rev/2629b361cb41 changeset: 4723:2629b361cb41 user: Nick Coghlan date: Sat Feb 09 23:00:44 2013 +1000 summary: Clarify a behaviour guideline in PEP 426 files: pep-0426.txt | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -975,6 +975,9 @@ * accept already installed pre-releases for all version specifiers * retrieve and install available pre-releases for all version specifiers +Dependency resolution tools may also allow the above behaviour to be +controlled on a per-distribution basis. + Post-releases and purely numeric releases receive no special treatment - they are always included unless explicitly excluded. 
-- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 14:23:28 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 9 Feb 2013 14:23:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Rough_PEP_426_post_history=2C?= =?utf-8?q?_fix_header?= Message-ID: <3Z3DV84hYyzRGx@mail.python.org> http://hg.python.org/peps/rev/80483f935c80 changeset: 4724:80483f935c80 user: Nick Coghlan date: Sat Feb 09 23:23:15 2013 +1000 summary: Rough PEP 426 post history, fix header files: pep-0426.txt | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -10,6 +10,7 @@ Type: Standards Track Content-Type: text/x-rst Created: 30 Aug 2012 +Post-History: 14 Nov 2012, 5 Feb 2013, 7 Feb 2013, 9 Feb 2013 Abstract @@ -1235,7 +1236,7 @@ A more opinionated description of the versioning scheme ------------------------------------------------------ +------------------------------------------------------- As in PEP 386, the primary focus is on codifying existing practices to make them more amenable to automation, rather than demanding that existing -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 14:28:29 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 9 Feb 2013 14:28:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Tweak_a_couple_of_PEP_426_gui?= =?utf-8?q?delines?= Message-ID: <3Z3Dbx4kPpzQGN@mail.python.org> http://hg.python.org/peps/rev/07720cc06818 changeset: 4725:07720cc06818 user: Nick Coghlan date: Sat Feb 09 23:28:20 2013 +1000 summary: Tweak a couple of PEP 426 guidelines files: pep-0426.txt | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -755,9 +755,9 @@ Creating developmental releases of pre-releases is strongly discouraged, as it makes the version identifier difficult to parse for human 
readers. In general, it is substantially clearer to simply create - a additional pre-releases by incrementing the numeric component. + an additional pre-releases by incrementing the numeric component. - Developmental releases of post-releases are also generally discouraged, + Developmental releases of post-releases are also strongly discouraged, but they may be appropriate for projects which use the post-release notation for full maintenance releases which may include code changes. -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 14:29:35 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 9 Feb 2013 14:29:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Previous_typo_fix_was_incorre?= =?utf-8?q?ct?= Message-ID: <3Z3DdC3cYRzQGN@mail.python.org> http://hg.python.org/peps/rev/0f8af659bae7 changeset: 4726:0f8af659bae7 user: Nick Coghlan date: Sat Feb 09 23:29:27 2013 +1000 summary: Previous typo fix was incorrect files: pep-0426.txt | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -755,7 +755,7 @@ Creating developmental releases of pre-releases is strongly discouraged, as it makes the version identifier difficult to parse for human readers. In general, it is substantially clearer to simply create - an additional pre-releases by incrementing the numeric component. + additional pre-releases by incrementing the numeric component. 
Developmental releases of post-releases are also strongly discouraged, but they may be appropriate for projects which use the post-release -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 14:39:18 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 9 Feb 2013 14:39:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Revert_unintended_change_to_a?= =?utf-8?q?_header_in_PEP_426?= Message-ID: <3Z3DrQ5Hl1zSgh@mail.python.org> http://hg.python.org/peps/rev/cb6f1b4791a9 changeset: 4727:cb6f1b4791a9 user: Nick Coghlan date: Sat Feb 09 23:39:08 2013 +1000 summary: Revert unintended change to a header in PEP 426 files: pep-0426.txt | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -1278,8 +1278,8 @@ pre-release versions to be explicitly requested when needed. -Packaging and build and installation dependencies -------------------------------------------------- +Packaging, build and installation dependencies +---------------------------------------------- The new ``Setup-Requires-Dist`` field allows a distribution to indicate when a dependency is needed to package, build or install the distribution, rather -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 14:58:01 2013 From: python-checkins at python.org (benjamin.peterson) Date: Sat, 9 Feb 2013 14:58:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogU3RyaW5nSU8uU3Ry?= =?utf-8?q?ingIO_-=3E_io=2EStringIO_=28closes_=2317168=29?= Message-ID: <3Z3FG14SHFzSZ2@mail.python.org> http://hg.python.org/cpython/rev/474296d6d4a1 changeset: 82090:474296d6d4a1 branch: 3.3 parent: 82087:f36d8ba4eeef user: Benjamin Peterson date: Sat Feb 09 08:57:28 2013 -0500 summary: StringIO.StringIO -> io.StringIO (closes #17168) files: Doc/library/test.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git 
a/Doc/library/test.rst b/Doc/library/test.rst --- a/Doc/library/test.rst +++ b/Doc/library/test.rst @@ -364,9 +364,9 @@ .. function:: captured_stdout() - A context manager that runs the :keyword:`with` statement body using - a :class:`StringIO.StringIO` object as sys.stdout. That object can be - retrieved using the ``as`` clause of the :keyword:`with` statement. + A context manager that runs the :keyword:`with` statement body using a + :class:`io.StringIO` object as sys.stdout. That object can be retrieved + using the ``as`` clause of the :keyword:`with` statement. Example use:: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 14:58:03 2013 From: python-checkins at python.org (benjamin.peterson) Date: Sat, 9 Feb 2013 14:58:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogbWVyZ2UgMy4zICgjMTcxNjgp?= Message-ID: <3Z3FG30C8JzSc6@mail.python.org> http://hg.python.org/cpython/rev/87e95b853be2 changeset: 82091:87e95b853be2 parent: 82088:f1a13191f0c8 parent: 82090:474296d6d4a1 user: Benjamin Peterson date: Sat Feb 09 08:57:53 2013 -0500 summary: merge 3.3 (#17168) files: Doc/library/test.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/test.rst b/Doc/library/test.rst --- a/Doc/library/test.rst +++ b/Doc/library/test.rst @@ -364,9 +364,9 @@ .. function:: captured_stdout() - A context manager that runs the :keyword:`with` statement body using - a :class:`StringIO.StringIO` object as sys.stdout. That object can be - retrieved using the ``as`` clause of the :keyword:`with` statement. + A context manager that runs the :keyword:`with` statement body using a + :class:`io.StringIO` object as sys.stdout. That object can be retrieved + using the ``as`` clause of the :keyword:`with` statement. 
Example use:: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 15:29:01 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 9 Feb 2013 15:29:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Use_the_right_term_in_PEP_426?= Message-ID: <3Z3Fxn6WN0zSgV@mail.python.org> http://hg.python.org/peps/rev/e85481d9e6ef changeset: 4728:e85481d9e6ef user: Nick Coghlan date: Sun Feb 10 00:28:52 2013 +1000 summary: Use the right term in PEP 426 files: pep-0426.txt | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -897,12 +897,12 @@ version identifiers may not be handled consistently across different tools, even when correctly publishing the earlier metadata versions. -Package developers can help ensure consistent automated handling by +Distribution developers can help ensure consistent automated handling by marking non-compliant versions as "hidden" on the Python Package Index (removing them is generally undesirable, as users may be depending on those specific versions being available). -Package users may also wish to remove non-compliant versions from any +Distribution users may also wish to remove non-compliant versions from any private package indexes they control. 
For metadata v1.2 (PEP 345), the version ordering described in this PEP -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 9 17:04:33 2013 From: python-checkins at python.org (christian.heimes) Date: Sat, 9 Feb 2013 17:04:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_add_proper_dep?= =?utf-8?q?endencies_on_expat_headers_and_sources?= Message-ID: <3Z3J410yk9zQHS@mail.python.org> http://hg.python.org/cpython/rev/ce411fd690fd changeset: 82092:ce411fd690fd branch: 3.2 parent: 82086:dfc6902b63d7 user: Christian Heimes date: Sat Feb 09 17:02:06 2013 +0100 summary: add proper dependencies on expat headers and sources files: setup.py | 18 +++++++++++++++++- 1 files changed, 17 insertions(+), 1 deletions(-) diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1309,6 +1309,7 @@ define_macros = [] expat_lib = ['expat'] expat_sources = [] + expat_depends = [] else: expat_inc = [os.path.join(os.getcwd(), srcdir, 'Modules', 'expat')] define_macros = [ @@ -1318,12 +1319,25 @@ expat_sources = ['expat/xmlparse.c', 'expat/xmlrole.c', 'expat/xmltok.c'] + expat_depends = ['expat/ascii.h', + 'expat/asciitab.h', + 'expat/expat.h', + 'expat/expat_config.h', + 'expat/expat_external.h', + 'expat/internal.h', + 'expat/latin1tab.h', + 'expat/utf8tab.h', + 'expat/xmlrole.h', + 'expat/xmltok.h', + 'expat/xmltok_impl.h' + ] exts.append(Extension('pyexpat', define_macros = define_macros, include_dirs = expat_inc, libraries = expat_lib, - sources = ['pyexpat.c'] + expat_sources + sources = ['pyexpat.c'] + expat_sources, + depends = expat_depends, )) # Fredrik Lundh's cElementTree module. 
Note that this also @@ -1336,6 +1350,8 @@ include_dirs = expat_inc, libraries = expat_lib, sources = ['_elementtree.c'], + depends = ['pyexpat.c'] + expat_sources + + expat_depends, )) else: missing.append('_elementtree') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 17:04:34 2013 From: python-checkins at python.org (christian.heimes) Date: Sat, 9 Feb 2013 17:04:34 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_add_proper_dependencies_on_expat_headers_and_sources?= Message-ID: <3Z3J423WS5zSWy@mail.python.org> http://hg.python.org/cpython/rev/f2c2846f0c2f changeset: 82093:f2c2846f0c2f branch: 3.3 parent: 82090:474296d6d4a1 parent: 82092:ce411fd690fd user: Christian Heimes date: Sat Feb 09 17:02:16 2013 +0100 summary: add proper dependencies on expat headers and sources files: setup.py | 18 +++++++++++++++++- 1 files changed, 17 insertions(+), 1 deletions(-) diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1394,6 +1394,7 @@ define_macros = [] expat_lib = ['expat'] expat_sources = [] + expat_depends = [] else: expat_inc = [os.path.join(os.getcwd(), srcdir, 'Modules', 'expat')] define_macros = [ @@ -1403,12 +1404,25 @@ expat_sources = ['expat/xmlparse.c', 'expat/xmlrole.c', 'expat/xmltok.c'] + expat_depends = ['expat/ascii.h', + 'expat/asciitab.h', + 'expat/expat.h', + 'expat/expat_config.h', + 'expat/expat_external.h', + 'expat/internal.h', + 'expat/latin1tab.h', + 'expat/utf8tab.h', + 'expat/xmlrole.h', + 'expat/xmltok.h', + 'expat/xmltok_impl.h' + ] exts.append(Extension('pyexpat', define_macros = define_macros, include_dirs = expat_inc, libraries = expat_lib, - sources = ['pyexpat.c'] + expat_sources + sources = ['pyexpat.c'] + expat_sources, + depends = expat_depends, )) # Fredrik Lundh's cElementTree module. 
Note that this also @@ -1421,6 +1435,8 @@ include_dirs = expat_inc, libraries = expat_lib, sources = ['_elementtree.c'], + depends = ['pyexpat.c'] + expat_sources + + expat_depends, )) else: missing.append('_elementtree') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 17:04:35 2013 From: python-checkins at python.org (christian.heimes) Date: Sat, 9 Feb 2013 17:04:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_add_proper_dependencies_on_expat_headers_and_sources?= Message-ID: <3Z3J436LnjzSXy@mail.python.org> http://hg.python.org/cpython/rev/80320773d755 changeset: 82094:80320773d755 parent: 82091:87e95b853be2 parent: 82093:f2c2846f0c2f user: Christian Heimes date: Sat Feb 09 17:02:24 2013 +0100 summary: add proper dependencies on expat headers and sources files: setup.py | 18 +++++++++++++++++- 1 files changed, 17 insertions(+), 1 deletions(-) diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1405,6 +1405,7 @@ define_macros = [] expat_lib = ['expat'] expat_sources = [] + expat_depends = [] else: expat_inc = [os.path.join(os.getcwd(), srcdir, 'Modules', 'expat')] define_macros = [ @@ -1414,12 +1415,25 @@ expat_sources = ['expat/xmlparse.c', 'expat/xmlrole.c', 'expat/xmltok.c'] + expat_depends = ['expat/ascii.h', + 'expat/asciitab.h', + 'expat/expat.h', + 'expat/expat_config.h', + 'expat/expat_external.h', + 'expat/internal.h', + 'expat/latin1tab.h', + 'expat/utf8tab.h', + 'expat/xmlrole.h', + 'expat/xmltok.h', + 'expat/xmltok_impl.h' + ] exts.append(Extension('pyexpat', define_macros = define_macros, include_dirs = expat_inc, libraries = expat_lib, - sources = ['pyexpat.c'] + expat_sources + sources = ['pyexpat.c'] + expat_sources, + depends = expat_depends, )) # Fredrik Lundh's cElementTree module. 
Note that this also @@ -1432,6 +1446,8 @@ include_dirs = expat_inc, libraries = expat_lib, sources = ['_elementtree.c'], + depends = ['pyexpat.c'] + expat_sources + + expat_depends, )) else: missing.append('_elementtree') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 17:04:37 2013 From: python-checkins at python.org (christian.heimes) Date: Sat, 9 Feb 2013 17:04:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_add_proper_dep?= =?utf-8?q?endencies_on_expat_headers_and_sources?= Message-ID: <3Z3J451xmxzScB@mail.python.org> http://hg.python.org/cpython/rev/bf43e8c30a83 changeset: 82095:bf43e8c30a83 branch: 2.7 parent: 82089:a025b04332fe user: Christian Heimes date: Sat Feb 09 17:02:06 2013 +0100 summary: add proper dependencies on expat headers and sources files: setup.py | 18 +++++++++++++++++- 1 files changed, 17 insertions(+), 1 deletions(-) diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1449,6 +1449,7 @@ define_macros = [] expat_lib = ['expat'] expat_sources = [] + expat_depends = [] else: expat_inc = [os.path.join(os.getcwd(), srcdir, 'Modules', 'expat')] define_macros = [ @@ -1458,12 +1459,25 @@ expat_sources = ['expat/xmlparse.c', 'expat/xmlrole.c', 'expat/xmltok.c'] + expat_depends = ['expat/ascii.h', + 'expat/asciitab.h', + 'expat/expat.h', + 'expat/expat_config.h', + 'expat/expat_external.h', + 'expat/internal.h', + 'expat/latin1tab.h', + 'expat/utf8tab.h', + 'expat/xmlrole.h', + 'expat/xmltok.h', + 'expat/xmltok_impl.h' + ] exts.append(Extension('pyexpat', define_macros = define_macros, include_dirs = expat_inc, libraries = expat_lib, - sources = ['pyexpat.c'] + expat_sources + sources = ['pyexpat.c'] + expat_sources, + depends = expat_depends, )) # Fredrik Lundh's cElementTree module. 
Note that this also @@ -1476,6 +1490,8 @@ include_dirs = expat_inc, libraries = expat_lib, sources = ['_elementtree.c'], + depends = ['pyexpat.c'] + expat_sources + + expat_depends, )) else: missing.append('_elementtree') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 19:17:06 2013 From: python-checkins at python.org (r.david.murray) Date: Sat, 9 Feb 2013 19:17:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE2NTY0OiB0ZXN0?= =?utf-8?q?_to_confirm_behavior_that_regressed_in_python3=2E?= Message-ID: <3Z3M0y6j2FzRXP@mail.python.org> http://hg.python.org/cpython/rev/30f92600df9d changeset: 82096:30f92600df9d branch: 2.7 user: R David Murray date: Sat Feb 09 12:53:29 2013 -0500 summary: #16564: test to confirm behavior that regressed in python3. Also add running of test_email_renamed to the email regrtest. It contains tests that the base email/tests/test_email.py does not, which I discovered while trying to backport this test for confirmation of the behavior. files: Lib/email/test/test_email_renamed.py | 15 +++++++++++++++ Lib/test/test_email.py | 2 ++ Misc/NEWS | 4 ++++ 3 files changed, 21 insertions(+), 0 deletions(-) diff --git a/Lib/email/test/test_email_renamed.py b/Lib/email/test/test_email_renamed.py --- a/Lib/email/test/test_email_renamed.py +++ b/Lib/email/test/test_email_renamed.py @@ -994,6 +994,21 @@ eq(msg.get_payload(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytes) + def test_body_with_encode_noop(self): + # Issue 16564: This does not produce an RFC valid message, since to be + # valid it should have a CTE of binary. But the below works, and is + # documented as working this way. 
+ bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_noop) + self.assertEqual(msg.get_payload(), bytesdata) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + s = StringIO() + g = Generator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_string(wireform) + self.assertEqual(msg.get_payload(), bytesdata) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) # Test the basic MIMEText class diff --git a/Lib/test/test_email.py b/Lib/test/test_email.py --- a/Lib/test/test_email.py +++ b/Lib/test/test_email.py @@ -3,10 +3,12 @@ # The specific tests now live in Lib/email/test from email.test.test_email import suite +from email.test.test_email_renamed import suite as suite2 from test import test_support def test_main(): test_support.run_unittest(suite()) + test_support.run_unittest(suite2()) if __name__ == '__main__': test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -748,6 +748,10 @@ Tests ----- +- We now run both test_email.py and test_email_renamed.py when running the + test_email regression test. test_email_renamed contains some tests that + test_email does not. + - Issue #17041: Fix testing when Python is configured with the --without-doc-strings option. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 19:17:08 2013 From: python-checkins at python.org (r.david.murray) Date: Sat, 9 Feb 2013 19:17:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE2NTY0OiBGaXgg?= =?utf-8?q?regression_in_use_of_encoders=2Eencode=5Fnoop_with_binary_data?= =?utf-8?q?=2E?= Message-ID: <3Z3M102XbqzRXP@mail.python.org> http://hg.python.org/cpython/rev/a1a04f76d08c changeset: 82097:a1a04f76d08c branch: 3.2 parent: 82092:ce411fd690fd user: R David Murray date: Sat Feb 09 13:02:58 2013 -0500 summary: #16564: Fix regression in use of encoders.encode_noop with binary data. 
files: Lib/email/encoders.py | 6 ++++++ Lib/email/generator.py | 3 +++ Lib/email/test/test_email.py | 16 ++++++++++++++++ Misc/NEWS | 3 +++ 4 files changed, 28 insertions(+), 0 deletions(-) diff --git a/Lib/email/encoders.py b/Lib/email/encoders.py --- a/Lib/email/encoders.py +++ b/Lib/email/encoders.py @@ -76,3 +76,9 @@ def encode_noop(msg): """Do nothing.""" + # Well, not quite *nothing*: in Python3 we have to turn bytes into a string + # in our internal surrogateescaped form in order to keep the model + # consistent. + orig = msg.get_payload() + if not isinstance(orig, str): + msg.set_payload(orig.decode('ascii', 'surrogateescape')) diff --git a/Lib/email/generator.py b/Lib/email/generator.py --- a/Lib/email/generator.py +++ b/Lib/email/generator.py @@ -397,6 +397,9 @@ else: super(BytesGenerator,self)._handle_text(msg) + # Default body handler + _writeBody = _handle_text + @classmethod def _compile_re(cls, s, flags): return re.compile(s.encode('ascii'), flags) diff --git a/Lib/email/test/test_email.py b/Lib/email/test/test_email.py --- a/Lib/email/test/test_email.py +++ b/Lib/email/test/test_email.py @@ -1438,6 +1438,22 @@ eq(msg.get_payload().strip(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytesdata) + def test_body_with_encode_noop(self): + # Issue 16564: This does not produce an RFC valid message, since to be + # valid it should have a CTE of binary. But the below works in + # Python2, and is documented as working this way. + bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_noop) + # Treated as a string, this will be invalid code points. 
+ self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + s = BytesIO() + g = BytesGenerator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_bytes(wireform) + self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) # Test the basic MIMEText class diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -215,6 +215,9 @@ Library ------- +- Issue #16564: Fixed regression relative to Python2 in the operation of + email.encoders.encode_noop when used with binary data. + - Issue #10355: In SpooledTemporaryFile class mode, name, encoding and newlines properties now work for unrolled files. Obsoleted and never working on Python 3 xreadline method now removed. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 19:17:09 2013 From: python-checkins at python.org (r.david.murray) Date: Sat, 9 Feb 2013 19:17:09 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge=3A_=2316564=3A_Fix_regression_in_use_of_encoders=2Eencod?= =?utf-8?q?e=5Fnoop_with_binary_data=2E?= Message-ID: <3Z3M115JvVzScY@mail.python.org> http://hg.python.org/cpython/rev/2b1edefc1e99 changeset: 82098:2b1edefc1e99 branch: 3.3 parent: 82093:f2c2846f0c2f parent: 82097:a1a04f76d08c user: R David Murray date: Sat Feb 09 13:10:54 2013 -0500 summary: Merge: #16564: Fix regression in use of encoders.encode_noop with binary data. 
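As background for the `encoders.py` hunk in the #16564 patches, the `surrogateescape` round-trip that `encode_noop` now relies on can be sketched in isolation (a minimal illustration, not part of the patch itself):

```python
# Background sketch for the encode_noop fix: bytes that are not valid
# ASCII are smuggled into a str via the 'surrogateescape' error
# handler, which is lossless in both directions, keeping the email
# package's str-based model consistent.
bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff'

as_text = bytesdata.decode('ascii', 'surrogateescape')
round_tripped = as_text.encode('ascii', 'surrogateescape')

# Each byte 0xNN maps to the lone surrogate U+DCNN.
print(as_text == '\udcfa\udcfb\udcfc\udcfd\udcfe\udcff')  # True
print(round_tripped == bytesdata)                         # True
```

This is why `msg.get_payload(decode=True)` can still recover the exact original bytes even though the payload is stored as a `str`.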
files: Lib/email/encoders.py | 6 ++++++ Lib/email/generator.py | 3 +++ Lib/test/test_email/test_email.py | 16 ++++++++++++++++ Misc/NEWS | 3 +++ 4 files changed, 28 insertions(+), 0 deletions(-) diff --git a/Lib/email/encoders.py b/Lib/email/encoders.py --- a/Lib/email/encoders.py +++ b/Lib/email/encoders.py @@ -76,3 +76,9 @@ def encode_noop(msg): """Do nothing.""" + # Well, not quite *nothing*: in Python3 we have to turn bytes into a string + # in our internal surrogateescaped form in order to keep the model + # consistent. + orig = msg.get_payload() + if not isinstance(orig, str): + msg.set_payload(orig.decode('ascii', 'surrogateescape')) diff --git a/Lib/email/generator.py b/Lib/email/generator.py --- a/Lib/email/generator.py +++ b/Lib/email/generator.py @@ -406,6 +406,9 @@ else: super(BytesGenerator,self)._handle_text(msg) + # Default body handler + _writeBody = _handle_text + @classmethod def _compile_re(cls, s, flags): return re.compile(s.encode('ascii'), flags) diff --git a/Lib/test/test_email/test_email.py b/Lib/test/test_email/test_email.py --- a/Lib/test/test_email/test_email.py +++ b/Lib/test/test_email/test_email.py @@ -1440,6 +1440,22 @@ eq(msg.get_payload().strip(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytesdata) + def test_body_with_encode_noop(self): + # Issue 16564: This does not produce an RFC valid message, since to be + # valid it should have a CTE of binary. But the below works in + # Python2, and is documented as working this way. + bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_noop) + # Treated as a string, this will be invalid code points. 
+ self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + s = BytesIO() + g = BytesGenerator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_bytes(wireform) + self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) # Test the basic MIMEText class diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -169,6 +169,9 @@ Library ------- +- Issue #16564: Fixed regression relative to Python2 in the operation of + email.encoders.encode_noop when used with binary data. + - Issue #10355: In SpooledTemporaryFile class mode, name, encoding and newlines properties now work for unrolled files. Obsoleted and never working on Python 3 xreadline method now removed. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 19:17:11 2013 From: python-checkins at python.org (r.david.murray) Date: Sat, 9 Feb 2013 19:17:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=2316564=3A_Fix_regression_in_use_of_encoders?= =?utf-8?q?=2Eencode=5Fnoop_with_binary_data=2E?= Message-ID: <3Z3M131b8qzSXm@mail.python.org> http://hg.python.org/cpython/rev/5a0478bd5f11 changeset: 82099:5a0478bd5f11 parent: 82094:80320773d755 parent: 82098:2b1edefc1e99 user: R David Murray date: Sat Feb 09 13:13:14 2013 -0500 summary: Merge: #16564: Fix regression in use of encoders.encode_noop with binary data. 
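The one-line `_writeBody = _handle_text` addition in `generator.py` plugs into the Generator's by-content-type handler dispatch. A simplified, hypothetical dispatcher (names other than `_writeBody` are illustrative, not the real `email.generator` internals) shows why the alias acts as a default body handler:

```python
# Hypothetical, simplified version of the Generator dispatch that the
# "_writeBody = _handle_text" line plugs into: handlers are looked up
# by content type, and _writeBody is the fallback.
class MiniGenerator:
    def _handle_text(self, payload):
        return "text:" + payload

    # Default body handler -- same class-attribute aliasing trick
    # as in the patch.
    _writeBody = _handle_text

    def _dispatch(self, maintype, subtype, payload):
        # Look for a type-specific handler first, e.g. _handle_text_plain.
        meth = getattr(self, '_handle_%s_%s' % (maintype, subtype), None)
        if meth is None:
            meth = self._writeBody
        return meth(payload)

g = MiniGenerator()
# No _handle_application_octet_stream exists, so _writeBody is used.
print(g._dispatch('application', 'octet-stream', 'abc'))  # text:abc
```

Without the alias, `BytesGenerator` had no default body handler, which is what broke `encode_noop` payloads.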
files: Lib/email/encoders.py | 6 ++++++ Lib/email/generator.py | 3 +++ Lib/test/test_email/test_email.py | 16 ++++++++++++++++ Misc/NEWS | 3 +++ 4 files changed, 28 insertions(+), 0 deletions(-) diff --git a/Lib/email/encoders.py b/Lib/email/encoders.py --- a/Lib/email/encoders.py +++ b/Lib/email/encoders.py @@ -76,3 +76,9 @@ def encode_noop(msg): """Do nothing.""" + # Well, not quite *nothing*: in Python3 we have to turn bytes into a string + # in our internal surrogateescaped form in order to keep the model + # consistent. + orig = msg.get_payload() + if not isinstance(orig, str): + msg.set_payload(orig.decode('ascii', 'surrogateescape')) diff --git a/Lib/email/generator.py b/Lib/email/generator.py --- a/Lib/email/generator.py +++ b/Lib/email/generator.py @@ -406,6 +406,9 @@ else: super(BytesGenerator,self)._handle_text(msg) + # Default body handler + _writeBody = _handle_text + @classmethod def _compile_re(cls, s, flags): return re.compile(s.encode('ascii'), flags) diff --git a/Lib/test/test_email/test_email.py b/Lib/test/test_email/test_email.py --- a/Lib/test/test_email/test_email.py +++ b/Lib/test/test_email/test_email.py @@ -1440,6 +1440,22 @@ eq(msg.get_payload().strip(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytesdata) + def test_body_with_encode_noop(self): + # Issue 16564: This does not produce an RFC valid message, since to be + # valid it should have a CTE of binary. But the below works in + # Python2, and is documented as working this way. + bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_noop) + # Treated as a string, this will be invalid code points. 
+ self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + s = BytesIO() + g = BytesGenerator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_bytes(wireform) + self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) # Test the basic MIMEText class diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -241,6 +241,9 @@ Library ------- +- Issue #16564: Fixed regression relative to Python2 in the operation of + email.encoders.encode_noop when used with binary data. + - Issue #10355: In SpooledTemporaryFile class mode, name, encoding and newlines properties now work for unrolled files. Obsoleted and never working on Python 3 xreadline method now removed. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 19:25:30 2013 From: python-checkins at python.org (r.david.murray) Date: Sat, 9 Feb 2013 19:25:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MTY2OiBmaXgg?= =?utf-8?q?=5Fdummy=5Fthread_import_example=2E?= Message-ID: <3Z3MBf1wzszSXj@mail.python.org> http://hg.python.org/cpython/rev/6af3afbc7211 changeset: 82100:6af3afbc7211 branch: 3.2 parent: 82097:a1a04f76d08c user: R David Murray date: Sat Feb 09 13:23:46 2013 -0500 summary: #17166: fix _dummy_thread import example. Report and patch by Berker Peksag. files: Doc/library/_dummy_thread.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/_dummy_thread.rst b/Doc/library/_dummy_thread.rst --- a/Doc/library/_dummy_thread.rst +++ b/Doc/library/_dummy_thread.rst @@ -17,7 +17,7 @@ try: import _thread except ImportError: - import dummy_thread as _thread + import _dummy_thread as _thread Be careful to not use this module where deadlock might occur from a thread being created that blocks waiting for another thread to be created. 
This often occurs -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 19:25:31 2013 From: python-checkins at python.org (r.david.murray) Date: Sat, 9 Feb 2013 19:25:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge=3A_=2317166=3A_fix_=5Fdummy=5Fthread_import_example=2E?= Message-ID: <3Z3MBg4rmrzSY9@mail.python.org> http://hg.python.org/cpython/rev/dfefae8df4f7 changeset: 82101:dfefae8df4f7 branch: 3.3 parent: 82098:2b1edefc1e99 parent: 82100:6af3afbc7211 user: R David Murray date: Sat Feb 09 13:24:44 2013 -0500 summary: Merge: #17166: fix _dummy_thread import example. Report and patch by Berker Peksag. files: Doc/library/_dummy_thread.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/_dummy_thread.rst b/Doc/library/_dummy_thread.rst --- a/Doc/library/_dummy_thread.rst +++ b/Doc/library/_dummy_thread.rst @@ -17,7 +17,7 @@ try: import _thread except ImportError: - import dummy_thread as _thread + import _dummy_thread as _thread Be careful to not use this module where deadlock might occur from a thread being created that blocks waiting for another thread to be created. This often occurs -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 19:25:33 2013 From: python-checkins at python.org (r.david.murray) Date: Sat, 9 Feb 2013 19:25:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=2317166=3A_fix_=5Fdummy=5Fthread_import_exampl?= =?utf-8?q?e=2E?= Message-ID: <3Z3MBj0ZQRzSfv@mail.python.org> http://hg.python.org/cpython/rev/c4512797b879 changeset: 82102:c4512797b879 parent: 82099:5a0478bd5f11 parent: 82101:dfefae8df4f7 user: R David Murray date: Sat Feb 09 13:25:12 2013 -0500 summary: Merge: #17166: fix _dummy_thread import example. Report and patch by Berker Peksag. 
files: Doc/library/_dummy_thread.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/_dummy_thread.rst b/Doc/library/_dummy_thread.rst --- a/Doc/library/_dummy_thread.rst +++ b/Doc/library/_dummy_thread.rst @@ -17,7 +17,7 @@ try: import _thread except ImportError: - import dummy_thread as _thread + import _dummy_thread as _thread Be careful to not use this module where deadlock might occur from a thread being created that blocks waiting for another thread to be created. This often occurs -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 20:21:16 2013 From: python-checkins at python.org (raymond.hettinger) Date: Sat, 9 Feb 2013 20:21:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Keep_IDLE_from?= =?utf-8?q?_displaying_spurious_SystemExit_tracebacks?= Message-ID: <3Z3NR03jGvzScX@mail.python.org> http://hg.python.org/cpython/rev/872a3aca2120 changeset: 82103:872a3aca2120 branch: 2.7 parent: 82096:30f92600df9d user: Raymond Hettinger date: Sat Feb 09 14:20:55 2013 -0500 summary: Keep IDLE from displaying spurious SystemExit tracebacks when running scripts that terminated by raising SystemExit (i.e. unittest and turtledemo). 
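The IDLE change just summarized catches `SystemExit` around the user-code `exec` so the shell survives. A minimal, hypothetical sketch of that pattern (the real `idlelib/run.py` is Python 2 and more involved):

```python
import sys

# Minimal sketch of the pattern adopted in idlelib/run.py: SystemExit
# raised by user code is swallowed so the interactive shell survives
# instead of printing a spurious traceback.
def run_user_code(code, namespace):
    try:
        exec(code, namespace)
    except SystemExit:
        # Scripts that raise SystemExit just return to the prompt.
        pass
    except Exception:
        # Real errors still get a traceback.
        sys.excepthook(*sys.exc_info())

ns = {}
run_user_code("raise SystemExit", ns)  # swallowed, shell keeps running
run_user_code("x = 1", ns)
print(ns["x"])  # 1
```

This is exactly the behavior unittest and turtledemo trip over: they terminate by raising `SystemExit`, which should not be reported as an error.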
files: Lib/idlelib/run.py | 5 ++++- Misc/NEWS | 3 +++ 2 files changed, 7 insertions(+), 1 deletions(-) diff --git a/Lib/idlelib/run.py b/Lib/idlelib/run.py --- a/Lib/idlelib/run.py +++ b/Lib/idlelib/run.py @@ -301,11 +301,14 @@ exec code in self.locals finally: interruptable = False + except SystemExit: + # Scripts that raise SystemExit should just + # return to the interactive prompt + pass except: self.usr_exc_info = sys.exc_info() if quitting: exit() - # even print a user code SystemExit exception, continue print_exception() jit = self.rpchandler.console.getvar("<>") if jit: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -205,6 +205,9 @@ - Issue #7358: cStringIO.StringIO now supports writing to and reading from a stream larger than 2 GiB on 64-bit systems. +- IDLE was displaying spurious SystemExit tracebacks when running scripts + that terminated by raising SystemExit (i.e. unittest and turtledemo). + - Issue #10355: In SpooledTemporaryFile class mode and name properties and xreadlines method now work for unrolled files. encoding and newlines properties now removed as they have no sense and always produced -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 21:28:20 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 21:28:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MTY5?= =?utf-8?q?=3A_Restore_errno_in_tempfile_exceptions=2E?= Message-ID: <3Z3PwN0xkLzQ2D@mail.python.org> http://hg.python.org/cpython/rev/11eaa61124c2 changeset: 82104:11eaa61124c2 branch: 3.3 parent: 82101:dfefae8df4f7 user: Serhiy Storchaka date: Sat Feb 09 22:25:49 2013 +0200 summary: Issue #17169: Restore errno in tempfile exceptions. 
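What "restore errno" means in the #17169 patch below: `OSError` subclasses only populate `.errno` when constructed with the two-argument `(errno, message)` form, which is what the patch switches `tempfile` to. A quick illustration:

```python
import errno

# OSError subclasses set .errno only for the two-argument
# (errno, message) constructor form used by the patch.
before = FileNotFoundError("No usable temporary directory found")
after = FileNotFoundError(errno.ENOENT,
                          "No usable temporary directory found")

print(before.errno)                 # None
print(after.errno == errno.ENOENT)  # True
print(after.strerror)               # No usable temporary directory found
```

Callers that inspect `exc.errno` (as the updated test does) therefore need the two-argument form.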
files: Lib/tempfile.py | 14 ++++++++++---- Lib/test/test_tempfile.py | 4 +++- 2 files changed, 13 insertions(+), 5 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -31,6 +31,7 @@ import sys as _sys import io as _io import os as _os +import errno as _errno from random import Random as _Random try: @@ -181,7 +182,9 @@ pass except OSError: break # no point trying more names in this directory - raise FileNotFoundError("No usable temporary directory found in %s" % dirlist) + raise FileNotFoundError(_errno.ENOENT, + "No usable temporary directory found in %s" % + dirlist) _name_sequence = None @@ -214,7 +217,8 @@ except FileExistsError: continue # try again - raise FileExistsError("No usable temporary file name found") + raise FileExistsError(_errno.EEXIST, + "No usable temporary file name found") # User visible interfaces. @@ -301,7 +305,8 @@ except FileExistsError: continue # try again - raise FileExistsError("No usable temporary directory name found") + raise FileExistsError(_errno.EEXIST, + "No usable temporary directory name found") def mktemp(suffix="", prefix=template, dir=None): """User-callable function to return a unique temporary file name. The @@ -330,7 +335,8 @@ if not _exists(file): return file - raise FileExistsError("No usable temporary filename found") + raise FileExistsError(_errno.EEXIST, + "No usable temporary filename found") class _TemporaryFileWrapper: diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -1,5 +1,6 @@ # tempfile.py unit tests. 
import tempfile +import errno import os import signal import sys @@ -963,8 +964,9 @@ # (noted as part of Issue #10188) with tempfile.TemporaryDirectory() as nonexistent: pass - with self.assertRaises(os.error): + with self.assertRaises(FileNotFoundError) as cm: tempfile.TemporaryDirectory(dir=nonexistent) + self.assertEqual(cm.exception.errno, errno.ENOENT) def test_explicit_cleanup(self): # A TemporaryDirectory is deleted when cleaned up -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 21:28:21 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 21:28:21 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317169=3A_Restore_errno_in_tempfile_exceptions?= =?utf-8?q?=2E?= Message-ID: <3Z3PwP3cnXzSc6@mail.python.org> http://hg.python.org/cpython/rev/fd3e3059381a changeset: 82105:fd3e3059381a parent: 82102:c4512797b879 parent: 82104:11eaa61124c2 user: Serhiy Storchaka date: Sat Feb 09 22:27:23 2013 +0200 summary: Issue #17169: Restore errno in tempfile exceptions. files: Lib/tempfile.py | 14 ++++++++++---- Lib/test/test_tempfile.py | 4 +++- 2 files changed, 13 insertions(+), 5 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -31,6 +31,7 @@ import sys as _sys import io as _io import os as _os +import errno as _errno from random import Random as _Random try: @@ -183,7 +184,9 @@ pass except OSError: break # no point trying more names in this directory - raise FileNotFoundError("No usable temporary directory found in %s" % dirlist) + raise FileNotFoundError(_errno.ENOENT, + "No usable temporary directory found in %s" % + dirlist) _name_sequence = None @@ -216,7 +219,8 @@ except FileExistsError: continue # try again - raise FileExistsError("No usable temporary file name found") + raise FileExistsError(_errno.EEXIST, + "No usable temporary file name found") # User visible interfaces. 
@@ -303,7 +307,8 @@ except FileExistsError: continue # try again - raise FileExistsError("No usable temporary directory name found") + raise FileExistsError(_errno.EEXIST, + "No usable temporary directory name found") def mktemp(suffix="", prefix=template, dir=None): """User-callable function to return a unique temporary file name. The @@ -332,7 +337,8 @@ if not _exists(file): return file - raise FileExistsError("No usable temporary filename found") + raise FileExistsError(_errno.EEXIST, + "No usable temporary filename found") class _TemporaryFileWrapper: diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -1,5 +1,6 @@ # tempfile.py unit tests. import tempfile +import errno import os import signal import sys @@ -963,8 +964,9 @@ # (noted as part of Issue #10188) with tempfile.TemporaryDirectory() as nonexistent: pass - with self.assertRaises(OSError): + with self.assertRaises(FileNotFoundError) as cm: tempfile.TemporaryDirectory(dir=nonexistent) + self.assertEqual(cm.exception.errno, errno.ENOENT) def test_explicit_cleanup(self): # A TemporaryDirectory is deleted when cleaned up -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 21:41:24 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 21:41:24 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MTU2?= =?utf-8?q?=3A_pygettext=2Epy_now_correctly_escapes_non-ascii_characters?= =?utf-8?q?=2E?= Message-ID: <3Z3QCS0Qq9zMYT@mail.python.org> http://hg.python.org/cpython/rev/49b1fde510a6 changeset: 82106:49b1fde510a6 branch: 2.7 parent: 82103:872a3aca2120 user: Serhiy Storchaka date: Sat Feb 09 22:36:22 2013 +0200 summary: Issue #17156: pygettext.py now correctly escapes non-ascii characters. 
files: Misc/NEWS | 2 ++ Tools/i18n/pygettext.py | 11 +++++------ 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,8 @@ Library ------- +- Issue #17156: pygettext.py now correctly escapes non-ascii characters. + - Issue #7358: cStringIO.StringIO now supports writing to and reading from a stream larger than 2 GiB on 64-bit systems. diff --git a/Tools/i18n/pygettext.py b/Tools/i18n/pygettext.py --- a/Tools/i18n/pygettext.py +++ b/Tools/i18n/pygettext.py @@ -208,6 +208,7 @@ def make_escapes(pass_iso8859): global escapes + escapes = [chr(i) for i in range(256)] if pass_iso8859: # Allow iso-8859 characters to pass through so that e.g. 'msgid # "H?he"' would result not result in 'msgid "H\366he"'. Otherwise we @@ -215,11 +216,9 @@ mod = 128 else: mod = 256 - for i in range(256): - if 32 <= (i % mod) <= 126: - escapes.append(chr(i)) - else: - escapes.append("\\%03o" % i) + for i in range(mod): + if not(32 <= i <= 126): + escapes[i] = "\\%03o" % i escapes[ord('\\')] = '\\\\' escapes[ord('\t')] = '\\t' escapes[ord('\r')] = '\\r' @@ -593,7 +592,7 @@ fp.close() # calculate escapes - make_escapes(options.escape) + make_escapes(not options.escape) # calculate all keywords options.keywords.extend(default_keywords) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 21:41:25 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 21:41:25 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MTU2?= =?utf-8?q?=3A_pygettext=2Epy_now_uses_an_encoding_of_source_file_and_corr?= =?utf-8?q?ectly?= Message-ID: <3Z3QCT4fDlzSfs@mail.python.org> http://hg.python.org/cpython/rev/cd59b398907d changeset: 82107:cd59b398907d branch: 3.2 parent: 82100:6af3afbc7211 user: Serhiy Storchaka date: Sat Feb 09 22:37:22 2013 +0200 summary: Issue #17156: pygettext.py now uses an encoding of source file and correctly writes and 
escapes non-ascii characters. files: Misc/NEWS | 3 + Tools/i18n/pygettext.py | 66 ++++++++++++++-------------- 2 files changed, 36 insertions(+), 33 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -215,6 +215,9 @@ Library ------- +- Issue #17156: pygettext.py now uses an encoding of source file and correctly + writes and escapes non-ascii characters. + - Issue #16564: Fixed regression relative to Python2 in the operation of email.encoders.encode_noop when used with binary data. diff --git a/Tools/i18n/pygettext.py b/Tools/i18n/pygettext.py --- a/Tools/i18n/pygettext.py +++ b/Tools/i18n/pygettext.py @@ -189,8 +189,8 @@ "Last-Translator: FULL NAME \\n" "Language-Team: LANGUAGE \\n" "MIME-Version: 1.0\\n" -"Content-Type: text/plain; charset=CHARSET\\n" -"Content-Transfer-Encoding: ENCODING\\n" +"Content-Type: text/plain; charset=%(charset)s\\n" +"Content-Transfer-Encoding: %(encoding)s\\n" "Generated-By: pygettext.py %(version)s\\n" ''') @@ -204,35 +204,32 @@ -escapes = [] - -def make_escapes(pass_iso8859): - global escapes - if pass_iso8859: - # Allow iso-8859 characters to pass through so that e.g. 'msgid +def make_escapes(pass_nonascii): + global escapes, escape + if pass_nonascii: + # Allow non-ascii characters to pass through so that e.g. 'msgid # "H?he"' would result not result in 'msgid "H\366he"'. Otherwise we # escape any character outside the 32..126 range. 
mod = 128 + escape = escape_ascii else: mod = 256 - for i in range(256): - if 32 <= (i % mod) <= 126: - escapes.append(chr(i)) - else: - escapes.append("\\%03o" % i) - escapes[ord('\\')] = '\\\\' - escapes[ord('\t')] = '\\t' - escapes[ord('\r')] = '\\r' - escapes[ord('\n')] = '\\n' - escapes[ord('\"')] = '\\"' + escape = escape_nonascii + escapes = [r"\%03o" % i for i in range(mod)] + for i in range(32, 127): + escapes[i] = chr(i) + escapes[ord('\\')] = r'\\' + escapes[ord('\t')] = r'\t' + escapes[ord('\r')] = r'\r' + escapes[ord('\n')] = r'\n' + escapes[ord('\"')] = r'\"' -def escape(s): - global escapes - s = list(s) - for i in range(len(s)): - s[i] = escapes[ord(s[i])] - return EMPTYSTRING.join(s) +def escape_ascii(s, encoding): + return ''.join(escapes[ord(c)] if ord(c) < 128 else c for c in s) + +def escape_nonascii(s, encoding): + return ''.join(escapes[b] for b in s.encode(encoding)) def safe_eval(s): @@ -240,18 +237,18 @@ return eval(s, {'__builtins__':{}}, {}) -def normalize(s): +def normalize(s, encoding): # This converts the various Python string types into a format that is # appropriate for .po files, namely much closer to C style. lines = s.split('\n') if len(lines) == 1: - s = '"' + escape(s) + '"' + s = '"' + escape(s, encoding) + '"' else: if not lines[-1]: del lines[-1] lines[-1] = lines[-1] + '\n' for i in range(len(lines)): - lines[i] = escape(lines[i]) + lines[i] = escape(lines[i], encoding) lineterm = '\\n"\n"' s = '""\n"' + lineterm.join(lines) + '"' return s @@ -448,7 +445,10 @@ timestamp = time.strftime('%Y-%m-%d %H:%M+%Z') # The time stamp in the header doesn't have the same format as that # generated by xgettext... - print(pot_header % {'time': timestamp, 'version': __version__}, file=fp) + encoding = fp.encoding if fp.encoding else 'UTF-8' + print(pot_header % {'time': timestamp, 'version': __version__, + 'charset': encoding, + 'encoding': '8bit'}, file=fp) # Sort the entries. 
First sort each particular entry's keys, then # sort all the entries by their first item. reverse = {} @@ -492,7 +492,7 @@ print(locline, file=fp) if isdocstring: print('#, docstring', file=fp) - print('msgid', normalize(k), file=fp) + print('msgid', normalize(k, encoding), file=fp) print('msgstr ""\n', file=fp) @@ -588,7 +588,7 @@ fp.close() # calculate escapes - make_escapes(options.escape) + make_escapes(not options.escape) # calculate all keywords options.keywords.extend(default_keywords) @@ -621,17 +621,17 @@ if filename == '-': if options.verbose: print(_('Reading standard input')) - fp = sys.stdin + fp = sys.stdin.buffer closep = 0 else: if options.verbose: print(_('Working on %s') % filename) - fp = open(filename) + fp = open(filename, 'rb') closep = 1 try: eater.set_filename(filename) try: - tokens = tokenize.generate_tokens(fp.readline) + tokens = tokenize.tokenize(fp.readline) for _token in tokens: eater(*_token) except tokenize.TokenError as e: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 21:41:27 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 21:41:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317156=3A_pygettext=2Epy_now_uses_an_encoding_of_sourc?= =?utf-8?q?e_file_and_correctly?= Message-ID: <3Z3QCW1RcpzSg0@mail.python.org> http://hg.python.org/cpython/rev/062406c06cc1 changeset: 82108:062406c06cc1 branch: 3.3 parent: 82104:11eaa61124c2 parent: 82107:cd59b398907d user: Serhiy Storchaka date: Sat Feb 09 22:38:12 2013 +0200 summary: Issue #17156: pygettext.py now uses an encoding of source file and correctly writes and escapes non-ascii characters. 
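The escape-table scheme used by the #17156 pygettext patches can be sketched as a standalone fragment (a simplified extract mirroring the patch's `escape_nonascii`, not the full tool): bytes outside printable ASCII become three-digit octal escapes, so a latin-1 `'Höhe'` comes out as `'H\366he'`, matching the comment in the patch.

```python
# Standalone sketch of the pygettext escape table: every byte outside
# printable ASCII (plus a few specials) becomes an octal escape.
escapes = [r"\%03o" % i for i in range(256)]
for i in range(32, 127):
    escapes[i] = chr(i)
escapes[ord('\\')] = r'\\'
escapes[ord('\t')] = r'\t'
escapes[ord('\r')] = r'\r'
escapes[ord('\n')] = r'\n'
escapes[ord('"')] = r'\"'

def escape_nonascii(s, encoding):
    # Escape each *byte* of the encoded string, as the patch does,
    # so the output depends on the source file's encoding.
    return ''.join(escapes[b] for b in s.encode(encoding))

print(escape_nonascii('Höhe', 'latin-1'))  # H\366he
```

Encoding-dependence is the point of the fix: the same string escapes differently under latin-1 and UTF-8, so pygettext must honor the source file's declared encoding.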
files: Misc/NEWS | 3 + Tools/i18n/pygettext.py | 66 ++++++++++++++-------------- 2 files changed, 36 insertions(+), 33 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -169,6 +169,9 @@ Library ------- +- Issue #17156: pygettext.py now uses an encoding of source file and correctly + writes and escapes non-ascii characters. + - Issue #16564: Fixed regression relative to Python2 in the operation of email.encoders.encode_noop when used with binary data. diff --git a/Tools/i18n/pygettext.py b/Tools/i18n/pygettext.py --- a/Tools/i18n/pygettext.py +++ b/Tools/i18n/pygettext.py @@ -188,8 +188,8 @@ "Last-Translator: FULL NAME \\n" "Language-Team: LANGUAGE \\n" "MIME-Version: 1.0\\n" -"Content-Type: text/plain; charset=CHARSET\\n" -"Content-Transfer-Encoding: ENCODING\\n" +"Content-Type: text/plain; charset=%(charset)s\\n" +"Content-Transfer-Encoding: %(encoding)s\\n" "Generated-By: pygettext.py %(version)s\\n" ''') @@ -203,35 +203,32 @@ -escapes = [] - -def make_escapes(pass_iso8859): - global escapes - if pass_iso8859: - # Allow iso-8859 characters to pass through so that e.g. 'msgid +def make_escapes(pass_nonascii): + global escapes, escape + if pass_nonascii: + # Allow non-ascii characters to pass through so that e.g. 'msgid # "H?he"' would result not result in 'msgid "H\366he"'. Otherwise we # escape any character outside the 32..126 range. 
mod = 128 + escape = escape_ascii else: mod = 256 - for i in range(256): - if 32 <= (i % mod) <= 126: - escapes.append(chr(i)) - else: - escapes.append("\\%03o" % i) - escapes[ord('\\')] = '\\\\' - escapes[ord('\t')] = '\\t' - escapes[ord('\r')] = '\\r' - escapes[ord('\n')] = '\\n' - escapes[ord('\"')] = '\\"' + escape = escape_nonascii + escapes = [r"\%03o" % i for i in range(mod)] + for i in range(32, 127): + escapes[i] = chr(i) + escapes[ord('\\')] = r'\\' + escapes[ord('\t')] = r'\t' + escapes[ord('\r')] = r'\r' + escapes[ord('\n')] = r'\n' + escapes[ord('\"')] = r'\"' -def escape(s): - global escapes - s = list(s) - for i in range(len(s)): - s[i] = escapes[ord(s[i])] - return EMPTYSTRING.join(s) +def escape_ascii(s, encoding): + return ''.join(escapes[ord(c)] if ord(c) < 128 else c for c in s) + +def escape_nonascii(s, encoding): + return ''.join(escapes[b] for b in s.encode(encoding)) def safe_eval(s): @@ -239,18 +236,18 @@ return eval(s, {'__builtins__':{}}, {}) -def normalize(s): +def normalize(s, encoding): # This converts the various Python string types into a format that is # appropriate for .po files, namely much closer to C style. lines = s.split('\n') if len(lines) == 1: - s = '"' + escape(s) + '"' + s = '"' + escape(s, encoding) + '"' else: if not lines[-1]: del lines[-1] lines[-1] = lines[-1] + '\n' for i in range(len(lines)): - lines[i] = escape(lines[i]) + lines[i] = escape(lines[i], encoding) lineterm = '\\n"\n"' s = '""\n"' + lineterm.join(lines) + '"' return s @@ -447,7 +444,10 @@ timestamp = time.strftime('%Y-%m-%d %H:%M+%Z') # The time stamp in the header doesn't have the same format as that # generated by xgettext... - print(pot_header % {'time': timestamp, 'version': __version__}, file=fp) + encoding = fp.encoding if fp.encoding else 'UTF-8' + print(pot_header % {'time': timestamp, 'version': __version__, + 'charset': encoding, + 'encoding': '8bit'}, file=fp) # Sort the entries. 
First sort each particular entry's keys, then # sort all the entries by their first item. reverse = {} @@ -491,7 +491,7 @@ print(locline, file=fp) if isdocstring: print('#, docstring', file=fp) - print('msgid', normalize(k), file=fp) + print('msgid', normalize(k, encoding), file=fp) print('msgstr ""\n', file=fp) @@ -587,7 +587,7 @@ fp.close() # calculate escapes - make_escapes(options.escape) + make_escapes(not options.escape) # calculate all keywords options.keywords.extend(default_keywords) @@ -620,17 +620,17 @@ if filename == '-': if options.verbose: print(_('Reading standard input')) - fp = sys.stdin + fp = sys.stdin.buffer closep = 0 else: if options.verbose: print(_('Working on %s') % filename) - fp = open(filename) + fp = open(filename, 'rb') closep = 1 try: eater.set_filename(filename) try: - tokens = tokenize.generate_tokens(fp.readline) + tokens = tokenize.tokenize(fp.readline) for _token in tokens: eater(*_token) except tokenize.TokenError as e: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 21:41:28 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 9 Feb 2013 21:41:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317156=3A_pygettext=2Epy_now_uses_an_encoding_of?= =?utf-8?q?_source_file_and_correctly?= Message-ID: <3Z3QCX5S4ZzSgK@mail.python.org> http://hg.python.org/cpython/rev/99795d711a40 changeset: 82109:99795d711a40 parent: 82105:fd3e3059381a parent: 82108:062406c06cc1 user: Serhiy Storchaka date: Sat Feb 09 22:38:29 2013 +0200 summary: Issue #17156: pygettext.py now uses an encoding of source file and correctly writes and escapes non-ascii characters. 
files: Misc/NEWS | 3 + Tools/i18n/pygettext.py | 66 ++++++++++++++-------------- 2 files changed, 36 insertions(+), 33 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -241,6 +241,9 @@ Library ------- +- Issue #17156: pygettext.py now uses an encoding of source file and correctly + writes and escapes non-ascii characters. + - Issue #16564: Fixed regression relative to Python2 in the operation of email.encoders.encode_noop when used with binary data. diff --git a/Tools/i18n/pygettext.py b/Tools/i18n/pygettext.py --- a/Tools/i18n/pygettext.py +++ b/Tools/i18n/pygettext.py @@ -188,8 +188,8 @@ "Last-Translator: FULL NAME \\n" "Language-Team: LANGUAGE \\n" "MIME-Version: 1.0\\n" -"Content-Type: text/plain; charset=CHARSET\\n" -"Content-Transfer-Encoding: ENCODING\\n" +"Content-Type: text/plain; charset=%(charset)s\\n" +"Content-Transfer-Encoding: %(encoding)s\\n" "Generated-By: pygettext.py %(version)s\\n" ''') @@ -203,35 +203,32 @@ -escapes = [] - -def make_escapes(pass_iso8859): - global escapes - if pass_iso8859: - # Allow iso-8859 characters to pass through so that e.g. 'msgid +def make_escapes(pass_nonascii): + global escapes, escape + if pass_nonascii: + # Allow non-ascii characters to pass through so that e.g. 'msgid # "Höhe"' would not result in 'msgid "H\366he"'. Otherwise we # escape any character outside the 32..126 range.
mod = 128 + escape = escape_ascii else: mod = 256 - for i in range(256): - if 32 <= (i % mod) <= 126: - escapes.append(chr(i)) - else: - escapes.append("\\%03o" % i) - escapes[ord('\\')] = '\\\\' - escapes[ord('\t')] = '\\t' - escapes[ord('\r')] = '\\r' - escapes[ord('\n')] = '\\n' - escapes[ord('\"')] = '\\"' + escape = escape_nonascii + escapes = [r"\%03o" % i for i in range(mod)] + for i in range(32, 127): + escapes[i] = chr(i) + escapes[ord('\\')] = r'\\' + escapes[ord('\t')] = r'\t' + escapes[ord('\r')] = r'\r' + escapes[ord('\n')] = r'\n' + escapes[ord('\"')] = r'\"' -def escape(s): - global escapes - s = list(s) - for i in range(len(s)): - s[i] = escapes[ord(s[i])] - return EMPTYSTRING.join(s) +def escape_ascii(s, encoding): + return ''.join(escapes[ord(c)] if ord(c) < 128 else c for c in s) + +def escape_nonascii(s, encoding): + return ''.join(escapes[b] for b in s.encode(encoding)) def safe_eval(s): @@ -239,18 +236,18 @@ return eval(s, {'__builtins__':{}}, {}) -def normalize(s): +def normalize(s, encoding): # This converts the various Python string types into a format that is # appropriate for .po files, namely much closer to C style. lines = s.split('\n') if len(lines) == 1: - s = '"' + escape(s) + '"' + s = '"' + escape(s, encoding) + '"' else: if not lines[-1]: del lines[-1] lines[-1] = lines[-1] + '\n' for i in range(len(lines)): - lines[i] = escape(lines[i]) + lines[i] = escape(lines[i], encoding) lineterm = '\\n"\n"' s = '""\n"' + lineterm.join(lines) + '"' return s @@ -447,7 +444,10 @@ timestamp = time.strftime('%Y-%m-%d %H:%M+%Z') # The time stamp in the header doesn't have the same format as that # generated by xgettext... - print(pot_header % {'time': timestamp, 'version': __version__}, file=fp) + encoding = fp.encoding if fp.encoding else 'UTF-8' + print(pot_header % {'time': timestamp, 'version': __version__, + 'charset': encoding, + 'encoding': '8bit'}, file=fp) # Sort the entries. 
First sort each particular entry's keys, then # sort all the entries by their first item. reverse = {} @@ -491,7 +491,7 @@ print(locline, file=fp) if isdocstring: print('#, docstring', file=fp) - print('msgid', normalize(k), file=fp) + print('msgid', normalize(k, encoding), file=fp) print('msgstr ""\n', file=fp) @@ -587,7 +587,7 @@ fp.close() # calculate escapes - make_escapes(options.escape) + make_escapes(not options.escape) # calculate all keywords options.keywords.extend(default_keywords) @@ -620,17 +620,17 @@ if filename == '-': if options.verbose: print(_('Reading standard input')) - fp = sys.stdin + fp = sys.stdin.buffer closep = 0 else: if options.verbose: print(_('Working on %s') % filename) - fp = open(filename) + fp = open(filename, 'rb') closep = 1 try: eater.set_filename(filename) try: - tokens = tokenize.generate_tokens(fp.readline) + tokens = tokenize.tokenize(fp.readline) for _token in tokens: eater(*_token) except tokenize.TokenError as e: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 23:29:07 2013 From: python-checkins at python.org (antoine.pitrou) Date: Sat, 9 Feb 2013 23:29:07 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MTcz?= =?utf-8?q?=3A_Remove_uses_of_locale-dependent_C_functions_=28isalpha=28?= =?utf-8?q?=29_etc=2E=29_in?= Message-ID: <3Z3Sbl5FkRzSD9@mail.python.org> http://hg.python.org/cpython/rev/38830281d43b changeset: 82110:38830281d43b branch: 3.2 parent: 82107:cd59b398907d user: Antoine Pitrou date: Sat Feb 09 23:11:27 2013 +0100 summary: Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) in the interpreter. I've left a couple of them in: zlib (third-party lib), getaddrinfo.c (doesn't include Python.h, and probably obsolete), _sre.c (legitimate use for the re.LOCALE flag). 
files: Misc/NEWS | 3 +++ Modules/_struct.c | 4 ++-- Modules/binascii.c | 2 +- Modules/posixmodule.c | 2 +- Modules/socketmodule.c | 2 +- Objects/longobject.c | 4 ++-- Objects/stringlib/formatter.h | 2 +- Python/ast.c | 2 +- Python/dynload_aix.c | 3 +-- Python/getargs.c | 6 +++--- Python/mystrtoul.c | 6 +++--- 11 files changed, 19 insertions(+), 17 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) + in the interpreter. + - Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. diff --git a/Modules/_struct.c b/Modules/_struct.c --- a/Modules/_struct.c +++ b/Modules/_struct.c @@ -1184,7 +1184,7 @@ size = 0; len = 0; while ((c = *s++) != '\0') { - if (isspace(Py_CHARMASK(c))) + if (Py_ISSPACE(Py_CHARMASK(c))) continue; if ('0' <= c && c <= '9') { num = c - '0'; @@ -1249,7 +1249,7 @@ s = fmt; size = 0; while ((c = *s++) != '\0') { - if (isspace(Py_CHARMASK(c))) + if (Py_ISSPACE(Py_CHARMASK(c))) continue; if ('0' <= c && c <= '9') { num = c - '0'; diff --git a/Modules/binascii.c b/Modules/binascii.c --- a/Modules/binascii.c +++ b/Modules/binascii.c @@ -1099,7 +1099,7 @@ static int to_int(int c) { - if (isdigit(c)) + if (Py_ISDIGIT(c)) return c - '0'; else { if (Py_ISUPPER(c)) diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -695,7 +695,7 @@ if (strlen(msgbuf) > 0) { /* If Non-Empty Msg, Trim CRLF */ char *lastc = &msgbuf[ strlen(msgbuf)-1 ]; - while (lastc > msgbuf && isspace(Py_CHARMASK(*lastc))) + while (lastc > msgbuf && Py_ISSPACE(Py_CHARMASK(*lastc))) *lastc-- = '\0'; /* Trim Trailing Whitespace (CRLF) */ } diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c --- a/Modules/socketmodule.c +++ b/Modules/socketmodule.c @@ -519,7 +519,7 @@ /* If non-empty msg, trim CRLF */ char *lastc = &outbuf[ 
strlen(outbuf)-1 ]; while (lastc > outbuf && - isspace(Py_CHARMASK(*lastc))) { + Py_ISSPACE(Py_CHARMASK(*lastc))) { /* Trim trailing whitespace (CRLF) */ *lastc-- = '\0'; } diff --git a/Objects/longobject.c b/Objects/longobject.c --- a/Objects/longobject.c +++ b/Objects/longobject.c @@ -1887,7 +1887,7 @@ "int() arg 2 must be >= 2 and <= 36"); return NULL; } - while (*str != '\0' && isspace(Py_CHARMASK(*str))) + while (*str != '\0' && Py_ISSPACE(Py_CHARMASK(*str))) str++; if (*str == '+') ++str; @@ -2131,7 +2131,7 @@ goto onError; if (sign < 0) Py_SIZE(z) = -(Py_SIZE(z)); - while (*str && isspace(Py_CHARMASK(*str))) + while (*str && Py_ISSPACE(Py_CHARMASK(*str))) str++; if (*str != '\0') goto onError; diff --git a/Objects/stringlib/formatter.h b/Objects/stringlib/formatter.h --- a/Objects/stringlib/formatter.h +++ b/Objects/stringlib/formatter.h @@ -414,7 +414,7 @@ STRINGLIB_CHAR *end = ptr + len; STRINGLIB_CHAR *remainder; - while (ptr<end && isdigit(*ptr)) + while (ptr<end && Py_ISDIGIT(*ptr)) ptr++; diff --git a/Python/dynload_aix.c b/Python/dynload_aix.c --- a/Python/dynload_aix.c +++ b/Python/dynload_aix.c @@ -4,7 +4,6 @@ #include "Python.h" #include "importdl.h" -#include <ctype.h> /* for isdigit() */ #include <errno.h> /* for global errno */ #include <string.h> /* for strerror() */ #include <stdlib.h> /* for malloc(), free() */ @@ -144,7 +143,7 @@ if (nerr == load_errtab[j].errNo && load_errtab[j].errstr) ERRBUF_APPEND(load_errtab[j].errstr); } - while (isdigit(Py_CHARMASK(*message[i]))) message[i]++ ; + while (Py_ISDIGIT(Py_CHARMASK(*message[i]))) message[i]++ ; ERRBUF_APPEND(message[i]); ERRBUF_APPEND("\n"); } diff --git a/Python/getargs.c b/Python/getargs.c --- a/Python/getargs.c +++ b/Python/getargs.c @@ -288,7 +288,7 @@ if (level == 0) { if (c == 'O') max++; - else if (isalpha(Py_CHARMASK(c))) { + else if (Py_ISALPHA(Py_CHARMASK(c))) { if (c != 'e') /* skip encoded */ max++; } else if (c == '|') @@ -378,7 +378,7 @@ } } - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + if (*format != '\0' && !Py_ISALPHA(Py_CHARMASK(*format)) && *format != '(' && *format != '|' && *format != ':' && *format != ';') { PyErr_Format(PyExc_SystemError, @@ -471,7 +471,7 @@ } else if (c == ':' || c == ';' || c == '\0') break; - else if (level == 0 &&
isalpha(Py_CHARMASK(c))) + else if (level == 0 && Py_ISALPHA(Py_CHARMASK(c))) n++; } diff --git a/Python/mystrtoul.c b/Python/mystrtoul.c --- a/Python/mystrtoul.c +++ b/Python/mystrtoul.c @@ -99,7 +99,7 @@ register int ovlimit; /* required digits to overflow */ /* skip leading white space */ - while (*str && isspace(Py_CHARMASK(*str))) + while (*str && Py_ISSPACE(Py_CHARMASK(*str))) ++str; /* check for leading 0b, 0o or 0x for auto-base or base 16 */ @@ -138,7 +138,7 @@ /* skip all zeroes... */ while (*str == '0') ++str; - while (isspace(Py_CHARMASK(*str))) + while (Py_ISSPACE(Py_CHARMASK(*str))) ++str; if (ptr) *ptr = str; @@ -266,7 +266,7 @@ unsigned long uresult; char sign; - while (*str && isspace(Py_CHARMASK(*str))) + while (*str && Py_ISSPACE(Py_CHARMASK(*str))) str++; sign = *str; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 9 23:29:09 2013 From: python-checkins at python.org (antoine.pitrou) Date: Sat, 9 Feb 2013 23:29:09 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317173=3A_Remove_uses_of_locale-dependent_C_functions_?= =?utf-8?b?KGlzYWxwaGEoKSBldGMuKSBpbg==?= Message-ID: <3Z3Sbn2ys7zSd3@mail.python.org> http://hg.python.org/cpython/rev/c08bcf5302ec changeset: 82111:c08bcf5302ec branch: 3.3 parent: 82108:062406c06cc1 parent: 82110:38830281d43b user: Antoine Pitrou date: Sat Feb 09 23:14:42 2013 +0100 summary: Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) in the interpreter. I've left a couple of them in: zlib (third-party lib), getaddrinfo.c (doesn't include Python.h, and probably obsolete), _sre.c (legitimate use for the re.LOCALE flag), mpdecimal (needs to build without Python.h). 
files: Misc/NEWS | 3 +++ Modules/_struct.c | 4 ++-- Modules/binascii.c | 2 +- Modules/posixmodule.c | 2 +- Modules/socketmodule.c | 2 +- Objects/longobject.c | 4 ++-- Python/ast.c | 2 +- Python/dynload_aix.c | 3 +-- Python/formatter_unicode.c | 2 +- Python/getargs.c | 6 +++--- Python/mystrtoul.c | 6 +++--- 11 files changed, 19 insertions(+), 17 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) + in the interpreter. + - Issue #17137: When an Unicode string is resized, the internal wide character string (wstr) format is now cleared. diff --git a/Modules/_struct.c b/Modules/_struct.c --- a/Modules/_struct.c +++ b/Modules/_struct.c @@ -1271,7 +1271,7 @@ size = 0; len = 0; while ((c = *s++) != '\0') { - if (isspace(Py_CHARMASK(c))) + if (Py_ISSPACE(Py_CHARMASK(c))) continue; if ('0' <= c && c <= '9') { num = c - '0'; @@ -1336,7 +1336,7 @@ s = fmt; size = 0; while ((c = *s++) != '\0') { - if (isspace(Py_CHARMASK(c))) + if (Py_ISSPACE(Py_CHARMASK(c))) continue; if ('0' <= c && c <= '9') { num = c - '0'; diff --git a/Modules/binascii.c b/Modules/binascii.c --- a/Modules/binascii.c +++ b/Modules/binascii.c @@ -1135,7 +1135,7 @@ static int to_int(int c) { - if (isdigit(c)) + if (Py_ISDIGIT(c)) return c - '0'; else { if (Py_ISUPPER(c)) diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -1172,7 +1172,7 @@ if (strlen(msgbuf) > 0) { /* If Non-Empty Msg, Trim CRLF */ char *lastc = &msgbuf[ strlen(msgbuf)-1 ]; - while (lastc > msgbuf && isspace(Py_CHARMASK(*lastc))) + while (lastc > msgbuf && Py_ISSPACE(Py_CHARMASK(*lastc))) *lastc-- = '\0'; /* Trim Trailing Whitespace (CRLF) */ } diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c --- a/Modules/socketmodule.c +++ b/Modules/socketmodule.c @@ -555,7 +555,7 @@ /* If non-empty msg, trim CRLF */ char 
*lastc = &outbuf[ strlen(outbuf)-1 ]; while (lastc > outbuf && - isspace(Py_CHARMASK(*lastc))) { + Py_ISSPACE(Py_CHARMASK(*lastc))) { /* Trim trailing whitespace (CRLF) */ *lastc-- = '\0'; } diff --git a/Objects/longobject.c b/Objects/longobject.c --- a/Objects/longobject.c +++ b/Objects/longobject.c @@ -2019,7 +2019,7 @@ "int() arg 2 must be >= 2 and <= 36"); return NULL; } - while (*str != '\0' && isspace(Py_CHARMASK(*str))) + while (*str != '\0' && Py_ISSPACE(Py_CHARMASK(*str))) str++; if (*str == '+') ++str; @@ -2263,7 +2263,7 @@ goto onError; if (sign < 0) Py_SIZE(z) = -(Py_SIZE(z)); - while (*str && isspace(Py_CHARMASK(*str))) + while (*str && Py_ISSPACE(Py_CHARMASK(*str))) str++; if (*str != '\0') goto onError; diff --git a/Python/ast.c b/Python/ast.c --- a/Python/ast.c +++ b/Python/ast.c @@ -3747,7 +3747,7 @@ int quote = Py_CHARMASK(*s); int rawmode = 0; int need_encoding; - if (isalpha(quote)) { + if (Py_ISALPHA(quote)) { while (!*bytesmode || !rawmode) { if (quote == 'b' || quote == 'B') { quote = *++s; diff --git a/Python/dynload_aix.c b/Python/dynload_aix.c --- a/Python/dynload_aix.c +++ b/Python/dynload_aix.c @@ -4,7 +4,6 @@ #include "Python.h" #include "importdl.h" -#include <ctype.h> /* for isdigit() */ #include <errno.h> /* for global errno */ #include <string.h> /* for strerror() */ #include <stdlib.h> /* for malloc(), free() */ @@ -141,7 +140,7 @@ if (nerr == load_errtab[j].errNo && load_errtab[j].errstr) ERRBUF_APPEND(load_errtab[j].errstr); } - while (isdigit(Py_CHARMASK(*message[i]))) message[i]++ ; + while (Py_ISDIGIT(Py_CHARMASK(*message[i]))) message[i]++ ; ERRBUF_APPEND(message[i]); ERRBUF_APPEND("\n"); } diff --git a/Python/formatter_unicode.c b/Python/formatter_unicode.c --- a/Python/formatter_unicode.c +++ b/Python/formatter_unicode.c @@ -401,7 +401,7 @@ { Py_ssize_t remainder; - while (pos http://hg.python.org/cpython/rev/10e59553a8de changeset: 82112:10e59553a8de parent: 82109:99795d711a40 parent: 82111:c08bcf5302ec user: Antoine Pitrou date: Sat Feb 09 23:16:51 2013 +0100
summary: Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) in the interpreter. I've left a couple of them in: zlib (third-party lib), getaddrinfo.c (doesn't include Python.h, and probably obsolete), _sre.c (legitimate use for the re.LOCALE flag), mpdecimal (needs to build without Python.h). files: Misc/NEWS | 3 +++ Modules/_struct.c | 4 ++-- Modules/binascii.c | 2 +- Objects/longobject.c | 4 ++-- Python/ast.c | 2 +- Python/dynload_aix.c | 3 +-- Python/formatter_unicode.c | 2 +- Python/getargs.c | 6 +++--- Python/mystrtoul.c | 6 +++--- 9 files changed, 17 insertions(+), 15 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) + in the interpreter. + - Issue #17137: When an Unicode string is resized, the internal wide character string (wstr) format is now cleared. diff --git a/Modules/_struct.c b/Modules/_struct.c --- a/Modules/_struct.c +++ b/Modules/_struct.c @@ -1270,7 +1270,7 @@ size = 0; len = 0; while ((c = *s++) != '\0') { - if (isspace(Py_CHARMASK(c))) + if (Py_ISSPACE(Py_CHARMASK(c))) continue; if ('0' <= c && c <= '9') { num = c - '0'; @@ -1335,7 +1335,7 @@ s = fmt; size = 0; while ((c = *s++) != '\0') { - if (isspace(Py_CHARMASK(c))) + if (Py_ISSPACE(Py_CHARMASK(c))) continue; if ('0' <= c && c <= '9') { num = c - '0'; diff --git a/Modules/binascii.c b/Modules/binascii.c --- a/Modules/binascii.c +++ b/Modules/binascii.c @@ -1135,7 +1135,7 @@ static int to_int(int c) { - if (isdigit(c)) + if (Py_ISDIGIT(c)) return c - '0'; else { if (Py_ISUPPER(c)) diff --git a/Objects/longobject.c b/Objects/longobject.c --- a/Objects/longobject.c +++ b/Objects/longobject.c @@ -2008,7 +2008,7 @@ "int() arg 2 must be >= 2 and <= 36"); return NULL; } - while (*str != '\0' && isspace(Py_CHARMASK(*str))) + while (*str != '\0' && Py_ISSPACE(Py_CHARMASK(*str))) str++; if (*str == '+') ++str; @@ 
-2252,7 +2252,7 @@ goto onError; if (sign < 0) Py_SIZE(z) = -(Py_SIZE(z)); - while (*str && isspace(Py_CHARMASK(*str))) + while (*str && Py_ISSPACE(Py_CHARMASK(*str))) str++; if (*str != '\0') goto onError; diff --git a/Python/ast.c b/Python/ast.c --- a/Python/ast.c +++ b/Python/ast.c @@ -3761,7 +3761,7 @@ int quote = Py_CHARMASK(*s); int rawmode = 0; int need_encoding; - if (isalpha(quote)) { + if (Py_ISALPHA(quote)) { while (!*bytesmode || !rawmode) { if (quote == 'b' || quote == 'B') { quote = *++s; diff --git a/Python/dynload_aix.c b/Python/dynload_aix.c --- a/Python/dynload_aix.c +++ b/Python/dynload_aix.c @@ -4,7 +4,6 @@ #include "Python.h" #include "importdl.h" -#include <ctype.h> /* for isdigit() */ #include <errno.h> /* for global errno */ #include <string.h> /* for strerror() */ #include <stdlib.h> /* for malloc(), free() */ @@ -141,7 +140,7 @@ if (nerr == load_errtab[j].errNo && load_errtab[j].errstr) ERRBUF_APPEND(load_errtab[j].errstr); } - while (isdigit(Py_CHARMASK(*message[i]))) message[i]++ ; + while (Py_ISDIGIT(Py_CHARMASK(*message[i]))) message[i]++ ; ERRBUF_APPEND(message[i]); ERRBUF_APPEND("\n"); } diff --git a/Python/formatter_unicode.c b/Python/formatter_unicode.c --- a/Python/formatter_unicode.c +++ b/Python/formatter_unicode.c @@ -401,7 +401,7 @@ { Py_ssize_t remainder; - while (pos http://hg.python.org/cpython/rev/f13bb1e40fbc changeset: 82113:f13bb1e40fbc user: Antoine Pitrou date: Sun Feb 10 00:02:44 2013 +0100 summary: Issue #13773: sqlite3.connect() gets a new `uri` parameter to pass the filename as a URI, allowing to pass custom options.
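The new parameter can be exercised much like the unit test added by this commit. The sketch below assumes an SQLite built with URI support (3.7.7 or later; the patch raises `NotSupportedError` otherwise), and the file name `demo.db` is just a placeholder:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')

# Create the database through an ordinary read-write connection.
cx = sqlite3.connect(path)
cx.execute('create table test(id integer)')
cx.commit()
cx.close()

# Re-open the same file through an SQLite URI in read-only mode.
ro = sqlite3.connect('file:' + path + '?mode=ro', uri=True)
try:
    ro.execute('insert into test(id) values (0)')
except sqlite3.OperationalError as exc:
    print('write refused:', exc)   # mode=ro makes any write statement fail
finally:
    ro.close()
```

`mode=ro` is enforced by SQLite itself, so the rejection happens at statement execution time as an `OperationalError`, not at connect time.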
files: Doc/library/sqlite3.rst | 14 +++++++++++++- Lib/sqlite3/test/dbapi.py | 18 ++++++++++++++++++ Misc/NEWS | 3 +++ Modules/_sqlite/connection.c | 24 +++++++++++++++++++++--- Modules/_sqlite/module.c | 16 ++++++++++++---- 5 files changed, 67 insertions(+), 8 deletions(-) diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -159,7 +159,7 @@ first blank for the column name: the column name would simply be "x". -.. function:: connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements]) +.. function:: connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri]) Opens a connection to the SQLite database file *database*. You can use ``":memory:"`` to open a database connection to a database that resides in RAM @@ -195,6 +195,18 @@ for the connection, you can set the *cached_statements* parameter. The currently implemented default is to cache 100 statements. + If *uri* is true, *database* is interpreted as a URI. This allows you + to specify options. For example, to open a database in read-only mode + you can use:: + + db = sqlite3.connect('file:path/to/database?mode=ro', uri=True) + + More information about this feature, including a list of recognized options, can + be found in the `SQLite URI documentation <http://www.sqlite.org/uri.html>`_. + + .. versionchanged:: 3.4 + Added the *uri* parameter. + ..
function:: register_converter(typename, callable) diff --git a/Lib/sqlite3/test/dbapi.py b/Lib/sqlite3/test/dbapi.py --- a/Lib/sqlite3/test/dbapi.py +++ b/Lib/sqlite3/test/dbapi.py @@ -28,6 +28,9 @@ except ImportError: threading = None +from test.support import TESTFN, unlink + + class ModuleTests(unittest.TestCase): def CheckAPILevel(self): self.assertEqual(sqlite.apilevel, "2.0", @@ -163,6 +166,21 @@ with self.assertRaises(AttributeError): self.cx.in_transaction = True + def CheckOpenUri(self): + if sqlite.sqlite_version_info < (3, 7, 7): + with self.assertRaises(sqlite.NotSupportedError): + sqlite.connect(':memory:', uri=True) + return + self.addCleanup(unlink, TESTFN) + with sqlite.connect(TESTFN) as cx: + cx.execute('create table test(id integer)') + with sqlite.connect('file:' + TESTFN, uri=True) as cx: + cx.execute('insert into test(id) values(0)') + with sqlite.connect('file:' + TESTFN + '?mode=ro', uri=True) as cx: + with self.assertRaises(sqlite.OperationalError): + cx.execute('insert into test(id) values(1)') + + class CursorTests(unittest.TestCase): def setUp(self): self.cx = sqlite.connect(":memory:") diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -244,6 +244,9 @@ Library ------- +- Issue #13773: sqlite3.connect() gets a new `uri` parameter to pass the + filename as a URI, allowing to pass custom options. + - Issue #17156: pygettext.py now uses an encoding of source file and correctly writes and escapes non-ascii characters. 
diff --git a/Modules/_sqlite/connection.c b/Modules/_sqlite/connection.c --- a/Modules/_sqlite/connection.c +++ b/Modules/_sqlite/connection.c @@ -60,7 +60,11 @@ int pysqlite_connection_init(pysqlite_Connection* self, PyObject* args, PyObject* kwargs) { - static char *kwlist[] = {"database", "timeout", "detect_types", "isolation_level", "check_same_thread", "factory", "cached_statements", NULL, NULL}; + static char *kwlist[] = { + "database", "timeout", "detect_types", "isolation_level", + "check_same_thread", "factory", "cached_statements", "uri", + NULL + }; char* database; int detect_types = 0; @@ -68,11 +72,14 @@ PyObject* factory = NULL; int check_same_thread = 1; int cached_statements = 100; + int uri = 0; double timeout = 5.0; int rc; - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|diOiOi", kwlist, - &database, &timeout, &detect_types, &isolation_level, &check_same_thread, &factory, &cached_statements)) + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|diOiOip", kwlist, + &database, &timeout, &detect_types, + &isolation_level, &check_same_thread, + &factory, &cached_statements, &uri)) { return -1; } @@ -91,8 +98,19 @@ Py_INCREF(&PyUnicode_Type); self->text_factory = (PyObject*)&PyUnicode_Type; +#ifdef SQLITE_OPEN_URI + Py_BEGIN_ALLOW_THREADS + rc = sqlite3_open_v2(database, &self->db, + SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | + (uri ? SQLITE_OPEN_URI : 0), NULL); +#else + if (uri) { + PyErr_SetString(pysqlite_NotSupportedError, "URIs not supported"); + return -1; + } Py_BEGIN_ALLOW_THREADS rc = sqlite3_open(database, &self->db); +#endif Py_END_ALLOW_THREADS if (rc != SQLITE_OK) { diff --git a/Modules/_sqlite/module.c b/Modules/_sqlite/module.c --- a/Modules/_sqlite/module.c +++ b/Modules/_sqlite/module.c @@ -50,19 +50,26 @@ * C-level, so this code is redundant with the one in connection_init in * connection.c and must always be copied from there ... 
*/ - static char *kwlist[] = {"database", "timeout", "detect_types", "isolation_level", "check_same_thread", "factory", "cached_statements", NULL, NULL}; + static char *kwlist[] = { + "database", "timeout", "detect_types", "isolation_level", + "check_same_thread", "factory", "cached_statements", "uri", + NULL + }; char* database; int detect_types = 0; PyObject* isolation_level; PyObject* factory = NULL; int check_same_thread = 1; int cached_statements; + int uri = 0; double timeout = 5.0; PyObject* result; - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|diOiOi", kwlist, - &database, &timeout, &detect_types, &isolation_level, &check_same_thread, &factory, &cached_statements)) + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|diOiOip", kwlist, + &database, &timeout, &detect_types, + &isolation_level, &check_same_thread, + &factory, &cached_statements, &uri)) { return NULL; } @@ -77,7 +84,8 @@ } PyDoc_STRVAR(module_connect_doc, -"connect(database[, timeout, isolation_level, detect_types, factory])\n\ +"connect(database[, timeout, detect_types, isolation_level,\n\ + check_same_thread, factory, cached_statements, uri])\n\ \n\ Opens a connection to the SQLite database file *database*. You can use\n\ \":memory:\" to open a database connection to a database that resides in\n\ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 00:55:54 2013 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 10 Feb 2013 00:55:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Minor_cleanups?= =?utf-8?q?=2E?= Message-ID: <3Z3VWt0xFCzSgR@mail.python.org> http://hg.python.org/cpython/rev/3afa4c8eee1e changeset: 82114:3afa4c8eee1e branch: 2.7 parent: 82106:49b1fde510a6 user: Raymond Hettinger date: Sat Feb 09 18:55:44 2013 -0500 summary: Minor cleanups. 
files: Modules/_collectionsmodule.c | 38 ++++++++++++------------ 1 files changed, 19 insertions(+), 19 deletions(-) diff --git a/Modules/_collectionsmodule.c b/Modules/_collectionsmodule.c --- a/Modules/_collectionsmodule.c +++ b/Modules/_collectionsmodule.c @@ -413,8 +413,7 @@ static int _deque_rotate(dequeobject *deque, Py_ssize_t n) { - Py_ssize_t i, m, len=deque->len, halflen=(len+1)>>1; - block *prevblock; + Py_ssize_t m, len=deque->len, halflen=len>>1; if (len <= 1) return 0; @@ -425,12 +424,13 @@ else if (n < -halflen) n += len; } + assert(len > 1); + assert(-halflen <= n && n <= halflen); - assert(deque->len > 1); deque->state++; - for (i=0 ; i 0) { if (deque->leftindex == 0) { - block *b = newblock(NULL, deque->leftblock, deque->len); + block *b = newblock(NULL, deque->leftblock, len); if (b == NULL) return -1; assert(deque->leftblock->leftlink == NULL); @@ -440,22 +440,22 @@ } assert(deque->leftindex > 0); - m = n - i; + m = n; if (m > deque->rightindex + 1) m = deque->rightindex + 1; if (m > deque->leftindex) m = deque->leftindex; - assert (m > 0); + assert (m > 0 && m <= len); memcpy(&deque->leftblock->data[deque->leftindex - m], - &deque->rightblock->data[deque->rightindex - m + 1], + &deque->rightblock->data[deque->rightindex + 1 - m], m * sizeof(PyObject *)); deque->rightindex -= m; deque->leftindex -= m; - i += m; + n -= m; if (deque->rightindex == -1) { + block *prevblock = deque->rightblock->leftlink; assert(deque->rightblock != NULL); - prevblock = deque->rightblock->leftlink; assert(deque->leftblock != deque->rightblock); freeblock(deque->rightblock); prevblock->rightlink = NULL; @@ -463,9 +463,9 @@ deque->rightindex = BLOCKLEN - 1; } } - for (i=0 ; i>n ; ) { + while (n < 0) { if (deque->rightindex == BLOCKLEN - 1) { - block *b = newblock(deque->rightblock, NULL, deque->len); + block *b = newblock(deque->rightblock, NULL, len); if (b == NULL) return -1; assert(deque->rightblock->rightlink == NULL); @@ -475,26 +475,26 @@ } assert 
(deque->rightindex < BLOCKLEN - 1); - m = i - n; + m = -n; if (m > BLOCKLEN - deque->leftindex) m = BLOCKLEN - deque->leftindex; if (m > BLOCKLEN - 1 - deque->rightindex) m = BLOCKLEN - 1 - deque->rightindex; - assert (m > 0); + assert (m > 0 && m <= len); memcpy(&deque->rightblock->data[deque->rightindex + 1], &deque->leftblock->data[deque->leftindex], m * sizeof(PyObject *)); deque->leftindex += m; deque->rightindex += m; - i -= m; + n += m; if (deque->leftindex == BLOCKLEN) { + block *nextblock = deque->leftblock->rightlink; assert(deque->leftblock != deque->rightblock); - prevblock = deque->leftblock->rightlink; freeblock(deque->leftblock); - assert(prevblock != NULL); - prevblock->leftlink = NULL; - deque->leftblock = prevblock; + assert(nextblock != NULL); + nextblock->leftlink = NULL; + deque->leftblock = nextblock; deque->leftindex = 0; } } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 02:02:20 2013 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 10 Feb 2013 02:02:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Backport_deque?= =?utf-8?q?=2Erotate=28=29_improvements=2E?= Message-ID: <3Z3X0X49kdzSZG@mail.python.org> http://hg.python.org/cpython/rev/5a410900e2b6 changeset: 82115:5a410900e2b6 branch: 3.3 parent: 82111:c08bcf5302ec user: Raymond Hettinger date: Sat Feb 09 20:00:55 2013 -0500 summary: Backport deque.rotate() improvements. 
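The C rewrite above replaces the old pop-one/append-one loop with block-sized `memcpy()` moves, but the observable semantics of `deque.rotate()` are unchanged. For reference, those semantics are:

```python
from collections import deque

d = deque(range(10))
d.rotate(3)        # positive n rotates right: the last three items wrap to the front
print(d)           # deque([7, 8, 9, 0, 1, 2, 3, 4, 5, 6])
d.rotate(-3)       # negative n rotates left, undoing the rotation above
print(list(d) == list(range(10)))  # True
```

This equivalence is what lets the optimization normalize any `n` into the range `-halflen..halflen` first (rotating by `n` and by `n - len` are the same operation) and then move whichever direction copies fewer elements.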
files: Modules/_collectionsmodule.c | 92 +++++++++++++++++++---- 1 files changed, 73 insertions(+), 19 deletions(-) diff --git a/Modules/_collectionsmodule.c b/Modules/_collectionsmodule.c --- a/Modules/_collectionsmodule.c +++ b/Modules/_collectionsmodule.c @@ -413,10 +413,9 @@ static int _deque_rotate(dequeobject *deque, Py_ssize_t n) { - Py_ssize_t i, len=deque->len, halflen=(len+1)>>1; - PyObject *item, *rv; + Py_ssize_t m, len=deque->len, halflen=len>>1; - if (len == 0) + if (len <= 1) return 0; if (n > halflen || n < -halflen) { n %= len; @@ -425,24 +424,79 @@ else if (n < -halflen) n += len; } + assert(len > 1); + assert(-halflen <= n && n <= halflen); - for (i=0 ; istate++; + while (n > 0) { + if (deque->leftindex == 0) { + block *b = newblock(NULL, deque->leftblock, len); + if (b == NULL) + return -1; + assert(deque->leftblock->leftlink == NULL); + deque->leftblock->leftlink = b; + deque->leftblock = b; + deque->leftindex = BLOCKLEN; + } + assert(deque->leftindex > 0); + + m = n; + if (m > deque->rightindex + 1) + m = deque->rightindex + 1; + if (m > deque->leftindex) + m = deque->leftindex; + assert (m > 0 && m <= len); + memcpy(&deque->leftblock->data[deque->leftindex - m], + &deque->rightblock->data[deque->rightindex + 1 - m], + m * sizeof(PyObject *)); + deque->rightindex -= m; + deque->leftindex -= m; + n -= m; + + if (deque->rightindex == -1) { + block *prevblock = deque->rightblock->leftlink; + assert(deque->rightblock != NULL); + assert(deque->leftblock != deque->rightblock); + freeblock(deque->rightblock); + prevblock->rightlink = NULL; + deque->rightblock = prevblock; + deque->rightindex = BLOCKLEN - 1; + } } - for (i=0 ; i>n ; i--) { - item = deque_popleft(deque, NULL); - assert (item != NULL); - rv = deque_append(deque, item); - Py_DECREF(item); - if (rv == NULL) - return -1; - Py_DECREF(rv); + while (n < 0) { + if (deque->rightindex == BLOCKLEN - 1) { + block *b = newblock(deque->rightblock, NULL, len); + if (b == NULL) + return -1; + 
assert(deque->rightblock->rightlink == NULL); + deque->rightblock->rightlink = b; + deque->rightblock = b; + deque->rightindex = -1; + } + assert (deque->rightindex < BLOCKLEN - 1); + + m = -n; + if (m > BLOCKLEN - deque->leftindex) + m = BLOCKLEN - deque->leftindex; + if (m > BLOCKLEN - 1 - deque->rightindex) + m = BLOCKLEN - 1 - deque->rightindex; + assert (m > 0 && m <= len); + memcpy(&deque->rightblock->data[deque->rightindex + 1], + &deque->leftblock->data[deque->leftindex], + m * sizeof(PyObject *)); + deque->leftindex += m; + deque->rightindex += m; + n += m; + + if (deque->leftindex == BLOCKLEN) { + block *nextblock = deque->leftblock->rightlink; + assert(deque->leftblock != deque->rightblock); + freeblock(deque->leftblock); + assert(nextblock != NULL); + nextblock->leftlink = NULL; + deque->leftblock = nextblock; + deque->leftindex = 0; + } } return 0; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 02:02:21 2013 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 10 Feb 2013 02:02:21 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge?= Message-ID: <3Z3X0Y6p0SzSc8@mail.python.org> http://hg.python.org/cpython/rev/bd6a2fcc7711 changeset: 82116:bd6a2fcc7711 parent: 82113:f13bb1e40fbc parent: 82115:5a410900e2b6 user: Raymond Hettinger date: Sat Feb 09 20:01:59 2013 -0500 summary: Merge files: -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Sun Feb 10 06:00:18 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Sun, 10 Feb 2013 06:00:18 +0100 Subject: [Python-checkins] Daily reference leaks (bd6a2fcc7711): sum=2 Message-ID: results for bd6a2fcc7711 on branch "default" -------------------------------------------- test_concurrent_futures leaked [-2, 3, 1] memory blocks, sum=2 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflog1DsIXd', '-x'] 
From python-checkins at python.org Sun Feb 10 11:02:17 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 11:02:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Import_shutil_?= =?utf-8?q?for_restore=5Ftest=5Fsupport=5FTESTFN=28=29=2E?= Message-ID: <3Z3lzY2bRmzSb8@mail.python.org> http://hg.python.org/cpython/rev/3379d9efda6b changeset: 82117:3379d9efda6b branch: 2.7 parent: 82114:3afa4c8eee1e user: Serhiy Storchaka date: Sun Feb 10 12:01:31 2013 +0200 summary: Import shutil for restore_test_support_TESTFN(). files: Lib/test/regrtest.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/regrtest.py b/Lib/test/regrtest.py --- a/Lib/test/regrtest.py +++ b/Lib/test/regrtest.py @@ -158,6 +158,7 @@ import os import random import re +import shutil import sys import time import traceback @@ -943,7 +944,6 @@ return FAILED, test_time def cleanup_test_droppings(testname, verbose): - import shutil import stat import gc -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 11:27:47 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 11:27:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzY5NzU6?= =?utf-8?q?_os=2Epath=2Erealpath=28=29_now_correctly_resolves_multiple_nes?= =?utf-8?q?ted_symlinks?= Message-ID: <3Z3mXz0ZS9zPy0@mail.python.org> http://hg.python.org/cpython/rev/6ec6dbf787f4 changeset: 82118:6ec6dbf787f4 branch: 2.7 user: Serhiy Storchaka date: Sun Feb 10 12:21:49 2013 +0200 summary: Issue #6975: os.path.realpath() now correctly resolves multiple nested symlinks on POSIX platforms. 
files: Lib/posixpath.py | 75 +++++++++++++------------ Lib/test/test_posixpath.py | 49 ++++++++++++++++ Misc/NEWS | 3 + 3 files changed, 92 insertions(+), 35 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -364,46 +364,51 @@ def realpath(filename): """Return the canonical path of the specified filename, eliminating any symbolic links encountered in the path.""" - if isabs(filename): - bits = ['/'] + filename.split('/')[1:] - else: - bits = [''] + filename.split('/') + path, ok = _joinrealpath('', filename, {}) + return abspath(path) - for i in range(2, len(bits)+1): - component = join(*bits[0:i]) - # Resolve symbolic links. - if islink(component): - resolved = _resolve_link(component) - if resolved is None: - # Infinite loop -- return original component + rest of the path - return abspath(join(*([component] + bits[i:]))) +# Join two paths, normalizing ang eliminating any symbolic links +# encountered in the second path. +def _joinrealpath(path, rest, seen): + if isabs(rest): + rest = rest[1:] + path = sep + + while rest: + name, _, rest = rest.partition(sep) + if not name or name == curdir: + # current dir + continue + if name == pardir: + # parent dir + if path: + path = dirname(path) else: - newpath = join(*([resolved] + bits[i:])) - return realpath(newpath) + path = name + continue + newpath = join(path, name) + if not islink(newpath): + path = newpath + continue + # Resolve the symbolic link + if newpath in seen: + # Already seen this path + path = seen[newpath] + if path is not None: + # use cached value + continue + # The symlink is not resolved, so we must have a symlink loop. + # Return already resolved part + rest of the path unchanged. 
+ return join(newpath, rest), False + seen[newpath] = None # not resolved symlink + path, ok = _joinrealpath(path, os.readlink(newpath), seen) + if not ok: + return join(path, rest), False + seen[newpath] = path # resolved symlink - return abspath(filename) + return path, True -def _resolve_link(path): - """Internal helper function. Takes a path and follows symlinks - until we either arrive at something that isn't a symlink, or - encounter a path we've seen before (meaning that there's a loop). - """ - paths_seen = set() - while islink(path): - if path in paths_seen: - # Already seen this path, so we must have a symlink loop - return None - paths_seen.add(path) - # Resolve where the link points to - resolved = os.readlink(path) - if not isabs(resolved): - dir = dirname(path) - path = normpath(join(dir, resolved)) - else: - path = normpath(resolved) - return path - supports_unicode_filenames = (sys.platform == 'darwin') def relpath(path, start=curdir): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -236,6 +236,22 @@ self.assertEqual(realpath(ABSTFN+"1"), ABSTFN+"1") self.assertEqual(realpath(ABSTFN+"2"), ABSTFN+"2") + self.assertEqual(realpath(ABSTFN+"1/x"), ABSTFN+"1/x") + self.assertEqual(realpath(ABSTFN+"1/.."), dirname(ABSTFN)) + self.assertEqual(realpath(ABSTFN+"1/../x"), dirname(ABSTFN) + "/x") + os.symlink(ABSTFN+"x", ABSTFN+"y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "y"), + ABSTFN + "y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "1"), + ABSTFN + "1") + + os.symlink(basename(ABSTFN) + "a/b", ABSTFN+"a") + self.assertEqual(realpath(ABSTFN+"a"), ABSTFN+"a/b") + + os.symlink("../" + basename(dirname(ABSTFN)) + "/" + + basename(ABSTFN) + "c", ABSTFN+"c") + self.assertEqual(realpath(ABSTFN+"c"), ABSTFN+"c") + # Test using relative path as well. 
os.chdir(dirname(ABSTFN)) self.assertEqual(realpath(basename(ABSTFN)), ABSTFN) @@ -244,6 +260,39 @@ test_support.unlink(ABSTFN) test_support.unlink(ABSTFN+"1") test_support.unlink(ABSTFN+"2") + test_support.unlink(ABSTFN+"y") + test_support.unlink(ABSTFN+"c") + + def test_realpath_repeated_indirect_symlinks(self): + # Issue #6975. + try: + os.mkdir(ABSTFN) + os.symlink('../' + basename(ABSTFN), ABSTFN + '/self') + os.symlink('self/self/self', ABSTFN + '/link') + self.assertEqual(realpath(ABSTFN + '/link'), ABSTFN) + finally: + test_support.unlink(ABSTFN + '/self') + test_support.unlink(ABSTFN + '/link') + safe_rmdir(ABSTFN) + + def test_realpath_deep_recursion(self): + depth = 10 + old_path = abspath('.') + try: + os.mkdir(ABSTFN) + for i in range(depth): + os.symlink('/'.join(['%d' % i] * 10), ABSTFN + '/%d' % (i + 1)) + os.symlink('.', ABSTFN + '/0') + self.assertEqual(realpath(ABSTFN + '/%d' % depth), ABSTFN) + + # Test using relative path as well. + os.chdir(ABSTFN) + self.assertEqual(realpath('%d' % depth), ABSTFN) + finally: + os.chdir(old_path) + for i in range(depth + 1): + test_support.unlink(ABSTFN + '/%d' % i) + safe_rmdir(ABSTFN) def test_realpath_resolve_parents(self): # We also need to resolve any symlinks in the parents of a relative diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,9 @@ Library ------- +- Issue #6975: os.path.realpath() now correctly resolves multiple nested + symlinks on POSIX platforms. + - Issue #17156: pygettext.py now correctly escapes non-ascii characters. 
- Issue #7358: cStringIO.StringIO now supports writing to and reading from -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 11:27:48 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 11:27:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzY5NzU6?= =?utf-8?q?_os=2Epath=2Erealpath=28=29_now_correctly_resolves_multiple_nes?= =?utf-8?q?ted_symlinks?= Message-ID: <3Z3mY04wGZzQ8K@mail.python.org> http://hg.python.org/cpython/rev/c5f4fa02fc86 changeset: 82119:c5f4fa02fc86 branch: 3.2 parent: 82110:38830281d43b user: Serhiy Storchaka date: Sun Feb 10 12:22:07 2013 +0200 summary: Issue #6975: os.path.realpath() now correctly resolves multiple nested symlinks on POSIX platforms. files: Lib/posixpath.py | 84 ++++++++++++++----------- Lib/test/test_posixpath.py | 55 +++++++++++++++++ Misc/NEWS | 3 + 3 files changed, 104 insertions(+), 38 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -390,52 +390,60 @@ def realpath(filename): """Return the canonical path of the specified filename, eliminating any symbolic links encountered in the path.""" - if isinstance(filename, bytes): + path, ok = _joinrealpath(filename[:0], filename, {}) + return abspath(path) + +# Join two paths, normalizing ang eliminating any symbolic links +# encountered in the second path. +def _joinrealpath(path, rest, seen): + if isinstance(path, bytes): sep = b'/' - empty = b'' + curdir = b'.' + pardir = b'..' else: sep = '/' - empty = '' - if isabs(filename): - bits = [sep] + filename.split(sep)[1:] - else: - bits = [empty] + filename.split(sep) + curdir = '.' + pardir = '..' - for i in range(2, len(bits)+1): - component = join(*bits[0:i]) - # Resolve symbolic links. 
- if islink(component): - resolved = _resolve_link(component) - if resolved is None: - # Infinite loop -- return original component + rest of the path - return abspath(join(*([component] + bits[i:]))) + if isabs(rest): + rest = rest[1:] + path = sep + + while rest: + name, _, rest = rest.partition(sep) + if not name or name == curdir: + # current dir + continue + if name == pardir: + # parent dir + if path: + path = dirname(path) else: - newpath = join(*([resolved] + bits[i:])) - return realpath(newpath) + path = name + continue + newpath = join(path, name) + if not islink(newpath): + path = newpath + continue + # Resolve the symbolic link + if newpath in seen: + # Already seen this path + path = seen[newpath] + if path is not None: + # use cached value + continue + # The symlink is not resolved, so we must have a symlink loop. + # Return already resolved part + rest of the path unchanged. + return join(newpath, rest), False + seen[newpath] = None # not resolved symlink + path, ok = _joinrealpath(path, os.readlink(newpath), seen) + if not ok: + return join(path, rest), False + seen[newpath] = path # resolved symlink - return abspath(filename) + return path, True -def _resolve_link(path): - """Internal helper function. Takes a path and follows symlinks - until we either arrive at something that isn't a symlink, or - encounter a path we've seen before (meaning that there's a loop). 
- """ - paths_seen = set() - while islink(path): - if path in paths_seen: - # Already seen this path, so we must have a symlink loop - return None - paths_seen.add(path) - # Resolve where the link points to - resolved = os.readlink(path) - if not isabs(resolved): - dir = dirname(path) - path = normpath(join(dir, resolved)) - else: - path = normpath(resolved) - return path - supports_unicode_filenames = (sys.platform == 'darwin') def relpath(path, start=None): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -375,6 +375,22 @@ self.assertEqual(realpath(ABSTFN+"1"), ABSTFN+"1") self.assertEqual(realpath(ABSTFN+"2"), ABSTFN+"2") + self.assertEqual(realpath(ABSTFN+"1/x"), ABSTFN+"1/x") + self.assertEqual(realpath(ABSTFN+"1/.."), dirname(ABSTFN)) + self.assertEqual(realpath(ABSTFN+"1/../x"), dirname(ABSTFN) + "/x") + os.symlink(ABSTFN+"x", ABSTFN+"y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "y"), + ABSTFN + "y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "1"), + ABSTFN + "1") + + os.symlink(basename(ABSTFN) + "a/b", ABSTFN+"a") + self.assertEqual(realpath(ABSTFN+"a"), ABSTFN+"a/b") + + os.symlink("../" + basename(dirname(ABSTFN)) + "/" + + basename(ABSTFN) + "c", ABSTFN+"c") + self.assertEqual(realpath(ABSTFN+"c"), ABSTFN+"c") + # Test using relative path as well. os.chdir(dirname(ABSTFN)) self.assertEqual(realpath(basename(ABSTFN)), ABSTFN) @@ -383,6 +399,45 @@ support.unlink(ABSTFN) support.unlink(ABSTFN+"1") support.unlink(ABSTFN+"2") + support.unlink(ABSTFN+"y") + support.unlink(ABSTFN+"c") + + @unittest.skipUnless(hasattr(os, "symlink"), + "Missing symlink implementation") + @skip_if_ABSTFN_contains_backslash + def test_realpath_repeated_indirect_symlinks(self): + # Issue #6975. 
+ try: + os.mkdir(ABSTFN) + os.symlink('../' + basename(ABSTFN), ABSTFN + '/self') + os.symlink('self/self/self', ABSTFN + '/link') + self.assertEqual(realpath(ABSTFN + '/link'), ABSTFN) + finally: + support.unlink(ABSTFN + '/self') + support.unlink(ABSTFN + '/link') + safe_rmdir(ABSTFN) + + @unittest.skipUnless(hasattr(os, "symlink"), + "Missing symlink implementation") + @skip_if_ABSTFN_contains_backslash + def test_realpath_deep_recursion(self): + depth = 10 + old_path = abspath('.') + try: + os.mkdir(ABSTFN) + for i in range(depth): + os.symlink('/'.join(['%d' % i] * 10), ABSTFN + '/%d' % (i + 1)) + os.symlink('.', ABSTFN + '/0') + self.assertEqual(realpath(ABSTFN + '/%d' % depth), ABSTFN) + + # Test using relative path as well. + os.chdir(ABSTFN) + self.assertEqual(realpath('%d' % depth), ABSTFN) + finally: + os.chdir(old_path) + for i in range(depth + 1): + support.unlink(ABSTFN + '/%d' % i) + safe_rmdir(ABSTFN) @unittest.skipUnless(hasattr(os, "symlink"), "Missing symlink implementation") diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -218,6 +218,9 @@ Library ------- +- Issue #6975: os.path.realpath() now correctly resolves multiple nested + symlinks on POSIX platforms. + - Issue #17156: pygettext.py now uses an encoding of source file and correctly writes and escapes non-ascii characters. 
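[Editorial note] The guarantee provided by the new ``_joinrealpath()`` helper can be exercised from Python directly. This POSIX-only sketch recreates the ``test_realpath_repeated_indirect_symlinks`` scenario from the patch in a throwaway temporary directory (the ``base``/``d`` names are illustrative); pre-fix versions of ``realpath()`` mis-resolved such chains of indirect symlinks:

```python
import os
import shutil
import tempfile

# d/self -> ../d   (points back at the directory containing it)
# d/link -> self/self/self  (chains through the indirect link 3 times)
base = tempfile.mkdtemp()
d = os.path.join(base, 'd')
os.mkdir(d)
os.symlink('../d', os.path.join(d, 'self'))
os.symlink('self/self/self', os.path.join(d, 'link'))

# base itself may live behind symlinks (e.g. /tmp on some systems),
# so compare against the resolved form of d rather than d itself.
expected = os.path.realpath(d)
resolved = os.path.realpath(os.path.join(d, 'link'))
assert resolved == expected

shutil.rmtree(base)
```

The ``seen`` dictionary in ``_joinrealpath()`` is what makes this terminate: a symlink currently being resolved is recorded as ``None``, so only a genuine cycle (revisiting an unresolved link) aborts resolution, while a merely repeated link reuses its cached target.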
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 11:27:50 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 11:27:50 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=236975=3A_os=2Epath=2Erealpath=28=29_now_correctly_reso?= =?utf-8?q?lves_multiple_nested_symlinks?= Message-ID: <3Z3mY21jRjzQKm@mail.python.org> http://hg.python.org/cpython/rev/bfe9526606e2 changeset: 82120:bfe9526606e2 branch: 3.3 parent: 82115:5a410900e2b6 parent: 82119:c5f4fa02fc86 user: Serhiy Storchaka date: Sun Feb 10 12:23:10 2013 +0200 summary: Issue #6975: os.path.realpath() now correctly resolves multiple nested symlinks on POSIX platforms. files: Lib/posixpath.py | 84 ++++++++++++++----------- Lib/test/test_posixpath.py | 55 +++++++++++++++++ Misc/NEWS | 3 + 3 files changed, 104 insertions(+), 38 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -391,52 +391,60 @@ def realpath(filename): """Return the canonical path of the specified filename, eliminating any symbolic links encountered in the path.""" - if isinstance(filename, bytes): + path, ok = _joinrealpath(filename[:0], filename, {}) + return abspath(path) + +# Join two paths, normalizing ang eliminating any symbolic links +# encountered in the second path. +def _joinrealpath(path, rest, seen): + if isinstance(path, bytes): sep = b'/' - empty = b'' + curdir = b'.' + pardir = b'..' else: sep = '/' - empty = '' - if isabs(filename): - bits = [sep] + filename.split(sep)[1:] - else: - bits = [empty] + filename.split(sep) + curdir = '.' + pardir = '..' - for i in range(2, len(bits)+1): - component = join(*bits[0:i]) - # Resolve symbolic links. 
- if islink(component): - resolved = _resolve_link(component) - if resolved is None: - # Infinite loop -- return original component + rest of the path - return abspath(join(*([component] + bits[i:]))) + if isabs(rest): + rest = rest[1:] + path = sep + + while rest: + name, _, rest = rest.partition(sep) + if not name or name == curdir: + # current dir + continue + if name == pardir: + # parent dir + if path: + path = dirname(path) else: - newpath = join(*([resolved] + bits[i:])) - return realpath(newpath) + path = name + continue + newpath = join(path, name) + if not islink(newpath): + path = newpath + continue + # Resolve the symbolic link + if newpath in seen: + # Already seen this path + path = seen[newpath] + if path is not None: + # use cached value + continue + # The symlink is not resolved, so we must have a symlink loop. + # Return already resolved part + rest of the path unchanged. + return join(newpath, rest), False + seen[newpath] = None # not resolved symlink + path, ok = _joinrealpath(path, os.readlink(newpath), seen) + if not ok: + return join(path, rest), False + seen[newpath] = path # resolved symlink - return abspath(filename) + return path, True -def _resolve_link(path): - """Internal helper function. Takes a path and follows symlinks - until we either arrive at something that isn't a symlink, or - encounter a path we've seen before (meaning that there's a loop). 
- """ - paths_seen = set() - while islink(path): - if path in paths_seen: - # Already seen this path, so we must have a symlink loop - return None - paths_seen.add(path) - # Resolve where the link points to - resolved = os.readlink(path) - if not isabs(resolved): - dir = dirname(path) - path = normpath(join(dir, resolved)) - else: - path = normpath(resolved) - return path - supports_unicode_filenames = (sys.platform == 'darwin') def relpath(path, start=None): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -377,6 +377,22 @@ self.assertEqual(realpath(ABSTFN+"1"), ABSTFN+"1") self.assertEqual(realpath(ABSTFN+"2"), ABSTFN+"2") + self.assertEqual(realpath(ABSTFN+"1/x"), ABSTFN+"1/x") + self.assertEqual(realpath(ABSTFN+"1/.."), dirname(ABSTFN)) + self.assertEqual(realpath(ABSTFN+"1/../x"), dirname(ABSTFN) + "/x") + os.symlink(ABSTFN+"x", ABSTFN+"y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "y"), + ABSTFN + "y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "1"), + ABSTFN + "1") + + os.symlink(basename(ABSTFN) + "a/b", ABSTFN+"a") + self.assertEqual(realpath(ABSTFN+"a"), ABSTFN+"a/b") + + os.symlink("../" + basename(dirname(ABSTFN)) + "/" + + basename(ABSTFN) + "c", ABSTFN+"c") + self.assertEqual(realpath(ABSTFN+"c"), ABSTFN+"c") + # Test using relative path as well. os.chdir(dirname(ABSTFN)) self.assertEqual(realpath(basename(ABSTFN)), ABSTFN) @@ -385,6 +401,45 @@ support.unlink(ABSTFN) support.unlink(ABSTFN+"1") support.unlink(ABSTFN+"2") + support.unlink(ABSTFN+"y") + support.unlink(ABSTFN+"c") + + @unittest.skipUnless(hasattr(os, "symlink"), + "Missing symlink implementation") + @skip_if_ABSTFN_contains_backslash + def test_realpath_repeated_indirect_symlinks(self): + # Issue #6975. 
+ try: + os.mkdir(ABSTFN) + os.symlink('../' + basename(ABSTFN), ABSTFN + '/self') + os.symlink('self/self/self', ABSTFN + '/link') + self.assertEqual(realpath(ABSTFN + '/link'), ABSTFN) + finally: + support.unlink(ABSTFN + '/self') + support.unlink(ABSTFN + '/link') + safe_rmdir(ABSTFN) + + @unittest.skipUnless(hasattr(os, "symlink"), + "Missing symlink implementation") + @skip_if_ABSTFN_contains_backslash + def test_realpath_deep_recursion(self): + depth = 10 + old_path = abspath('.') + try: + os.mkdir(ABSTFN) + for i in range(depth): + os.symlink('/'.join(['%d' % i] * 10), ABSTFN + '/%d' % (i + 1)) + os.symlink('.', ABSTFN + '/0') + self.assertEqual(realpath(ABSTFN + '/%d' % depth), ABSTFN) + + # Test using relative path as well. + os.chdir(ABSTFN) + self.assertEqual(realpath('%d' % depth), ABSTFN) + finally: + os.chdir(old_path) + for i in range(depth + 1): + support.unlink(ABSTFN + '/%d' % i) + safe_rmdir(ABSTFN) @unittest.skipUnless(hasattr(os, "symlink"), "Missing symlink implementation") diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -172,6 +172,9 @@ Library ------- +- Issue #6975: os.path.realpath() now correctly resolves multiple nested + symlinks on POSIX platforms. + - Issue #17156: pygettext.py now uses an encoding of source file and correctly writes and escapes non-ascii characters. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 11:27:51 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 11:27:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=236975=3A_os=2Epath=2Erealpath=28=29_now_correctl?= =?utf-8?q?y_resolves_multiple_nested_symlinks?= Message-ID: <3Z3mY35WL3zQFQ@mail.python.org> http://hg.python.org/cpython/rev/f42cabe6ccb5 changeset: 82121:f42cabe6ccb5 parent: 82116:bd6a2fcc7711 parent: 82120:bfe9526606e2 user: Serhiy Storchaka date: Sun Feb 10 12:24:06 2013 +0200 summary: Issue #6975: os.path.realpath() now correctly resolves multiple nested symlinks on POSIX platforms. files: Lib/posixpath.py | 84 ++++++++++++++----------- Lib/test/test_posixpath.py | 55 +++++++++++++++++ Misc/NEWS | 3 + 3 files changed, 104 insertions(+), 38 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -363,52 +363,60 @@ def realpath(filename): """Return the canonical path of the specified filename, eliminating any symbolic links encountered in the path.""" - if isinstance(filename, bytes): + path, ok = _joinrealpath(filename[:0], filename, {}) + return abspath(path) + +# Join two paths, normalizing ang eliminating any symbolic links +# encountered in the second path. +def _joinrealpath(path, rest, seen): + if isinstance(path, bytes): sep = b'/' - empty = b'' + curdir = b'.' + pardir = b'..' else: sep = '/' - empty = '' - if isabs(filename): - bits = [sep] + filename.split(sep)[1:] - else: - bits = [empty] + filename.split(sep) + curdir = '.' + pardir = '..' - for i in range(2, len(bits)+1): - component = join(*bits[0:i]) - # Resolve symbolic links. 
- if islink(component): - resolved = _resolve_link(component) - if resolved is None: - # Infinite loop -- return original component + rest of the path - return abspath(join(*([component] + bits[i:]))) + if isabs(rest): + rest = rest[1:] + path = sep + + while rest: + name, _, rest = rest.partition(sep) + if not name or name == curdir: + # current dir + continue + if name == pardir: + # parent dir + if path: + path = dirname(path) else: - newpath = join(*([resolved] + bits[i:])) - return realpath(newpath) + path = name + continue + newpath = join(path, name) + if not islink(newpath): + path = newpath + continue + # Resolve the symbolic link + if newpath in seen: + # Already seen this path + path = seen[newpath] + if path is not None: + # use cached value + continue + # The symlink is not resolved, so we must have a symlink loop. + # Return already resolved part + rest of the path unchanged. + return join(newpath, rest), False + seen[newpath] = None # not resolved symlink + path, ok = _joinrealpath(path, os.readlink(newpath), seen) + if not ok: + return join(path, rest), False + seen[newpath] = path # resolved symlink - return abspath(filename) + return path, True -def _resolve_link(path): - """Internal helper function. Takes a path and follows symlinks - until we either arrive at something that isn't a symlink, or - encounter a path we've seen before (meaning that there's a loop). 
- """ - paths_seen = set() - while islink(path): - if path in paths_seen: - # Already seen this path, so we must have a symlink loop - return None - paths_seen.add(path) - # Resolve where the link points to - resolved = os.readlink(path) - if not isabs(resolved): - dir = dirname(path) - path = normpath(join(dir, resolved)) - else: - path = normpath(resolved) - return path - supports_unicode_filenames = (sys.platform == 'darwin') def relpath(path, start=None): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -320,6 +320,22 @@ self.assertEqual(realpath(ABSTFN+"1"), ABSTFN+"1") self.assertEqual(realpath(ABSTFN+"2"), ABSTFN+"2") + self.assertEqual(realpath(ABSTFN+"1/x"), ABSTFN+"1/x") + self.assertEqual(realpath(ABSTFN+"1/.."), dirname(ABSTFN)) + self.assertEqual(realpath(ABSTFN+"1/../x"), dirname(ABSTFN) + "/x") + os.symlink(ABSTFN+"x", ABSTFN+"y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "y"), + ABSTFN + "y") + self.assertEqual(realpath(ABSTFN+"1/../" + basename(ABSTFN) + "1"), + ABSTFN + "1") + + os.symlink(basename(ABSTFN) + "a/b", ABSTFN+"a") + self.assertEqual(realpath(ABSTFN+"a"), ABSTFN+"a/b") + + os.symlink("../" + basename(dirname(ABSTFN)) + "/" + + basename(ABSTFN) + "c", ABSTFN+"c") + self.assertEqual(realpath(ABSTFN+"c"), ABSTFN+"c") + # Test using relative path as well. os.chdir(dirname(ABSTFN)) self.assertEqual(realpath(basename(ABSTFN)), ABSTFN) @@ -328,6 +344,45 @@ support.unlink(ABSTFN) support.unlink(ABSTFN+"1") support.unlink(ABSTFN+"2") + support.unlink(ABSTFN+"y") + support.unlink(ABSTFN+"c") + + @unittest.skipUnless(hasattr(os, "symlink"), + "Missing symlink implementation") + @skip_if_ABSTFN_contains_backslash + def test_realpath_repeated_indirect_symlinks(self): + # Issue #6975. 
+ try: + os.mkdir(ABSTFN) + os.symlink('../' + basename(ABSTFN), ABSTFN + '/self') + os.symlink('self/self/self', ABSTFN + '/link') + self.assertEqual(realpath(ABSTFN + '/link'), ABSTFN) + finally: + support.unlink(ABSTFN + '/self') + support.unlink(ABSTFN + '/link') + safe_rmdir(ABSTFN) + + @unittest.skipUnless(hasattr(os, "symlink"), + "Missing symlink implementation") + @skip_if_ABSTFN_contains_backslash + def test_realpath_deep_recursion(self): + depth = 10 + old_path = abspath('.') + try: + os.mkdir(ABSTFN) + for i in range(depth): + os.symlink('/'.join(['%d' % i] * 10), ABSTFN + '/%d' % (i + 1)) + os.symlink('.', ABSTFN + '/0') + self.assertEqual(realpath(ABSTFN + '/%d' % depth), ABSTFN) + + # Test using relative path as well. + os.chdir(ABSTFN) + self.assertEqual(realpath('%d' % depth), ABSTFN) + finally: + os.chdir(old_path) + for i in range(depth + 1): + support.unlink(ABSTFN + '/%d' % i) + safe_rmdir(ABSTFN) @unittest.skipUnless(hasattr(os, "symlink"), "Missing symlink implementation") diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -244,6 +244,9 @@ Library ------- +- Issue #6975: os.path.realpath() now correctly resolves multiple nested + symlinks on POSIX platforms. + - Issue #13773: sqlite3.connect() gets a new `uri` parameter to pass the filename as a URI, allowing to pass custom options. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 12:48:00 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 10 Feb 2013 12:48:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Reference_implementation_for_?= =?utf-8?q?PEP_422?= Message-ID: <3Z3pKX3JqMzPnW@mail.python.org> http://hg.python.org/peps/rev/dc9405a1eb06 changeset: 4729:dc9405a1eb06 user: Nick Coghlan date: Sun Feb 10 21:47:22 2013 +1000 summary: Reference implementation for PEP 422 files: pep-0422.txt | 11 ++++++++++- 1 files changed, 10 insertions(+), 1 deletions(-) diff --git a/pep-0422.txt b/pep-0422.txt --- a/pep-0422.txt +++ b/pep-0422.txt @@ -325,6 +325,12 @@ ``super()``), and could not make use of those features themselves. +Reference Implementation +======================== + +Daniel Urban has posted a reference implementation to the `issue tracker`_. + + References ========== @@ -337,12 +343,15 @@ .. _Zope's ExtensionClass: http://docs.zope.org/zope_secrets/extensionclass.html +.. _issue tracker: + http://bugs.python.org/issue17044 + Copyright ========= This document has been placed in the public domain. - + .. 
Local Variables: mode: indented-text -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 10 13:14:17 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 10 Feb 2013 13:14:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Add_Daniel_as_PEP_422_co-auth?= =?utf-8?q?or=2C_misc_fixes?= Message-ID: <3Z3pvs1p8JzRWQ@mail.python.org> http://hg.python.org/peps/rev/8103042eaa56 changeset: 4730:8103042eaa56 user: Nick Coghlan date: Sun Feb 10 22:13:58 2013 +1000 summary: Add Daniel as PEP 422 co-author, misc fixes files: pep-0422.txt | 37 ++++++++++++++++++++++++++----------- 1 files changed, 26 insertions(+), 11 deletions(-) diff --git a/pep-0422.txt b/pep-0422.txt --- a/pep-0422.txt +++ b/pep-0422.txt @@ -2,7 +2,8 @@ Title: Simple class initialisation hook Version: $Revision$ Last-Modified: $Date$ -Author: Nick Coghlan +Author: Nick Coghlan , + Daniel Urban Status: Draft Type: Standards Track Content-Type: text/x-rst @@ -19,7 +20,7 @@ by setting the ``__metaclass__`` attribute in the class body. While doing this implicitly from called code required the use of an implementation detail (specifically, ``sys._getframes()``), it could also be done explicitly in a -fully supported fashion (for example, by passing ``locals()`` to an +fully supported fashion (for example, by passing ``locals()`` to a function that calculated a suitable ``__metaclass__`` value) There is currently no corresponding mechanism in Python 3 that allows the @@ -44,7 +45,7 @@ While in many cases these two meanings end up referring to one and the same object, there are two situations where that is not the case: -* If the metaclass hint refers to a subclass of ``type``, then it is +* If the metaclass hint refers to an instance of ``type``, then it is considered as a candidate metaclass along with the metaclasses of all of the parents of the class being defined. 
If a more appropriate metaclass is found amongst the candidates, then it will be used instead of the one @@ -114,7 +115,7 @@ # This is invoked after the class is created, but before any # explicit decorators are called # The usual super() mechanisms are used to correctly support - # multiple inheritance. The decorator style invocation helps + # multiple inheritance. The class decorator style signature helps # ensure that invoking the parent class is as simple as possible. If present on the created object, this new hook will be called by the class @@ -125,10 +126,17 @@ If a metaclass wishes to block class initialisation for some reason, it must arrange for ``cls.__init_class__`` to trigger ``AttributeError``. +Note, that when ``__init_class__`` is called, the name of the class is not +bound to the new class object yet. As a consequence, the two argument form +of ``super()`` cannot be used to call methods (e.g., ``super(Example, cls)`` +wouldn't work in the example above). However, the zero argument form of +``super()`` works as expected, since the ``__class__`` reference is already +initialised. + This general proposal is not a new idea (it was first suggested for inclusion in the language definition `more than 10 years ago`_, and a similar mechanism has long been supported by `Zope's ExtensionClass`_), -but I believe the situation has changed sufficiently in recent years that +but the situation has changed sufficiently in recent years that the idea is worth reconsidering. @@ -156,7 +164,7 @@ class object) clearly distinct in your mind. Even when you know the rules, it's still easy to make a mistake if you're not being extremely careful. An earlier version of this PEP actually included such a mistake: it -stated "instance of type" for a constraint that is actually "subclass of +stated "subclass of type" for a constraint that is actually "instance of type". 
Understanding the proposed class initialisation hook only requires @@ -278,17 +286,24 @@ Using the current version of the PEP, the scheme originally proposed could be implemented as:: - class DynamicDecorators: + class DynamicDecorators(Base): @classmethod def __init_class__(cls): - super(DynamicDecorators, cls).__init_class__() + # Process any classes later in the MRO + try: + mro_chain = super().__init_class__ + except AttributeError: + pass + else: + mro_chain() + # Process any __decorators__ attributes in the MRO for entry in reversed(cls.mro()): decorators = entry.__dict__.get("__decorators__", ()) for deco in reversed(decorators): cls = deco(cls) -Any subclasses of this type would automatically have the contents of any -``__decorators__`` attributes processed and invoked. +Any subclasses of ``DynamicDecorators`` would then automatically have the +contents of any ``__decorators__`` attributes processed and invoked. The mechanism in the current PEP is considered superior, as many issues to do with ordering and the same decorator being invoked multiple times @@ -328,7 +343,7 @@ Reference Implementation ======================== -Daniel Urban has posted a reference implementation to the `issue tracker`_. +The reference implementation has been posted to the `issue tracker`_. 
References -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 10 13:25:55 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 10 Feb 2013 13:25:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Update_PEP_422_post_history?= Message-ID: <3Z3q9H52BNzSMw@mail.python.org> http://hg.python.org/peps/rev/0ecd91688454 changeset: 4731:0ecd91688454 user: Nick Coghlan date: Sun Feb 10 22:25:46 2013 +1000 summary: Update PEP 422 post history files: pep-0422.txt | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/pep-0422.txt b/pep-0422.txt --- a/pep-0422.txt +++ b/pep-0422.txt @@ -9,7 +9,7 @@ Content-Type: text/x-rst Created: 5-Jun-2012 Python-Version: 3.4 -Post-History: 5-Jun-2012 +Post-History: 5-Jun-2012, 10-Feb-2013 Abstract -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 10 13:38:01 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 13:38:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE0NzA1?= =?utf-8?q?48=3A_XMLGenerator_now_works_with_UTF-16_and_UTF-32_encodings?= =?utf-8?q?=2E?= Message-ID: <3Z3qRF22D6zPX5@mail.python.org> http://hg.python.org/cpython/rev/010b455de0e0 changeset: 82122:010b455de0e0 branch: 2.7 parent: 82118:6ec6dbf787f4 user: Serhiy Storchaka date: Sun Feb 10 14:26:08 2013 +0200 summary: Issue #1470548: XMLGenerator now works with UTF-16 and UTF-32 encodings. 
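The user-visible effect of this fix can be sketched against the Python 3 version of `xml.sax.saxutils` (the equivalent change landed on the 3.x branches in the commits that follow). Internally every write now goes through a `TextIOWrapper` using the `xmlcharrefreplace` error handler, so wide encodings work and unencodable characters degrade to numeric character references instead of raising `UnicodeEncodeError`:

```python
import io
from xml.sax.saxutils import XMLGenerator

# UTF-16LE output into a binary stream: the bytes come out in the
# requested encoding, declaration included.
buf = io.BytesIO()
gen = XMLGenerator(buf, encoding='utf-16le')
gen.startDocument()
gen.startElement('doc', {'a': '\u20ac'})
gen.characters('\u20ac')
gen.endElement('doc')
gen.endDocument()
text = buf.getvalue().decode('utf-16le')

# ASCII output: the euro sign cannot be encoded, so it becomes the
# character reference &#8364; via the xmlcharrefreplace error handler.
buf2 = io.BytesIO()
gen2 = XMLGenerator(buf2, encoding='ascii')
gen2.startDocument()
gen2.startElement('doc', {})
gen2.characters('\u20ac')
gen2.endElement('doc')
gen2.endDocument()
```

These are essentially the scenarios exercised by the new `test_xmlgen_encoding` and `test_xmlgen_unencodable` tests in the patch below.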
files: Lib/test/test_sax.py | 87 ++++++++++++++++++++++++---- Lib/xml/sax/saxutils.py | 82 ++++++++++++++++----------- Misc/NEWS | 2 + 3 files changed, 124 insertions(+), 47 deletions(-) diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py --- a/Lib/test/test_sax.py +++ b/Lib/test/test_sax.py @@ -14,6 +14,7 @@ from xml.sax.handler import feature_namespaces from xml.sax.xmlreader import InputSource, AttributesImpl, AttributesNSImpl from cStringIO import StringIO +import io import os.path import shutil import test.test_support as support @@ -170,9 +171,9 @@ start = '\n' -class XmlgenTest(unittest.TestCase): +class XmlgenTest: def test_xmlgen_basic(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() gen.startElement("doc", {}) @@ -182,7 +183,7 @@ self.assertEqual(result.getvalue(), start + "") def test_xmlgen_content(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -194,7 +195,7 @@ self.assertEqual(result.getvalue(), start + "huhei") def test_xmlgen_pi(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -206,7 +207,7 @@ self.assertEqual(result.getvalue(), start + "") def test_xmlgen_content_escape(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -219,7 +220,7 @@ start + "<huhei&") def test_xmlgen_attr_escape(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -238,8 +239,41 @@ "" "")) + def test_xmlgen_encoding(self): + encodings = ('iso-8859-15', 'utf-8', + 'utf-16be', 'utf-16le', + 'utf-32be', 'utf-32le') + for encoding in encodings: + result = self.ioclass() + gen = XMLGenerator(result, encoding=encoding) + + gen.startDocument() + gen.startElement("doc", {"a": u'\u20ac'}) + gen.characters(u"\u20ac") + gen.endElement("doc") + gen.endDocument() + + self.assertEqual(result.getvalue(), ( + u'\n' + 
u'\u20ac' % encoding + ).encode(encoding, 'xmlcharrefreplace')) + + def test_xmlgen_unencodable(self): + result = self.ioclass() + gen = XMLGenerator(result, encoding='ascii') + + gen.startDocument() + gen.startElement("doc", {"a": u'\u20ac'}) + gen.characters(u"\u20ac") + gen.endElement("doc") + gen.endDocument() + + self.assertEqual(result.getvalue(), + '\n' + '') + def test_xmlgen_ignorable(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -251,7 +285,7 @@ self.assertEqual(result.getvalue(), start + " ") def test_xmlgen_ns(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -269,7 +303,7 @@ ns_uri)) def test_1463026_1(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -280,7 +314,7 @@ self.assertEqual(result.getvalue(), start+'') def test_1463026_2(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -293,7 +327,7 @@ self.assertEqual(result.getvalue(), start+'') def test_1463026_3(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -321,7 +355,7 @@ parser = make_parser() parser.setFeature(feature_namespaces, True) - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) parser.setContentHandler(gen) parser.parse(test_xml) @@ -340,7 +374,7 @@ # # This test demonstrates the bug by direct manipulation of the # XMLGenerator. 
- result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -360,6 +394,29 @@ 'Hello' '')) + def test_no_close_file(self): + result = self.ioclass() + def func(out): + gen = XMLGenerator(out) + gen.startDocument() + gen.startElement("doc", {}) + func(result) + self.assertFalse(result.closed) + +class StringXmlgenTest(XmlgenTest, unittest.TestCase): + ioclass = StringIO + +class BytesIOXmlgenTest(XmlgenTest, unittest.TestCase): + ioclass = io.BytesIO + +class WriterXmlgenTest(XmlgenTest, unittest.TestCase): + class ioclass(list): + write = list.append + closed = False + + def getvalue(self): + return b''.join(self) + class XMLFilterBaseTest(unittest.TestCase): def test_filter_basic(self): @@ -804,7 +861,9 @@ def test_main(): run_unittest(MakeParserTest, SaxutilsTest, - XmlgenTest, + StringXmlgenTest, + BytesIOXmlgenTest, + WriterXmlgenTest, ExpatReaderTest, ErrorReportingTest, XmlReaderTest) diff --git a/Lib/xml/sax/saxutils.py b/Lib/xml/sax/saxutils.py --- a/Lib/xml/sax/saxutils.py +++ b/Lib/xml/sax/saxutils.py @@ -4,6 +4,7 @@ """ import os, urlparse, urllib, types +import io import sys import handler import xmlreader @@ -13,15 +14,6 @@ except AttributeError: _StringTypes = [types.StringType] -# See whether the xmlcharrefreplace error handler is -# supported -try: - from codecs import xmlcharrefreplace_errors - _error_handling = "xmlcharrefreplace" - del xmlcharrefreplace_errors -except ImportError: - _error_handling = "strict" - def __dict_replace(s, d): """Replace substrings of a string using a dictionary.""" for key, value in d.items(): @@ -82,25 +74,46 @@ return data +def _gettextwriter(out, encoding): + if out is None: + import sys + out = sys.stdout + + if isinstance(out, io.RawIOBase): + buffer = io.BufferedIOBase(out) + # Keep the original file open when the TextIOWrapper is + # destroyed + buffer.close = lambda: None + else: + # This is to handle passed objects that aren't in the + # IOBase hierarchy, but just have a 
write method + buffer = io.BufferedIOBase() + buffer.writable = lambda: True + buffer.write = out.write + try: + # TextIOWrapper uses this methods to determine + # if BOM (for UTF-16, etc) should be added + buffer.seekable = out.seekable + buffer.tell = out.tell + except AttributeError: + pass + # wrap a binary writer with TextIOWrapper + return io.TextIOWrapper(buffer, encoding=encoding, + errors='xmlcharrefreplace', + newline='\n') + class XMLGenerator(handler.ContentHandler): def __init__(self, out=None, encoding="iso-8859-1"): - if out is None: - import sys - out = sys.stdout handler.ContentHandler.__init__(self) - self._out = out + out = _gettextwriter(out, encoding) + self._write = out.write + self._flush = out.flush self._ns_contexts = [{}] # contains uri -> prefix dicts self._current_context = self._ns_contexts[-1] self._undeclared_ns_maps = [] self._encoding = encoding - def _write(self, text): - if isinstance(text, str): - self._out.write(text) - else: - self._out.write(text.encode(self._encoding, _error_handling)) - def _qname(self, name): """Builds a qualified name from a (ns_url, localname) pair""" if name[0]: @@ -121,9 +134,12 @@ # ContentHandler methods def startDocument(self): - self._write('\n' % + self._write(u'\n' % self._encoding) + def endDocument(self): + self._flush() + def startPrefixMapping(self, prefix, uri): self._ns_contexts.append(self._current_context.copy()) self._current_context[uri] = prefix @@ -134,39 +150,39 @@ del self._ns_contexts[-1] def startElement(self, name, attrs): - self._write('<' + name) + self._write(u'<' + name) for (name, value) in attrs.items(): - self._write(' %s=%s' % (name, quoteattr(value))) - self._write('>') + self._write(u' %s=%s' % (name, quoteattr(value))) + self._write(u'>') def endElement(self, name): - self._write('' % name) + self._write(u'' % name) def startElementNS(self, name, qname, attrs): - self._write('<' + self._qname(name)) + self._write(u'<' + self._qname(name)) for prefix, uri in 
self._undeclared_ns_maps: if prefix: - self._out.write(' xmlns:%s="%s"' % (prefix, uri)) + self._write(u' xmlns:%s="%s"' % (prefix, uri)) else: - self._out.write(' xmlns="%s"' % uri) + self._write(u' xmlns="%s"' % uri) self._undeclared_ns_maps = [] for (name, value) in attrs.items(): - self._write(' %s=%s' % (self._qname(name), quoteattr(value))) - self._write('>') + self._write(u' %s=%s' % (self._qname(name), quoteattr(value))) + self._write(u'>') def endElementNS(self, name, qname): - self._write('' % self._qname(name)) + self._write(u'' % self._qname(name)) def characters(self, content): - self._write(escape(content)) + self._write(escape(unicode(content))) def ignorableWhitespace(self, content): - self._write(content) + self._write(unicode(content)) def processingInstruction(self, target, data): - self._write('' % (target, data)) + self._write(u'' % (target, data)) class XMLFilterBase(xmlreader.XMLReader): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,8 @@ Library ------- +- Issue #1470548: XMLGenerator now works with UTF-16 and UTF-32 encodings. + - Issue #6975: os.path.realpath() now correctly resolves multiple nested symlinks on POSIX platforms. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 13:38:03 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 13:38:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE0NzA1?= =?utf-8?q?48=3A_XMLGenerator_now_works_with_binary_output_streams=2E?= Message-ID: <3Z3qRH0Tp3zSQH@mail.python.org> http://hg.python.org/cpython/rev/66f92f76b2ce changeset: 82123:66f92f76b2ce branch: 3.2 parent: 82119:c5f4fa02fc86 user: Serhiy Storchaka date: Sun Feb 10 14:29:52 2013 +0200 summary: Issue #1470548: XMLGenerator now works with binary output streams. 
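A notable trick in the `_gettextwriter()` helper added by this patch: an output object that merely has a `write` method (and is not in the `IOBase` hierarchy at all) is supported by grafting its `write` onto a bare `io.BufferedIOBase` instance, which `TextIOWrapper` will then happily drive. The same technique can be reproduced standalone (`Sink` is an invented name for illustration):

```python
import io

class Sink:
    """A duck-typed writer: nothing but a write() method."""
    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(data)


out = Sink()

# Patch a bare BufferedIOBase instance, exactly as _gettextwriter does,
# so that TextIOWrapper sees a writable binary stream.
buffer = io.BufferedIOBase()
buffer.writable = lambda: True
buffer.write = out.write

writer = io.TextIOWrapper(buffer, encoding='ascii',
                          errors='xmlcharrefreplace', newline='\n')
writer.write('<doc>\u20ac</doc>')
writer.flush()

data = b''.join(out.chunks)   # b'<doc>&#8364;</doc>'
```

This is what lets the patch's `WriterXmlgenTest` drive `XMLGenerator` with a plain list whose `write` is `list.append`.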
files: Lib/test/test_sax.py | 215 ++++++++++++++++++--------- Lib/xml/sax/saxutils.py | 67 +++++-- Misc/NEWS | 2 + 3 files changed, 192 insertions(+), 92 deletions(-) diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py --- a/Lib/test/test_sax.py +++ b/Lib/test/test_sax.py @@ -13,7 +13,7 @@ from xml.sax.expatreader import create_parser from xml.sax.handler import feature_namespaces from xml.sax.xmlreader import InputSource, AttributesImpl, AttributesNSImpl -from io import StringIO +from io import BytesIO, StringIO import os.path import shutil from test import support @@ -173,31 +173,29 @@ # ===== XMLGenerator -start = '\n' - -class XmlgenTest(unittest.TestCase): +class XmlgenTest: def test_xmlgen_basic(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() gen.startElement("doc", {}) gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "") + self.assertEqual(result.getvalue(), self.xml("")) def test_xmlgen_basic_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() gen.startElement("doc", {}) gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "") + self.assertEqual(result.getvalue(), self.xml("")) def test_xmlgen_content(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -206,10 +204,10 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "huhei") + self.assertEqual(result.getvalue(), self.xml("huhei")) def test_xmlgen_content_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() @@ -218,10 +216,10 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "huhei") + self.assertEqual(result.getvalue(), self.xml("huhei")) def test_xmlgen_pi(self): - result = StringIO() + 
result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -230,10 +228,11 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "") + self.assertEqual(result.getvalue(), + self.xml("")) def test_xmlgen_content_escape(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -243,10 +242,10 @@ gen.endDocument() self.assertEqual(result.getvalue(), - start + "<huhei&") + self.xml("<huhei&")) def test_xmlgen_attr_escape(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -260,13 +259,43 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + - ("" - "" - "")) + self.assertEqual(result.getvalue(), self.xml( + "" + "" + "")) + + def test_xmlgen_encoding(self): + encodings = ('iso-8859-15', 'utf-8', 'utf-8-sig', + 'utf-16', 'utf-16be', 'utf-16le', + 'utf-32', 'utf-32be', 'utf-32le') + for encoding in encodings: + result = self.ioclass() + gen = XMLGenerator(result, encoding=encoding) + + gen.startDocument() + gen.startElement("doc", {"a": '\u20ac'}) + gen.characters("\u20ac") + gen.endElement("doc") + gen.endDocument() + + self.assertEqual(result.getvalue(), + self.xml('\u20ac', encoding=encoding)) + + def test_xmlgen_unencodable(self): + result = self.ioclass() + gen = XMLGenerator(result, encoding='ascii') + + gen.startDocument() + gen.startElement("doc", {"a": '\u20ac'}) + gen.characters("\u20ac") + gen.endElement("doc") + gen.endDocument() + + self.assertEqual(result.getvalue(), + self.xml('', encoding='ascii')) def test_xmlgen_ignorable(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -275,10 +304,10 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + " ") + self.assertEqual(result.getvalue(), self.xml(" ")) def test_xmlgen_ignorable_empty(self): - result = StringIO() + result = self.ioclass() 
gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() @@ -287,10 +316,10 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + " ") + self.assertEqual(result.getvalue(), self.xml(" ")) def test_xmlgen_ns(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -303,12 +332,12 @@ gen.endPrefixMapping("ns1") gen.endDocument() - self.assertEqual(result.getvalue(), start + \ - ('' % + self.assertEqual(result.getvalue(), self.xml( + '' % ns_uri)) def test_xmlgen_ns_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() @@ -321,12 +350,12 @@ gen.endPrefixMapping("ns1") gen.endDocument() - self.assertEqual(result.getvalue(), start + \ - ('' % + self.assertEqual(result.getvalue(), self.xml( + '' % ns_uri)) def test_1463026_1(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -334,10 +363,10 @@ gen.endElementNS((None, 'a'), 'a') gen.endDocument() - self.assertEqual(result.getvalue(), start+'') + self.assertEqual(result.getvalue(), self.xml('')) def test_1463026_1_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() @@ -345,10 +374,10 @@ gen.endElementNS((None, 'a'), 'a') gen.endDocument() - self.assertEqual(result.getvalue(), start+'') + self.assertEqual(result.getvalue(), self.xml('')) def test_1463026_2(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -358,10 +387,10 @@ gen.endPrefixMapping(None) gen.endDocument() - self.assertEqual(result.getvalue(), start+'') + self.assertEqual(result.getvalue(), self.xml('')) def test_1463026_2_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() @@ -371,10 +400,10 @@ 
gen.endPrefixMapping(None) gen.endDocument() - self.assertEqual(result.getvalue(), start+'') + self.assertEqual(result.getvalue(), self.xml('')) def test_1463026_3(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -385,10 +414,10 @@ gen.endDocument() self.assertEqual(result.getvalue(), - start+'') + self.xml('')) def test_1463026_3_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() @@ -399,7 +428,7 @@ gen.endDocument() self.assertEqual(result.getvalue(), - start+'') + self.xml('')) def test_5027_1(self): # The xml prefix (as in xml:lang below) is reserved and bound by @@ -416,13 +445,13 @@ parser = make_parser() parser.setFeature(feature_namespaces, True) - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) parser.setContentHandler(gen) parser.parse(test_xml) self.assertEqual(result.getvalue(), - start + ( + self.xml( '' 'Hello' '')) @@ -435,7 +464,7 @@ # # This test demonstrates the bug by direct manipulation of the # XMLGenerator. 
- result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -450,15 +479,57 @@ gen.endDocument() self.assertEqual(result.getvalue(), - start + ( + self.xml( '' 'Hello' '')) + def test_no_close_file(self): + result = self.ioclass() + def func(out): + gen = XMLGenerator(out) + gen.startDocument() + gen.startElement("doc", {}) + func(result) + self.assertFalse(result.closed) + +class StringXmlgenTest(XmlgenTest, unittest.TestCase): + ioclass = StringIO + + def xml(self, doc, encoding='iso-8859-1'): + return '\n%s' % (encoding, doc) + + test_xmlgen_unencodable = None + +class BytesXmlgenTest(XmlgenTest, unittest.TestCase): + ioclass = BytesIO + + def xml(self, doc, encoding='iso-8859-1'): + return ('\n%s' % + (encoding, doc)).encode(encoding, 'xmlcharrefreplace') + +class WriterXmlgenTest(BytesXmlgenTest): + class ioclass(list): + write = list.append + closed = False + + def seekable(self): + return True + + def tell(self): + # return 0 at start and not 0 after start + return len(self) + + def getvalue(self): + return b''.join(self) + + +start = b'\n' + class XMLFilterBaseTest(unittest.TestCase): def test_filter_basic(self): - result = StringIO() + result = BytesIO() gen = XMLGenerator(result) filter = XMLFilterBase() filter.setContentHandler(gen) @@ -470,7 +541,7 @@ filter.endElement("doc") filter.endDocument() - self.assertEqual(result.getvalue(), start + "content ") + self.assertEqual(result.getvalue(), start + b"content ") # =========================================================================== # @@ -478,7 +549,7 @@ # # =========================================================================== -with open(TEST_XMLFILE_OUT) as f: +with open(TEST_XMLFILE_OUT, 'rb') as f: xml_test_out = f.read() class ExpatReaderTest(XmlTestBase): @@ -487,11 +558,11 @@ def test_expat_file(self): parser = create_parser() - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser.setContentHandler(xmlgen) - with 
open(TEST_XMLFILE) as f: + with open(TEST_XMLFILE, 'rb') as f: parser.parse(f) self.assertEqual(result.getvalue(), xml_test_out) @@ -503,7 +574,7 @@ self.addCleanup(support.unlink, fname) parser = create_parser() - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser.setContentHandler(xmlgen) @@ -547,13 +618,13 @@ def resolveEntity(self, publicId, systemId): inpsrc = InputSource() - inpsrc.setByteStream(StringIO("")) + inpsrc.setByteStream(BytesIO(b"")) return inpsrc def test_expat_entityresolver(self): parser = create_parser() parser.setEntityResolver(self.TestEntityResolver()) - result = StringIO() + result = BytesIO() parser.setContentHandler(XMLGenerator(result)) parser.feed('") + b"") # ===== Attributes support @@ -632,7 +703,7 @@ def test_expat_inpsource_filename(self): parser = create_parser() - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser.setContentHandler(xmlgen) @@ -642,7 +713,7 @@ def test_expat_inpsource_sysid(self): parser = create_parser() - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser.setContentHandler(xmlgen) @@ -657,7 +728,7 @@ self.addCleanup(support.unlink, fname) parser = create_parser() - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser.setContentHandler(xmlgen) @@ -667,12 +738,12 @@ def test_expat_inpsource_stream(self): parser = create_parser() - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser.setContentHandler(xmlgen) inpsrc = InputSource() - with open(TEST_XMLFILE) as f: + with open(TEST_XMLFILE, 'rb') as f: inpsrc.setByteStream(f) parser.parse(inpsrc) @@ -681,7 +752,7 @@ # ===== IncrementalParser support def test_expat_incremental(self): - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser = create_parser() parser.setContentHandler(xmlgen) @@ -690,10 +761,10 @@ parser.feed("") parser.close() - self.assertEqual(result.getvalue(), start + "") + 
self.assertEqual(result.getvalue(), start + b"") def test_expat_incremental_reset(self): - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser = create_parser() parser.setContentHandler(xmlgen) @@ -701,7 +772,7 @@ parser.feed("") parser.feed("text") - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser.setContentHandler(xmlgen) parser.reset() @@ -711,12 +782,12 @@ parser.feed("") parser.close() - self.assertEqual(result.getvalue(), start + "text") + self.assertEqual(result.getvalue(), start + b"text") # ===== Locator support def test_expat_locator_noinfo(self): - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser = create_parser() parser.setContentHandler(xmlgen) @@ -730,7 +801,7 @@ self.assertEqual(parser.getLineNumber(), 1) def test_expat_locator_withinfo(self): - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser = create_parser() parser.setContentHandler(xmlgen) @@ -745,7 +816,7 @@ shutil.copyfile(TEST_XMLFILE, fname) self.addCleanup(support.unlink, fname) - result = StringIO() + result = BytesIO() xmlgen = XMLGenerator(result) parser = create_parser() parser.setContentHandler(xmlgen) @@ -766,7 +837,7 @@ parser = create_parser() parser.setContentHandler(ContentHandler()) # do nothing source = InputSource() - source.setByteStream(StringIO("")) #ill-formed + source.setByteStream(BytesIO(b"")) #ill-formed name = "a file name" source.setSystemId(name) try: @@ -857,7 +928,9 @@ def test_main(): run_unittest(MakeParserTest, SaxutilsTest, - XmlgenTest, + StringXmlgenTest, + BytesXmlgenTest, + WriterXmlgenTest, ExpatReaderTest, ErrorReportingTest, XmlReaderTest) diff --git a/Lib/xml/sax/saxutils.py b/Lib/xml/sax/saxutils.py --- a/Lib/xml/sax/saxutils.py +++ b/Lib/xml/sax/saxutils.py @@ -4,18 +4,10 @@ """ import os, urllib.parse, urllib.request +import io from . import handler from . 
import xmlreader -# See whether the xmlcharrefreplace error handler is -# supported -try: - from codecs import xmlcharrefreplace_errors - _error_handling = "xmlcharrefreplace" - del xmlcharrefreplace_errors -except ImportError: - _error_handling = "strict" - def __dict_replace(s, d): """Replace substrings of a string using a dictionary.""" for key, value in d.items(): @@ -76,14 +68,50 @@ return data +def _gettextwriter(out, encoding): + if out is None: + import sys + return sys.stdout + + if isinstance(out, io.TextIOBase): + # use a text writer as is + return out + + # wrap a binary writer with TextIOWrapper + if isinstance(out, io.RawIOBase): + # Keep the original file open when the TextIOWrapper is + # destroyed + class _wrapper: + __class__ = out.__class__ + def __getattr__(self, name): + return getattr(out, name) + buffer = _wrapper() + buffer.close = lambda: None + else: + # This is to handle passed objects that aren't in the + # IOBase hierarchy, but just have a write method + buffer = io.BufferedIOBase() + buffer.writable = lambda: True + buffer.write = out.write + try: + # TextIOWrapper uses this methods to determine + # if BOM (for UTF-16, etc) should be added + buffer.seekable = out.seekable + buffer.tell = out.tell + except AttributeError: + pass + return io.TextIOWrapper(buffer, encoding=encoding, + errors='xmlcharrefreplace', + newline='\n', + write_through=True) + class XMLGenerator(handler.ContentHandler): def __init__(self, out=None, encoding="iso-8859-1", short_empty_elements=False): - if out is None: - import sys - out = sys.stdout handler.ContentHandler.__init__(self) - self._out = out + out = _gettextwriter(out, encoding) + self._write = out.write + self._flush = out.flush self._ns_contexts = [{}] # contains uri -> prefix dicts self._current_context = self._ns_contexts[-1] self._undeclared_ns_maps = [] @@ -91,12 +119,6 @@ self._short_empty_elements = short_empty_elements self._pending_start_element = False - def _write(self, text): - if 
isinstance(text, str): - self._out.write(text) - else: - self._out.write(text.encode(self._encoding, _error_handling)) - def _qname(self, name): """Builds a qualified name from a (ns_url, localname) pair""" if name[0]: @@ -125,6 +147,9 @@ self._write('\n' % self._encoding) + def endDocument(self): + self._flush() + def startPrefixMapping(self, prefix, uri): self._ns_contexts.append(self._current_context.copy()) self._current_context[uri] = prefix @@ -157,9 +182,9 @@ for prefix, uri in self._undeclared_ns_maps: if prefix: - self._out.write(' xmlns:%s="%s"' % (prefix, uri)) + self._write(' xmlns:%s="%s"' % (prefix, uri)) else: - self._out.write(' xmlns="%s"' % uri) + self._write(' xmlns="%s"' % uri) self._undeclared_ns_maps = [] for (name, value) in attrs.items(): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -218,6 +218,8 @@ Library ------- +- Issue #1470548: XMLGenerator now works with binary output streams. + - Issue #6975: os.path.realpath() now correctly resolves multiple nested symlinks on POSIX platforms. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 13:38:04 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 13:38:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=231470548=3A_XMLGenerator_now_works_with_binary_output_?= =?utf-8?q?streams=2E?= Message-ID: <3Z3qRJ5hRWzPXx@mail.python.org> http://hg.python.org/cpython/rev/03b878d636cf changeset: 82124:03b878d636cf branch: 3.3 parent: 82120:bfe9526606e2 parent: 82123:66f92f76b2ce user: Serhiy Storchaka date: Sun Feb 10 14:31:07 2013 +0200 summary: Issue #1470548: XMLGenerator now works with binary output streams. 
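One detail all three branch patches guard (see `test_no_close_file` in the diffs): the `TextIOWrapper` that `XMLGenerator` now creates internally must not close a stream the caller still owns when the generator is garbage-collected — hence the proxy whose `close()` is a no-op. A sketch of the guaranteed behaviour, run against a Python 3 with this fix:

```python
import io
from xml.sax.saxutils import XMLGenerator

buf = io.BytesIO()

def emit(out):
    # The generator, and the TextIOWrapper it wraps around `out`, become
    # garbage when this function returns; the caller's stream survives.
    gen = XMLGenerator(out, encoding='utf-8')
    gen.startDocument()
    gen.startElement('doc', {})
    gen.endElement('doc')
    gen.endDocument()        # flushes, but does not close `out`

emit(buf)
# buf is still open and holds the complete document.
```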
files: Lib/test/test_sax.py | 215 ++++++++++++++++++--------- Lib/xml/sax/saxutils.py | 67 +++++-- Misc/NEWS | 2 + 3 files changed, 192 insertions(+), 92 deletions(-) diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py --- a/Lib/test/test_sax.py +++ b/Lib/test/test_sax.py @@ -13,7 +13,7 @@ from xml.sax.expatreader import create_parser from xml.sax.handler import feature_namespaces from xml.sax.xmlreader import InputSource, AttributesImpl, AttributesNSImpl -from io import StringIO +from io import BytesIO, StringIO import os.path import shutil from test import support @@ -173,31 +173,29 @@ # ===== XMLGenerator -start = '\n' - -class XmlgenTest(unittest.TestCase): +class XmlgenTest: def test_xmlgen_basic(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() gen.startElement("doc", {}) gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "") + self.assertEqual(result.getvalue(), self.xml("")) def test_xmlgen_basic_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() gen.startElement("doc", {}) gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "") + self.assertEqual(result.getvalue(), self.xml("")) def test_xmlgen_content(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result) gen.startDocument() @@ -206,10 +204,10 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "huhei") + self.assertEqual(result.getvalue(), self.xml("huhei")) def test_xmlgen_content_empty(self): - result = StringIO() + result = self.ioclass() gen = XMLGenerator(result, short_empty_elements=True) gen.startDocument() @@ -218,10 +216,10 @@ gen.endElement("doc") gen.endDocument() - self.assertEqual(result.getvalue(), start + "huhei") + self.assertEqual(result.getvalue(), self.xml("huhei")) def test_xmlgen_pi(self): - result = StringIO() + 
From python-checkins at python.org  Sun Feb 10 13:38:06 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sun, 10 Feb 2013 13:38:06 +0100 (CET)
Subject: [Python-checkins] cpython (merge 3.3 -> default): Issue #1470548:
 XMLGenerator now works with binary output streams.
Message-ID: <3Z3qRL3p85zSPg@mail.python.org>

http://hg.python.org/cpython/rev/12d75ca12ae7
changeset:   82125:12d75ca12ae7
parent:      82121:f42cabe6ccb5
parent:      82124:03b878d636cf
user:        Serhiy Storchaka
date:        Sun Feb 10 14:34:53 2013 +0200
summary:
  Issue #1470548: XMLGenerator now works with binary output streams.
files:
  Lib/test/test_sax.py    | 215 ++++++++++++++++++---------
  Lib/xml/sax/saxutils.py |  67 +++++--
  Misc/NEWS               |   2 +
  3 files changed, 192 insertions(+), 92 deletions(-)


diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py
--- a/Lib/test/test_sax.py
+++ b/Lib/test/test_sax.py
@@ -13,7 +13,7 @@
 from xml.sax.expatreader import create_parser
 from xml.sax.handler import feature_namespaces
 from xml.sax.xmlreader import InputSource, AttributesImpl, AttributesNSImpl
-from io import StringIO
+from io import BytesIO, StringIO
 import os.path
 import shutil
 from test import support
@@ -173,31 +173,29 @@
 
 # ===== XMLGenerator
 
-start = '<?xml version="1.0" encoding="iso-8859-1"?>\n'
-
-class XmlgenTest(unittest.TestCase):
+class XmlgenTest:
     def test_xmlgen_basic(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
         gen.startDocument()
         gen.startElement("doc", {})
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<doc></doc>")
+        self.assertEqual(result.getvalue(), self.xml("<doc></doc>"))
 
     def test_xmlgen_basic_empty(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result, short_empty_elements=True)
         gen.startDocument()
         gen.startElement("doc", {})
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<doc/>")
+        self.assertEqual(result.getvalue(), self.xml("<doc/>"))
 
     def test_xmlgen_content(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -206,10 +204,10 @@
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<doc>huhei</doc>")
+        self.assertEqual(result.getvalue(), self.xml("<doc>huhei</doc>"))
 
     def test_xmlgen_content_empty(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result, short_empty_elements=True)
 
         gen.startDocument()
@@ -218,10 +216,10 @@
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<doc>huhei</doc>")
+        self.assertEqual(result.getvalue(), self.xml("<doc>huhei</doc>"))
 
     def test_xmlgen_pi(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -230,10 +228,11 @@
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<?test data?><doc></doc>")
+        self.assertEqual(result.getvalue(),
+            self.xml("<?test data?><doc></doc>"))
 
     def test_xmlgen_content_escape(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -243,10 +242,10 @@
         gen.endDocument()
 
         self.assertEqual(result.getvalue(),
-            start + "<doc>&lt;huhei&amp;</doc>")
+            self.xml("<doc>&lt;huhei&amp;</doc>"))
 
     def test_xmlgen_attr_escape(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -260,13 +259,43 @@
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start +
-            ("<doc a='\"'><e a=\"'\"></e>"
-             "<e a=\"'&quot;\"></e>"
-             "<e a=\"&#10;&#13;&#9;\"></e></doc>"))
+        self.assertEqual(result.getvalue(), self.xml(
+            "<doc a='\"'><e a=\"'\"></e>"
+            "<e a=\"'&quot;\"></e>"
+            "<e a=\"&#10;&#13;&#9;\"></e></doc>"))
+
+    def test_xmlgen_encoding(self):
+        encodings = ('iso-8859-15', 'utf-8', 'utf-8-sig',
+                     'utf-16', 'utf-16be', 'utf-16le',
+                     'utf-32', 'utf-32be', 'utf-32le')
+        for encoding in encodings:
+            result = self.ioclass()
+            gen = XMLGenerator(result, encoding=encoding)
+
+            gen.startDocument()
+            gen.startElement("doc", {"a": '\u20ac'})
+            gen.characters("\u20ac")
+            gen.endElement("doc")
+            gen.endDocument()
+
+            self.assertEqual(result.getvalue(),
+                self.xml('<doc a="\u20ac">\u20ac</doc>', encoding=encoding))
+
+    def test_xmlgen_unencodable(self):
+        result = self.ioclass()
+        gen = XMLGenerator(result, encoding='ascii')
+
+        gen.startDocument()
+        gen.startElement("doc", {"a": '\u20ac'})
+        gen.characters("\u20ac")
+        gen.endElement("doc")
+        gen.endDocument()
+
+        self.assertEqual(result.getvalue(),
+            self.xml('<doc a="&#8364;">&#8364;</doc>', encoding='ascii'))
 
     def test_xmlgen_ignorable(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -275,10 +304,10 @@
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<doc> </doc>")
+        self.assertEqual(result.getvalue(), self.xml("<doc> </doc>"))
 
     def test_xmlgen_ignorable_empty(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result, short_empty_elements=True)
 
         gen.startDocument()
@@ -287,10 +316,10 @@
         gen.endElement("doc")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<doc> </doc>")
+        self.assertEqual(result.getvalue(), self.xml("<doc> </doc>"))
 
     def test_xmlgen_ns(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -303,12 +332,12 @@
         gen.endPrefixMapping("ns1")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + \
-            ('<ns1:doc xmlns:ns1="%s"><udoc></udoc></ns1:doc>' %
+        self.assertEqual(result.getvalue(), self.xml(
+            '<ns1:doc xmlns:ns1="%s"><udoc></udoc></ns1:doc>' %
                                          ns_uri))
 
     def test_xmlgen_ns_empty(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result, short_empty_elements=True)
 
         gen.startDocument()
@@ -321,12 +350,12 @@
         gen.endPrefixMapping("ns1")
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start + \
-            ('<ns1:doc xmlns:ns1="%s"><udoc/></ns1:doc>' %
+        self.assertEqual(result.getvalue(), self.xml(
+            '<ns1:doc xmlns:ns1="%s"><udoc/></ns1:doc>' %
                                          ns_uri))
 
     def test_1463026_1(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -334,10 +363,10 @@
         gen.endElementNS((None, 'a'), 'a')
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start+'<a b="c"></a>')
+        self.assertEqual(result.getvalue(), self.xml('<a b="c"></a>'))
 
     def test_1463026_1_empty(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result, short_empty_elements=True)
 
         gen.startDocument()
@@ -345,10 +374,10 @@
         gen.endElementNS((None, 'a'), 'a')
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start+'<a b="c"/>')
+        self.assertEqual(result.getvalue(), self.xml('<a b="c"/>'))
 
     def test_1463026_2(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -358,10 +387,10 @@
         gen.endPrefixMapping(None)
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start+'<a xmlns="qux"></a>')
+        self.assertEqual(result.getvalue(), self.xml('<a xmlns="qux"></a>'))
 
     def test_1463026_2_empty(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result, short_empty_elements=True)
 
         gen.startDocument()
@@ -371,10 +400,10 @@
         gen.endPrefixMapping(None)
         gen.endDocument()
 
-        self.assertEqual(result.getvalue(), start+'<a xmlns="qux"/>')
+        self.assertEqual(result.getvalue(), self.xml('<a xmlns="qux"/>'))
 
     def test_1463026_3(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -385,10 +414,10 @@
         gen.endDocument()
 
         self.assertEqual(result.getvalue(),
-            start+'<my:a xmlns:my="qux" b="c"></my:a>')
+            self.xml('<my:a xmlns:my="qux" b="c"></my:a>'))
 
     def test_1463026_3_empty(self):
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result, short_empty_elements=True)
 
         gen.startDocument()
@@ -399,7 +428,7 @@
         gen.endDocument()
 
         self.assertEqual(result.getvalue(),
-            start+'<my:a xmlns:my="qux" b="c"/>')
+            self.xml('<my:a xmlns:my="qux" b="c"/>'))
 
     def test_5027_1(self):
         # The xml prefix (as in xml:lang below) is reserved and bound by
@@ -416,13 +445,13 @@
         parser = make_parser()
         parser.setFeature(feature_namespaces, True)
 
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
         parser.setContentHandler(gen)
         parser.parse(test_xml)
 
         self.assertEqual(result.getvalue(),
-                         start + (
+                         self.xml(
                          '<a:g1 xmlns:a="http://example.com/ns">'
                          '<a:g2 xml:lang="en">Hello</a:g2>'
                          '</a:g1>'))
@@ -435,7 +464,7 @@
         #
         # This test demonstrates the bug by direct manipulation of the
         # XMLGenerator.
-        result = StringIO()
+        result = self.ioclass()
         gen = XMLGenerator(result)
 
         gen.startDocument()
@@ -450,15 +479,57 @@
         gen.endDocument()
 
         self.assertEqual(result.getvalue(),
-                         start + (
+                         self.xml(
                          '<a:g1 xmlns:a="http://example.com/ns">'
                          '<a:g2 xml:lang="en">Hello</a:g2>'
                          '</a:g1>'))
 
+    def test_no_close_file(self):
+        result = self.ioclass()
+        def func(out):
+            gen = XMLGenerator(out)
+            gen.startDocument()
+            gen.startElement("doc", {})
+        func(result)
+        self.assertFalse(result.closed)
+
+class StringXmlgenTest(XmlgenTest, unittest.TestCase):
+    ioclass = StringIO
+
+    def xml(self, doc, encoding='iso-8859-1'):
+        return '<?xml version="1.0" encoding="%s"?>\n%s' % (encoding, doc)
+
+    test_xmlgen_unencodable = None
+
+class BytesXmlgenTest(XmlgenTest, unittest.TestCase):
+    ioclass = BytesIO
+
+    def xml(self, doc, encoding='iso-8859-1'):
+        return ('<?xml version="1.0" encoding="%s"?>\n%s' %
+                (encoding, doc)).encode(encoding, 'xmlcharrefreplace')
+
+class WriterXmlgenTest(BytesXmlgenTest):
+    class ioclass(list):
+        write = list.append
+        closed = False
+
+        def seekable(self):
+            return True
+
+        def tell(self):
+            # return 0 at start and not 0 after start
+            return len(self)
+
+        def getvalue(self):
+            return b''.join(self)
+
+
+start = b'<?xml version="1.0" encoding="iso-8859-1"?>\n'
+
 class XMLFilterBaseTest(unittest.TestCase):
     def test_filter_basic(self):
-        result = StringIO()
+        result = BytesIO()
         gen = XMLGenerator(result)
         filter = XMLFilterBase()
         filter.setContentHandler(gen)
@@ -470,7 +541,7 @@
         filter.endElement("doc")
         filter.endDocument()
 
-        self.assertEqual(result.getvalue(), start + "<doc>content </doc>")
+        self.assertEqual(result.getvalue(), start + b"<doc>content </doc>")
 
 # ===========================================================================
 #
@@ -478,7 +549,7 @@
 #
 # ===========================================================================
 
-with open(TEST_XMLFILE_OUT) as f:
+with open(TEST_XMLFILE_OUT, 'rb') as f:
     xml_test_out = f.read()
 
 class ExpatReaderTest(XmlTestBase):
@@ -487,11 +558,11 @@
 
     def test_expat_file(self):
         parser = create_parser()
-        result = StringIO()
+        result = BytesIO()
        xmlgen = XMLGenerator(result)
 
         parser.setContentHandler(xmlgen)
-        with open(TEST_XMLFILE) as f:
+        with open(TEST_XMLFILE, 'rb') as f:
             parser.parse(f)
 
         self.assertEqual(result.getvalue(), xml_test_out)
@@ -503,7 +574,7 @@
         self.addCleanup(support.unlink, fname)
 
         parser = create_parser()
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
 
         parser.setContentHandler(xmlgen)
@@ -547,13 +618,13 @@
 
         def resolveEntity(self, publicId, systemId):
             inpsrc = InputSource()
-            inpsrc.setByteStream(StringIO("<entity/>"))
+            inpsrc.setByteStream(BytesIO(b"<entity/>"))
             return inpsrc
 
     def test_expat_entityresolver(self):
         parser = create_parser()
         parser.setEntityResolver(self.TestEntityResolver())
-        result = StringIO()
+        result = BytesIO()
         parser.setContentHandler(XMLGenerator(result))
 
         parser.feed('<!DOCTYPE doc [\n')
         parser.feed('  <!ENTITY test SYSTEM "whatever">\n')
         parser.feed(']>\n')
         parser.feed('<doc>&test;</doc>')
         parser.close()
 
         self.assertEqual(result.getvalue(), start +
-                         "<doc><entity></entity></doc>")
+                         b"<doc><entity></entity></doc>")
 
     # ===== Attributes support
 
@@ -632,7 +703,7 @@
 
     def test_expat_inpsource_filename(self):
         parser = create_parser()
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
 
         parser.setContentHandler(xmlgen)
@@ -642,7 +713,7 @@
 
     def test_expat_inpsource_sysid(self):
         parser = create_parser()
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
 
         parser.setContentHandler(xmlgen)
@@ -657,7 +728,7 @@
         self.addCleanup(support.unlink, fname)
 
         parser = create_parser()
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
 
         parser.setContentHandler(xmlgen)
@@ -667,12 +738,12 @@
 
     def test_expat_inpsource_stream(self):
         parser = create_parser()
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
 
         parser.setContentHandler(xmlgen)
         inpsrc = InputSource()
-        with open(TEST_XMLFILE) as f:
+        with open(TEST_XMLFILE, 'rb') as f:
             inpsrc.setByteStream(f)
             parser.parse(inpsrc)
 
@@ -681,7 +752,7 @@
     # ===== IncrementalParser support
 
     def test_expat_incremental(self):
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
         parser = create_parser()
         parser.setContentHandler(xmlgen)
@@ -690,10 +761,10 @@
         parser.feed("</doc>")
         parser.close()
 
-        self.assertEqual(result.getvalue(), start + "<doc></doc>")
+        self.assertEqual(result.getvalue(), start + b"<doc></doc>")
 
     def test_expat_incremental_reset(self):
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
         parser = create_parser()
         parser.setContentHandler(xmlgen)
@@ -701,7 +772,7 @@
         parser.feed("<doc>")
         parser.feed("text")
 
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
         parser.setContentHandler(xmlgen)
         parser.reset()
@@ -711,12 +782,12 @@
         parser.feed("</doc>")
         parser.close()
 
-        self.assertEqual(result.getvalue(), start + "<doc>text</doc>")
+        self.assertEqual(result.getvalue(), start + b"<doc>text</doc>")
 
     # ===== Locator support
 
     def test_expat_locator_noinfo(self):
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
         parser = create_parser()
         parser.setContentHandler(xmlgen)
@@ -730,7 +801,7 @@
         self.assertEqual(parser.getLineNumber(), 1)
 
     def test_expat_locator_withinfo(self):
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
         parser = create_parser()
         parser.setContentHandler(xmlgen)
@@ -745,7 +816,7 @@
         shutil.copyfile(TEST_XMLFILE, fname)
         self.addCleanup(support.unlink, fname)
 
-        result = StringIO()
+        result = BytesIO()
         xmlgen = XMLGenerator(result)
         parser = create_parser()
         parser.setContentHandler(xmlgen)
@@ -766,7 +837,7 @@
         parser = create_parser()
         parser.setContentHandler(ContentHandler()) # do nothing
         source = InputSource()
-        source.setByteStream(StringIO("<foo bar foobar>")) #ill-formed
+        source.setByteStream(BytesIO(b"<foo bar foobar>")) #ill-formed
         name = "a file name"
         source.setSystemId(name)
         try:
@@ -857,7 +928,9 @@
 def test_main():
     run_unittest(MakeParserTest,
                  SaxutilsTest,
-                 XmlgenTest,
+                 StringXmlgenTest,
+                 BytesXmlgenTest,
+                 WriterXmlgenTest,
                  ExpatReaderTest,
                  ErrorReportingTest,
                  XmlReaderTest)

diff --git a/Lib/xml/sax/saxutils.py b/Lib/xml/sax/saxutils.py
--- a/Lib/xml/sax/saxutils.py
+++ b/Lib/xml/sax/saxutils.py
@@ -4,18 +4,10 @@
 """
 
 import os, urllib.parse, urllib.request
+import io
 from . import handler
 from . import xmlreader
 
-# See whether the xmlcharrefreplace error handler is
-# supported
-try:
-    from codecs import xmlcharrefreplace_errors
-    _error_handling = "xmlcharrefreplace"
-    del xmlcharrefreplace_errors
-except ImportError:
-    _error_handling = "strict"
-
 def __dict_replace(s, d):
     """Replace substrings of a string using a dictionary."""
     for key, value in d.items():
@@ -76,14 +68,50 @@
 
     return data
 
+def _gettextwriter(out, encoding):
+    if out is None:
+        import sys
+        return sys.stdout
+
+    if isinstance(out, io.TextIOBase):
+        # use a text writer as is
+        return out
+
+    # wrap a binary writer with TextIOWrapper
+    if isinstance(out, io.RawIOBase):
+        # Keep the original file open when the TextIOWrapper is
+        # destroyed
+        class _wrapper:
+            __class__ = out.__class__
+            def __getattr__(self, name):
+                return getattr(out, name)
+        buffer = _wrapper()
+        buffer.close = lambda: None
+    else:
+        # This is to handle passed objects that aren't in the
+        # IOBase hierarchy, but just have a write method
+        buffer = io.BufferedIOBase()
+        buffer.writable = lambda: True
+        buffer.write = out.write
+        try:
+            # TextIOWrapper uses this methods to determine
+            # if BOM (for UTF-16, etc) should be added
+            buffer.seekable = out.seekable
+            buffer.tell = out.tell
+        except AttributeError:
+            pass
+    return io.TextIOWrapper(buffer, encoding=encoding,
+                            errors='xmlcharrefreplace',
+                            newline='\n',
+                            write_through=True)
+
 class XMLGenerator(handler.ContentHandler):
 
     def __init__(self, out=None, encoding="iso-8859-1",
                  short_empty_elements=False):
-        if out is None:
-            import sys
-            out = sys.stdout
         handler.ContentHandler.__init__(self)
-        self._out = out
+        out = _gettextwriter(out, encoding)
+        self._write = out.write
+        self._flush = out.flush
         self._ns_contexts = [{}] # contains uri -> prefix dicts
         self._current_context = self._ns_contexts[-1]
         self._undeclared_ns_maps = []
@@ -91,12 +119,6 @@
         self._short_empty_elements = short_empty_elements
         self._pending_start_element = False
 
-    def _write(self, text):
-        if isinstance(text, str):
-            self._out.write(text)
-        else:
-            self._out.write(text.encode(self._encoding, _error_handling))
-
    def _qname(self, name):
         """Builds a qualified name from a (ns_url, localname) pair"""
         if name[0]:
@@ -125,6 +147,9 @@
         self._write('<?xml version="1.0" encoding="%s"?>\n' %
                         self._encoding)
 
+    def endDocument(self):
+        self._flush()
+
     def startPrefixMapping(self, prefix, uri):
         self._ns_contexts.append(self._current_context.copy())
         self._current_context[uri] = prefix
@@ -157,9 +182,9 @@
 
         for prefix, uri in self._undeclared_ns_maps:
             if prefix:
-                self._out.write(' xmlns:%s="%s"' % (prefix, uri))
+                self._write(' xmlns:%s="%s"' % (prefix, uri))
             else:
-                self._out.write(' xmlns="%s"' % uri)
+                self._write(' xmlns="%s"' % uri)
         self._undeclared_ns_maps = []
 
         for (name, value) in attrs.items():

diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -244,6 +244,8 @@
 Library
 -------
 
+- Issue #1470548: XMLGenerator now works with binary output streams.
+
 - Issue #6975: os.path.realpath() now correctly resolves multiple nested
   symlinks on POSIX platforms.

-- 
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sun Feb 10 13:46:01 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sun, 10 Feb 2013 13:46:01 +0100 (CET)
Subject: [Python-checkins] cpython (3.2): Fix a test for
 SpooledTemporaryFile (added in issue #10355).
Message-ID: <3Z3qcT0CYlzPwc@mail.python.org>

http://hg.python.org/cpython/rev/6e9210a092cf
changeset:   82126:6e9210a092cf
branch:      3.2
parent:      82123:66f92f76b2ce
user:        Serhiy Storchaka
date:        Sun Feb 10 14:43:46 2013 +0200
summary:
  Fix a test for SpooledTemporaryFile (added in issue #10355).
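[Editor's note: the saxutils patch above replaces the manual encode() calls in XMLGenerator._write() with a text writer built by io.TextIOWrapper using the xmlcharrefreplace error handler. Below is a minimal standalone sketch of that wrapping technique over a plain BytesIO; the text_writer name is illustrative, not the stdlib's _gettextwriter.]

```python
import io

def text_writer(out, encoding):
    # Wrap a binary stream so str can be written to it; characters the
    # target encoding cannot represent are emitted as XML character
    # references (e.g. &#8364;) instead of raising UnicodeEncodeError.
    return io.TextIOWrapper(out, encoding=encoding,
                            errors='xmlcharrefreplace',
                            newline='\n', write_through=True)

buf = io.BytesIO()
writer = text_writer(buf, 'ascii')
writer.write('<doc>\u20ac</doc>')   # U+20AC EURO SIGN is not ASCII
writer.flush()
print(buf.getvalue())                # b'<doc>&#8364;</doc>'
```

write_through=True makes each write() reach the underlying binary buffer immediately, which is why the patched XMLGenerator only needs a single flush() in endDocument().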
files:
  Lib/test/test_tempfile.py |  2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)


diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py
--- a/Lib/test/test_tempfile.py
+++ b/Lib/test/test_tempfile.py
@@ -854,7 +854,7 @@
         self.assertTrue(f._rolled)
         self.assertEqual(f.mode, 'w+')
         self.assertIsNotNone(f.name)
-        self.assertEqual(f.newlines, '\n')
+        self.assertEqual(f.newlines, os.linesep)
         self.assertIsNotNone(f.encoding)
 
     def test_text_newline_and_encoding(self):

-- 
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sun Feb 10 13:46:02 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sun, 10 Feb 2013 13:46:02 +0100 (CET)
Subject: [Python-checkins] cpython (merge 3.2 -> 3.3): Fix a test for
 SpooledTemporaryFile (added in issue #10355).
Message-ID: <3Z3qcV2tpMzPWM@mail.python.org>

http://hg.python.org/cpython/rev/b5074ed74ec3
changeset:   82127:b5074ed74ec3
branch:      3.3
parent:      82124:03b878d636cf
parent:      82126:6e9210a092cf
user:        Serhiy Storchaka
date:        Sun Feb 10 14:44:14 2013 +0200
summary:
  Fix a test for SpooledTemporaryFile (added in issue #10355).

files:
  Lib/test/test_tempfile.py |  2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)


diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py
--- a/Lib/test/test_tempfile.py
+++ b/Lib/test/test_tempfile.py
@@ -792,7 +792,7 @@
         self.assertTrue(f._rolled)
         self.assertEqual(f.mode, 'w+')
         self.assertIsNotNone(f.name)
-        self.assertEqual(f.newlines, '\n')
+        self.assertEqual(f.newlines, os.linesep)
         self.assertIsNotNone(f.encoding)
 
     def test_text_newline_and_encoding(self):

-- 
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sun Feb 10 13:46:03 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sun, 10 Feb 2013 13:46:03 +0100 (CET)
Subject: [Python-checkins] cpython (merge 3.3 -> default): Fix a test for
 SpooledTemporaryFile (added in issue #10355).
Message-ID: <3Z3qcW5yvFzPkW@mail.python.org>

http://hg.python.org/cpython/rev/029011429f80
changeset:   82128:029011429f80
parent:      82125:12d75ca12ae7
parent:      82127:b5074ed74ec3
user:        Serhiy Storchaka
date:        Sun Feb 10 14:44:43 2013 +0200
summary:
  Fix a test for SpooledTemporaryFile (added in issue #10355).
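[Editor's note: the test fix in these changesets swaps a hard-coded '\n' for os.linesep because the rolled-over SpooledTemporaryFile is a text-mode file, and text mode translates '\n' to the platform line separator on write. A small standalone illustration of that translation, using a plain tempfile rather than the stdlib test itself:]

```python
import os
import tempfile

# Write "\n" through a text-mode file, then read the raw bytes back:
# text mode translates "\n" to os.linesep ("\r\n" on Windows), which is
# why the test must compare f.newlines against os.linesep, not "\n".
with tempfile.NamedTemporaryFile('w+', delete=False) as f:
    f.write('line\n')
    name = f.name

with open(name, 'rb') as f:
    raw = f.read()
os.unlink(name)

print(raw)   # b'line\n' on POSIX, b'line\r\n' on Windows
```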
files:
  Lib/test/test_tempfile.py |  2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)


diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py
--- a/Lib/test/test_tempfile.py
+++ b/Lib/test/test_tempfile.py
@@ -792,7 +792,7 @@
         self.assertTrue(f._rolled)
         self.assertEqual(f.mode, 'w+')
         self.assertIsNotNone(f.name)
-        self.assertEqual(f.newlines, '\n')
+        self.assertEqual(f.newlines, os.linesep)
         self.assertIsNotNone(f.encoding)
 
     def test_text_newline_and_encoding(self):

-- 
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sun Feb 10 15:17:42 2013
From: python-checkins at python.org (mark.dickinson)
Date: Sun, 10 Feb 2013 15:17:42 +0100 (CET)
Subject: [Python-checkins] cpython (2.7): Issue #17149: Fix
 random.vonmisesvariate to always return results in [0,
Message-ID: <3Z3sfG45f7zQDQ@mail.python.org>

http://hg.python.org/cpython/rev/6a3d18cede49
changeset:   82129:6a3d18cede49
branch:      2.7
parent:      82122:010b455de0e0
user:        Mark Dickinson
date:        Sun Feb 10 14:13:40 2013 +0000
summary:
  Issue #17149: Fix random.vonmisesvariate to always return results in
[0, 2*math.pi].

files:
  Lib/random.py           |   4 ++--
  Lib/test/test_random.py |  14 ++++++++++++++
  Misc/NEWS               |   3 +++
  3 files changed, 19 insertions(+), 2 deletions(-)


diff --git a/Lib/random.py b/Lib/random.py
--- a/Lib/random.py
+++ b/Lib/random.py
@@ -475,9 +475,9 @@
 
             u3 = random()
             if u3 > 0.5:
-                theta = (mu % TWOPI) + _acos(f)
+                theta = (mu + _acos(f)) % TWOPI
             else:
-                theta = (mu % TWOPI) - _acos(f)
+                theta = (mu - _acos(f)) % TWOPI
 
         return theta
 
diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py
--- a/Lib/test/test_random.py
+++ b/Lib/test/test_random.py
@@ -533,6 +533,20 @@
         self.assertAlmostEqual(s1/N, mu, 2)
         self.assertAlmostEqual(s2/(N-1), sigmasqrd, 2)
 
+    def test_von_mises_range(self):
+        # Issue 17149: von mises variates were not consistently in the
+        # range [0, 2*PI].
+        g = random.Random()
+        N = 100
+        for mu in 0.0, 0.1, 3.1, 6.2:
+            for kappa in 0.0, 2.3, 500.0:
+                for _ in range(N):
+                    sample = g.vonmisesvariate(mu, kappa)
+                    self.assertTrue(
+                        0 <= sample <= random.TWOPI,
+                        msg=("vonmisesvariate({}, {}) produced a result {} out"
+                             " of range [0, 2*pi]").format(mu, kappa, sample))
+
 class TestModule(unittest.TestCase):
     def testMagicConstants(self):
         self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141)
diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -202,6 +202,9 @@
 Library
 -------
 
+- Issue #17149: Fix random.vonmisesvariate to always return results in
+  the range [0, 2*math.pi].
+
 - Issue #1470548: XMLGenerator now works with UTF-16 and UTF-32 encodings.
 
 - Issue #6975: os.path.realpath() now correctly resolves multiple nested
   symlinks on POSIX platforms.

-- 
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sun Feb 10 15:17:43 2013
From: python-checkins at python.org (mark.dickinson)
Date: Sun, 10 Feb 2013 15:17:43 +0100 (CET)
Subject: [Python-checkins] cpython (3.2): Issue #17149: Fix
 random.vonmisesvariate to always return results in [0,
Message-ID: <3Z3sfH6rxMzQDQ@mail.python.org>

http://hg.python.org/cpython/rev/41e97652a9f9
changeset:   82130:41e97652a9f9
branch:      3.2
parent:      82126:6e9210a092cf
user:        Mark Dickinson
date:        Sun Feb 10 14:16:10 2013 +0000
summary:
  Issue #17149: Fix random.vonmisesvariate to always return results in
[0, 2*math.pi].
files: Lib/random.py | 4 ++-- Lib/test/test_random.py | 14 ++++++++++++++ Misc/NEWS | 3 +++ 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -449,9 +449,9 @@ u3 = random() if u3 > 0.5: - theta = (mu % TWOPI) + _acos(f) + theta = (mu + _acos(f)) % TWOPI else: - theta = (mu % TWOPI) - _acos(f) + theta = (mu - _acos(f)) % TWOPI return theta diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -475,6 +475,20 @@ self.assertAlmostEqual(s1/N, mu, places=2) self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2) + def test_von_mises_range(self): + # Issue 17149: von mises variates were not consistently in the + # range [0, 2*PI]. + g = random.Random() + N = 100 + for mu in 0.0, 0.1, 3.1, 6.2: + for kappa in 0.0, 2.3, 500.0: + for _ in range(N): + sample = g.vonmisesvariate(mu, kappa) + self.assertTrue( + 0 <= sample <= random.TWOPI, + msg=("vonmisesvariate({}, {}) produced a result {} out" + " of range [0, 2*pi]").format(mu, kappa, sample)) + class TestModule(unittest.TestCase): def testMagicConstants(self): self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -218,6 +218,9 @@ Library ------- +- Issue #17149: Fix random.vonmisesvariate to always return results in + [0, 2*math.pi]. + - Issue #1470548: XMLGenerator now works with binary output streams. 
- Issue #6975: os.path.realpath() now correctly resolves multiple nested -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 15:17:45 2013 From: python-checkins at python.org (mark.dickinson) Date: Sun, 10 Feb 2013 15:17:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317149=3A_merge_fix_from_3=2E2=2E?= Message-ID: <3Z3sfK2Y7dzSMw@mail.python.org> http://hg.python.org/cpython/rev/e9b4f2927412 changeset: 82131:e9b4f2927412 branch: 3.3 parent: 82127:b5074ed74ec3 parent: 82130:41e97652a9f9 user: Mark Dickinson date: Sun Feb 10 14:16:56 2013 +0000 summary: Issue #17149: merge fix from 3.2. files: Lib/random.py | 4 ++-- Lib/test/test_random.py | 14 ++++++++++++++ Misc/NEWS | 3 +++ 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -449,9 +449,9 @@ u3 = random() if u3 > 0.5: - theta = (mu % TWOPI) + _acos(f) + theta = (mu + _acos(f)) % TWOPI else: - theta = (mu % TWOPI) - _acos(f) + theta = (mu - _acos(f)) % TWOPI return theta diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -479,6 +479,20 @@ self.assertAlmostEqual(s1/N, mu, places=2) self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2) + def test_von_mises_range(self): + # Issue 17149: von mises variates were not consistently in the + # range [0, 2*PI]. 
+ g = random.Random() + N = 100 + for mu in 0.0, 0.1, 3.1, 6.2: + for kappa in 0.0, 2.3, 500.0: + for _ in range(N): + sample = g.vonmisesvariate(mu, kappa) + self.assertTrue( + 0 <= sample <= random.TWOPI, + msg=("vonmisesvariate({}, {}) produced a result {} out" + " of range [0, 2*pi]").format(mu, kappa, sample)) + class TestModule(unittest.TestCase): def testMagicConstants(self): self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -172,6 +172,9 @@ Library ------- +- Issue #17149: Fix random.vonmisesvariate to always return results in + [0, 2*math.pi]. + - Issue #1470548: XMLGenerator now works with binary output streams. - Issue #6975: os.path.realpath() now correctly resolves multiple nested -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 15:17:46 2013 From: python-checkins at python.org (mark.dickinson) Date: Sun, 10 Feb 2013 15:17:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317149=3A_merge_fix_from_3=2E3=2E?= Message-ID: <3Z3sfL5TySzSMw@mail.python.org> http://hg.python.org/cpython/rev/2704e11da558 changeset: 82132:2704e11da558 parent: 82128:029011429f80 parent: 82131:e9b4f2927412 user: Mark Dickinson date: Sun Feb 10 14:17:20 2013 +0000 summary: Issue #17149: merge fix from 3.3. 
files: Lib/random.py | 4 ++-- Lib/test/test_random.py | 14 ++++++++++++++ Misc/NEWS | 3 +++ 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -450,9 +450,9 @@ u3 = random() if u3 > 0.5: - theta = (mu % TWOPI) + _acos(f) + theta = (mu + _acos(f)) % TWOPI else: - theta = (mu % TWOPI) - _acos(f) + theta = (mu - _acos(f)) % TWOPI return theta diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -512,6 +512,20 @@ self.assertAlmostEqual(s1/N, mu, places=2) self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2) + def test_von_mises_range(self): + # Issue 17149: von mises variates were not consistently in the + # range [0, 2*PI]. + g = random.Random() + N = 100 + for mu in 0.0, 0.1, 3.1, 6.2: + for kappa in 0.0, 2.3, 500.0: + for _ in range(N): + sample = g.vonmisesvariate(mu, kappa) + self.assertTrue( + 0 <= sample <= random.TWOPI, + msg=("vonmisesvariate({}, {}) produced a result {} out" + " of range [0, 2*pi]").format(mu, kappa, sample)) + class TestModule(unittest.TestCase): def testMagicConstants(self): self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -244,6 +244,9 @@ Library ------- +- Issue #17149: Fix random.vonmisesvariate to always return results in + [0, 2*math.pi]. + - Issue #1470548: XMLGenerator now works with binary output streams. 
- Issue #6975: os.path.realpath() now correctly resolves multiple nested -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 15:32:37 2013 From: python-checkins at python.org (benjamin.peterson) Date: Sun, 10 Feb 2013 15:32:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_evaluate_positional_defaul?= =?utf-8?q?ts_before_keyword-only_defaults_=28closes_=2316967=29?= Message-ID: <3Z3szT5WK3zSc5@mail.python.org> http://hg.python.org/cpython/rev/d296cf1600a8 changeset: 82133:d296cf1600a8 parent: 82128:029011429f80 user: Benjamin Peterson date: Sun Feb 10 09:29:59 2013 -0500 summary: evaluate positional defaults before keyword-only defaults (closes #16967) files: Doc/reference/compound_stmts.rst | 17 +- Lib/importlib/_bootstrap.py | 4 +- Lib/test/test_keywordonlyarg.py | 8 + Misc/NEWS | 3 + Python/ceval.c | 34 +- Python/compile.c | 4 +- Python/importlib.h | 238 +++++++++--------- 7 files changed, 161 insertions(+), 147 deletions(-) diff --git a/Doc/reference/compound_stmts.rst b/Doc/reference/compound_stmts.rst --- a/Doc/reference/compound_stmts.rst +++ b/Doc/reference/compound_stmts.rst @@ -493,14 +493,15 @@ value, all following parameters up until the "``*``" must also have a default value --- this is a syntactic restriction that is not expressed by the grammar. -**Default parameter values are evaluated when the function definition is -executed.** This means that the expression is evaluated once, when the function -is defined, and that the same "pre-computed" value is used for each call. This -is especially important to understand when a default parameter is a mutable -object, such as a list or a dictionary: if the function modifies the object -(e.g. by appending an item to a list), the default value is in effect modified. -This is generally not what was intended. 
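The left-to-right, positional-before-keyword-only evaluation order that this change documents happens once, at function definition time, and can be observed with side effects. A minimal sketch, assuming a Python where issue #16967 applies (3.4+); `note` is a hypothetical helper used only for illustration:

```python
# Record the order in which default expressions are evaluated.
order = []

def note(tag):
    order.append(tag)
    return tag

# Defaults run left to right when the def statement executes:
# positional defaults first, then keyword-only defaults.
def f(a=note("pos1"), b=note("pos2"), *, c=note("kw1"), d=note("kw2")):
    pass

assert order == ["pos1", "pos2", "kw1", "kw2"]
```

Before this change, CPython emitted the keyword-only defaults first, so a `NameError` in a positional default could surface after the keyword-only ones had already run, which is the inconsistency the new `test_default_evaluation_order` pins down.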
A way around this is to use ``None`` -as the default, and explicitly test for it in the body of the function, e.g.:: +**Default parameter values are evaluated from left to right when the function +definition is executed.** This means that the expression is evaluated once, when +the function is defined, and that the same "pre-computed" value is used for each +call. This is especially important to understand when a default parameter is a +mutable object, such as a list or a dictionary: if the function modifies the +object (e.g. by appending an item to a list), the default value is in effect +modified. This is generally not what was intended. A way around this is to use +``None`` as the default, and explicitly test for it in the body of the function, +e.g.:: def whats_on_the_telly(penguin=None): if penguin is None: diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -396,13 +396,15 @@ 3210 (added size modulo 2**32 to the pyc header) Python 3.3a1 3220 (changed PEP 380 implementation) Python 3.3a4 3230 (revert changes to implicit __class__ closure) + Python 3.4a1 3240 (evaluate positional default arguments before + keyword-only defaults) MAGIC must change whenever the bytecode emitted by the compiler may no longer be understood by older implementations of the eval loop (usually due to the addition of new opcodes). 
""" -_RAW_MAGIC_NUMBER = 3230 | ord('\r') << 16 | ord('\n') << 24 +_RAW_MAGIC_NUMBER = 3240 | ord('\r') << 16 | ord('\n') << 24 _MAGIC_BYTES = bytes(_RAW_MAGIC_NUMBER >> n & 0xff for n in range(0, 25, 8)) _PYCACHE = '__pycache__' diff --git a/Lib/test/test_keywordonlyarg.py b/Lib/test/test_keywordonlyarg.py --- a/Lib/test/test_keywordonlyarg.py +++ b/Lib/test/test_keywordonlyarg.py @@ -176,6 +176,14 @@ return __a self.assertEqual(X().f(), 42) + def test_default_evaluation_order(self): + # See issue 16967 + a = 42 + with self.assertRaises(NameError) as err: + def f(v=a, x=b, *, y=c, z=d): + pass + self.assertEqual(str(err.exception), "global name 'b' is not defined") + def test_main(): run_unittest(KeywordOnlyArgTestCase) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #16967: In function definition, evaluate positional defaults before + keyword-only defaults. + - Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) in the interpreter. diff --git a/Python/ceval.c b/Python/ceval.c --- a/Python/ceval.c +++ b/Python/ceval.c @@ -2901,23 +2901,6 @@ } /* XXX Maybe this should be a separate opcode? */ - if (posdefaults > 0) { - PyObject *defs = PyTuple_New(posdefaults); - if (defs == NULL) { - Py_DECREF(func); - goto error; - } - while (--posdefaults >= 0) - PyTuple_SET_ITEM(defs, posdefaults, POP()); - if (PyFunction_SetDefaults(func, defs) != 0) { - /* Can't happen unless - PyFunction_SetDefaults changes. 
*/ - Py_DECREF(defs); - Py_DECREF(func); - goto error; - } - Py_DECREF(defs); - } if (kwdefaults > 0) { PyObject *defs = PyDict_New(); if (defs == NULL) { @@ -2945,6 +2928,23 @@ } Py_DECREF(defs); } + if (posdefaults > 0) { + PyObject *defs = PyTuple_New(posdefaults); + if (defs == NULL) { + Py_DECREF(func); + goto error; + } + while (--posdefaults >= 0) + PyTuple_SET_ITEM(defs, posdefaults, POP()); + if (PyFunction_SetDefaults(func, defs) != 0) { + /* Can't happen unless + PyFunction_SetDefaults changes. */ + Py_DECREF(defs); + Py_DECREF(func); + goto error; + } + Py_DECREF(defs); + } PUSH(func); DISPATCH(); } diff --git a/Python/compile.c b/Python/compile.c --- a/Python/compile.c +++ b/Python/compile.c @@ -1565,6 +1565,8 @@ if (!compiler_decorators(c, decos)) return 0; + if (args->defaults) + VISIT_SEQ(c, expr, args->defaults); if (args->kwonlyargs) { int res = compiler_visit_kwonlydefaults(c, args->kwonlyargs, args->kw_defaults); @@ -1572,8 +1574,6 @@ return 0; kw_default_count = res; } - if (args->defaults) - VISIT_SEQ(c, expr, args->defaults); num_annotations = compiler_visit_annotations(c, args, returns); if (num_annotations < 0) return 0; diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 15:32:39 2013 From: python-checkins at python.org (benjamin.peterson) Date: Sun, 10 Feb 2013 15:32:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_default_-=3E_default?= =?utf-8?q?=29=3A_merge_heads?= Message-ID: <3Z3szW1CqBzSMw@mail.python.org> http://hg.python.org/cpython/rev/3c41e7943769 changeset: 82134:3c41e7943769 parent: 82133:d296cf1600a8 parent: 82132:2704e11da558 user: Benjamin Peterson date: Sun Feb 10 09:32:22 2013 -0500 summary: merge heads files: Lib/random.py | 4 ++-- Lib/test/test_random.py | 14 ++++++++++++++ Misc/NEWS | 3 +++ 3 files changed, 19 insertions(+), 2 deletions(-) diff 
--git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -450,9 +450,9 @@ u3 = random() if u3 > 0.5: - theta = (mu % TWOPI) + _acos(f) + theta = (mu + _acos(f)) % TWOPI else: - theta = (mu % TWOPI) - _acos(f) + theta = (mu - _acos(f)) % TWOPI return theta diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -512,6 +512,20 @@ self.assertAlmostEqual(s1/N, mu, places=2) self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2) + def test_von_mises_range(self): + # Issue 17149: von mises variates were not consistently in the + # range [0, 2*PI]. + g = random.Random() + N = 100 + for mu in 0.0, 0.1, 3.1, 6.2: + for kappa in 0.0, 2.3, 500.0: + for _ in range(N): + sample = g.vonmisesvariate(mu, kappa) + self.assertTrue( + 0 <= sample <= random.TWOPI, + msg=("vonmisesvariate({}, {}) produced a result {} out" + " of range [0, 2*pi]").format(mu, kappa, sample)) + class TestModule(unittest.TestCase): def testMagicConstants(self): self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -247,6 +247,9 @@ Library ------- +- Issue #17149: Fix random.vonmisesvariate to always return results in + [0, 2*math.pi]. + - Issue #1470548: XMLGenerator now works with binary output streams. 
- Issue #6975: os.path.realpath() now correctly resolves multiple nested -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 15:48:30 2013 From: python-checkins at python.org (benjamin.peterson) Date: Sun, 10 Feb 2013 15:48:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_evaluate_lambda_keyword-on?= =?utf-8?q?ly_defaults_after_positional_defaults_=28=2316967_again=29?= Message-ID: <3Z3tKp1WDXzQ28@mail.python.org> http://hg.python.org/cpython/rev/6917402c6191 changeset: 82135:6917402c6191 user: Benjamin Peterson date: Sun Feb 10 09:48:22 2013 -0500 summary: evaluate lambda keyword-only defaults after positional defaults (#16967 again) files: Lib/importlib/_bootstrap.py | 4 ++-- Lib/test/test_keywordonlyarg.py | 4 ++++ Python/compile.c | 4 ++-- Python/importlib.h | 2 +- 4 files changed, 9 insertions(+), 5 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -396,7 +396,7 @@ 3210 (added size modulo 2**32 to the pyc header) Python 3.3a1 3220 (changed PEP 380 implementation) Python 3.3a4 3230 (revert changes to implicit __class__ closure) - Python 3.4a1 3240 (evaluate positional default arguments before + Python 3.4a1 3250 (evaluate positional default arguments before keyword-only defaults) MAGIC must change whenever the bytecode emitted by the compiler may no @@ -404,7 +404,7 @@ due to the addition of new opcodes). 
""" -_RAW_MAGIC_NUMBER = 3240 | ord('\r') << 16 | ord('\n') << 24 +_RAW_MAGIC_NUMBER = 3250 | ord('\r') << 16 | ord('\n') << 24 _MAGIC_BYTES = bytes(_RAW_MAGIC_NUMBER >> n & 0xff for n in range(0, 25, 8)) _PYCACHE = '__pycache__' diff --git a/Lib/test/test_keywordonlyarg.py b/Lib/test/test_keywordonlyarg.py --- a/Lib/test/test_keywordonlyarg.py +++ b/Lib/test/test_keywordonlyarg.py @@ -183,6 +183,10 @@ def f(v=a, x=b, *, y=c, z=d): pass self.assertEqual(str(err.exception), "global name 'b' is not defined") + with self.assertRaises(NameError) as err: + f = lambda v=a, x=b, *, y=c, z=d: None + self.assertEqual(str(err.exception), "global name 'b' is not defined") + def test_main(): run_unittest(KeywordOnlyArgTestCase) diff --git a/Python/compile.c b/Python/compile.c --- a/Python/compile.c +++ b/Python/compile.c @@ -1794,14 +1794,14 @@ return 0; } + if (args->defaults) + VISIT_SEQ(c, expr, args->defaults); if (args->kwonlyargs) { int res = compiler_visit_kwonlydefaults(c, args->kwonlyargs, args->kw_defaults); if (res < 0) return 0; kw_default_count = res; } - if (args->defaults) - VISIT_SEQ(c, expr, args->defaults); if (!compiler_enter_scope(c, name, COMPILER_SCOPE_FUNCTION, (void *)e, e->lineno)) return 0; diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 16:45:33 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 16:45:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzEyOTgz?= =?utf-8?q?=3A_Bytes_literals_with_invalid_=5Cx_escape_now_raise_a_SyntaxE?= =?utf-8?q?rror?= Message-ID: <3Z3vbd4mryzPfT@mail.python.org> http://hg.python.org/cpython/rev/305210a08fc9 changeset: 82136:305210a08fc9 branch: 3.2 parent: 82130:41e97652a9f9 user: Serhiy Storchaka date: Sun Feb 10 17:36:00 2013 +0200 summary: Issue #12983: Bytes literals with invalid \x escape now 
raise a SyntaxError and a full traceback including line number. files: Lib/test/test_strlit.py | 36 +++++++++++++++++++++++++++++ Misc/NEWS | 3 ++ Objects/bytesobject.c | 5 ++- Python/ast.c | 18 ++++++++----- 4 files changed, 53 insertions(+), 9 deletions(-) diff --git a/Lib/test/test_strlit.py b/Lib/test/test_strlit.py --- a/Lib/test/test_strlit.py +++ b/Lib/test/test_strlit.py @@ -50,6 +50,10 @@ assert ord(f) == 0x1881 g = r'\u1881' assert list(map(ord, g)) == [92, 117, 49, 56, 56, 49] +h = '\U0001d120' +assert ord(h) == 0x1d120 +i = r'\U0001d120' +assert list(map(ord, i)) == [92, 85, 48, 48, 48, 49, 100, 49, 50, 48] """ @@ -82,6 +86,24 @@ self.assertEqual(eval(""" '\x81' """), chr(0x81)) self.assertEqual(eval(r""" '\u1881' """), chr(0x1881)) self.assertEqual(eval(""" '\u1881' """), chr(0x1881)) + self.assertEqual(eval(r""" '\U0001d120' """), chr(0x1d120)) + self.assertEqual(eval(""" '\U0001d120' """), chr(0x1d120)) + + def test_eval_str_incomplete(self): + self.assertRaises(SyntaxError, eval, r""" '\x' """) + self.assertRaises(SyntaxError, eval, r""" '\x0' """) + self.assertRaises(SyntaxError, eval, r""" '\u' """) + self.assertRaises(SyntaxError, eval, r""" '\u0' """) + self.assertRaises(SyntaxError, eval, r""" '\u00' """) + self.assertRaises(SyntaxError, eval, r""" '\u000' """) + self.assertRaises(SyntaxError, eval, r""" '\U' """) + self.assertRaises(SyntaxError, eval, r""" '\U0' """) + self.assertRaises(SyntaxError, eval, r""" '\U00' """) + self.assertRaises(SyntaxError, eval, r""" '\U000' """) + self.assertRaises(SyntaxError, eval, r""" '\U0000' """) + self.assertRaises(SyntaxError, eval, r""" '\U00000' """) + self.assertRaises(SyntaxError, eval, r""" '\U000000' """) + self.assertRaises(SyntaxError, eval, r""" '\U0000000' """) def test_eval_str_raw(self): self.assertEqual(eval(""" r'x' """), 'x') @@ -91,6 +113,8 @@ self.assertEqual(eval(""" r'\x81' """), chr(0x81)) self.assertEqual(eval(r""" r'\u1881' """), '\\' + 'u1881') self.assertEqual(eval(""" r'\u1881' 
"""), chr(0x1881)) + self.assertEqual(eval(r""" r'\U0001d120' """), '\\' + 'U0001d120') + self.assertEqual(eval(""" r'\U0001d120' """), chr(0x1d120)) def test_eval_bytes_normal(self): self.assertEqual(eval(""" b'x' """), b'x') @@ -100,6 +124,12 @@ self.assertRaises(SyntaxError, eval, """ b'\x81' """) self.assertEqual(eval(r""" b'\u1881' """), b'\\' + b'u1881') self.assertRaises(SyntaxError, eval, """ b'\u1881' """) + self.assertEqual(eval(r""" b'\U0001d120' """), b'\\' + b'U0001d120') + self.assertRaises(SyntaxError, eval, """ b'\U0001d120' """) + + def test_eval_bytes_incomplete(self): + self.assertRaises(SyntaxError, eval, r""" b'\x' """) + self.assertRaises(SyntaxError, eval, r""" b'\x0' """) def test_eval_bytes_raw(self): self.assertEqual(eval(""" br'x' """), b'x') @@ -109,6 +139,12 @@ self.assertRaises(SyntaxError, eval, """ br'\x81' """) self.assertEqual(eval(r""" br'\u1881' """), b"\\" + b"u1881") self.assertRaises(SyntaxError, eval, """ br'\u1881' """) + self.assertEqual(eval(r""" br'\U0001d120' """), b"\\" + b"U0001d120") + self.assertRaises(SyntaxError, eval, """ br'\U0001d120' """) + self.assertRaises(SyntaxError, eval, """ rb'' """) + self.assertRaises(SyntaxError, eval, """ bb'' """) + self.assertRaises(SyntaxError, eval, """ rr'' """) + self.assertRaises(SyntaxError, eval, """ brr'' """) def check_encoding(self, encoding, extra=""): modname = "xx_" + encoding.replace("-", "_") diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError + and a full traceback including line number. + - Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) in the interpreter. 
diff --git a/Objects/bytesobject.c b/Objects/bytesobject.c --- a/Objects/bytesobject.c +++ b/Objects/bytesobject.c @@ -469,8 +469,9 @@ break; } if (!errors || strcmp(errors, "strict") == 0) { - PyErr_SetString(PyExc_ValueError, - "invalid \\x escape"); + PyErr_Format(PyExc_ValueError, + "invalid \\x escape at position %d", + s - 2 - (end - len)); goto failed; } if (strcmp(errors, "replace") == 0) { diff --git a/Python/ast.c b/Python/ast.c --- a/Python/ast.c +++ b/Python/ast.c @@ -1368,20 +1368,24 @@ case STRING: { PyObject *str = parsestrplus(c, n, &bytesmode); if (!str) { - if (PyErr_ExceptionMatches(PyExc_UnicodeError)) { + const char *errtype = NULL; + if (PyErr_ExceptionMatches(PyExc_UnicodeError)) + errtype = "unicode error"; + else if (PyErr_ExceptionMatches(PyExc_ValueError)) + errtype = "value error"; + if (errtype) { + char buf[128]; PyObject *type, *value, *tback, *errstr; PyErr_Fetch(&type, &value, &tback); errstr = PyObject_Str(value); if (errstr) { - char *s = ""; - char buf[128]; - s = _PyUnicode_AsString(errstr); - PyOS_snprintf(buf, sizeof(buf), "(unicode error) %s", s); - ast_error(n, buf); + char *s = _PyUnicode_AsString(errstr); + PyOS_snprintf(buf, sizeof(buf), "(%s) %s", errtype, s); Py_DECREF(errstr); } else { - ast_error(n, "(unicode error) unknown error"); + PyOS_snprintf(buf, sizeof(buf), "(%s) unknown error", errtype); } + ast_error(n, buf); Py_DECREF(type); Py_DECREF(value); Py_XDECREF(tback); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 16:45:35 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 16:45:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2312983=3A_Bytes_literals_with_invalid_=5Cx_escape_now_?= =?utf-8?q?raise_a_SyntaxError?= Message-ID: <3Z3vbg1fb0zQGJ@mail.python.org> http://hg.python.org/cpython/rev/d5b731446a91 changeset: 82137:d5b731446a91 branch: 3.3 parent: 82131:e9b4f2927412 
parent: 82136:305210a08fc9 user: Serhiy Storchaka date: Sun Feb 10 17:42:01 2013 +0200 summary: Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError and a full traceback including line number. files: Lib/test/test_strlit.py | 34 +++++++++++++++++++++++++++++ Misc/NEWS | 3 ++ Objects/bytesobject.c | 5 ++- Python/ast.c | 18 +++++++++----- 4 files changed, 51 insertions(+), 9 deletions(-) diff --git a/Lib/test/test_strlit.py b/Lib/test/test_strlit.py --- a/Lib/test/test_strlit.py +++ b/Lib/test/test_strlit.py @@ -50,6 +50,10 @@ assert ord(f) == 0x1881 g = r'\u1881' assert list(map(ord, g)) == [92, 117, 49, 56, 56, 49] +h = '\U0001d120' +assert ord(h) == 0x1d120 +i = r'\U0001d120' +assert list(map(ord, i)) == [92, 85, 48, 48, 48, 49, 100, 49, 50, 48] """ @@ -82,6 +86,24 @@ self.assertEqual(eval(""" '\x81' """), chr(0x81)) self.assertEqual(eval(r""" '\u1881' """), chr(0x1881)) self.assertEqual(eval(""" '\u1881' """), chr(0x1881)) + self.assertEqual(eval(r""" '\U0001d120' """), chr(0x1d120)) + self.assertEqual(eval(""" '\U0001d120' """), chr(0x1d120)) + + def test_eval_str_incomplete(self): + self.assertRaises(SyntaxError, eval, r""" '\x' """) + self.assertRaises(SyntaxError, eval, r""" '\x0' """) + self.assertRaises(SyntaxError, eval, r""" '\u' """) + self.assertRaises(SyntaxError, eval, r""" '\u0' """) + self.assertRaises(SyntaxError, eval, r""" '\u00' """) + self.assertRaises(SyntaxError, eval, r""" '\u000' """) + self.assertRaises(SyntaxError, eval, r""" '\U' """) + self.assertRaises(SyntaxError, eval, r""" '\U0' """) + self.assertRaises(SyntaxError, eval, r""" '\U00' """) + self.assertRaises(SyntaxError, eval, r""" '\U000' """) + self.assertRaises(SyntaxError, eval, r""" '\U0000' """) + self.assertRaises(SyntaxError, eval, r""" '\U00000' """) + self.assertRaises(SyntaxError, eval, r""" '\U000000' """) + self.assertRaises(SyntaxError, eval, r""" '\U0000000' """) def test_eval_str_raw(self): self.assertEqual(eval(""" r'x' """), 'x') @@ -91,6 
+113,8 @@ self.assertEqual(eval(""" r'\x81' """), chr(0x81)) self.assertEqual(eval(r""" r'\u1881' """), '\\' + 'u1881') self.assertEqual(eval(""" r'\u1881' """), chr(0x1881)) + self.assertEqual(eval(r""" r'\U0001d120' """), '\\' + 'U0001d120') + self.assertEqual(eval(""" r'\U0001d120' """), chr(0x1d120)) def test_eval_bytes_normal(self): self.assertEqual(eval(""" b'x' """), b'x') @@ -100,6 +124,12 @@ self.assertRaises(SyntaxError, eval, """ b'\x81' """) self.assertEqual(eval(r""" b'\u1881' """), b'\\' + b'u1881') self.assertRaises(SyntaxError, eval, """ b'\u1881' """) + self.assertEqual(eval(r""" b'\U0001d120' """), b'\\' + b'U0001d120') + self.assertRaises(SyntaxError, eval, """ b'\U0001d120' """) + + def test_eval_bytes_incomplete(self): + self.assertRaises(SyntaxError, eval, r""" b'\x' """) + self.assertRaises(SyntaxError, eval, r""" b'\x0' """) def test_eval_bytes_raw(self): self.assertEqual(eval(""" br'x' """), b'x') @@ -116,6 +146,10 @@ self.assertEqual(eval(r""" rb'\u1881' """), b"\\" + b"u1881") self.assertRaises(SyntaxError, eval, """ br'\u1881' """) self.assertRaises(SyntaxError, eval, """ rb'\u1881' """) + self.assertEqual(eval(r""" br'\U0001d120' """), b"\\" + b"U0001d120") + self.assertEqual(eval(r""" rb'\U0001d120' """), b"\\" + b"U0001d120") + self.assertRaises(SyntaxError, eval, """ br'\U0001d120' """) + self.assertRaises(SyntaxError, eval, """ rb'\U0001d120' """) self.assertRaises(SyntaxError, eval, """ bb'' """) self.assertRaises(SyntaxError, eval, """ rr'' """) self.assertRaises(SyntaxError, eval, """ brr'' """) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError + and a full traceback including line number. + - Issue #17173: Remove uses of locale-dependent C functions (isalpha() etc.) in the interpreter. 
diff --git a/Objects/bytesobject.c b/Objects/bytesobject.c --- a/Objects/bytesobject.c +++ b/Objects/bytesobject.c @@ -465,8 +465,9 @@ break; } if (!errors || strcmp(errors, "strict") == 0) { - PyErr_SetString(PyExc_ValueError, - "invalid \\x escape"); + PyErr_Format(PyExc_ValueError, + "invalid \\x escape at position %d", + s - 2 - (end - len)); goto failed; } if (strcmp(errors, "replace") == 0) { diff --git a/Python/ast.c b/Python/ast.c --- a/Python/ast.c +++ b/Python/ast.c @@ -1829,20 +1829,24 @@ case STRING: { PyObject *str = parsestrplus(c, n, &bytesmode); if (!str) { - if (PyErr_ExceptionMatches(PyExc_UnicodeError)) { + const char *errtype = NULL; + if (PyErr_ExceptionMatches(PyExc_UnicodeError)) + errtype = "unicode error"; + else if (PyErr_ExceptionMatches(PyExc_ValueError)) + errtype = "value error"; + if (errtype) { + char buf[128]; PyObject *type, *value, *tback, *errstr; PyErr_Fetch(&type, &value, &tback); errstr = PyObject_Str(value); if (errstr) { - char *s = ""; - char buf[128]; - s = _PyUnicode_AsString(errstr); - PyOS_snprintf(buf, sizeof(buf), "(unicode error) %s", s); - ast_error(c, n, buf); + char *s = _PyUnicode_AsString(errstr); + PyOS_snprintf(buf, sizeof(buf), "(%s) %s", errtype, s); Py_DECREF(errstr); } else { - ast_error(c, n, "(unicode error) unknown error"); + PyOS_snprintf(buf, sizeof(buf), "(%s) unknown error", errtype); } + ast_error(c, n, buf); Py_DECREF(type); Py_DECREF(value); Py_XDECREF(tback); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 16:45:36 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 16:45:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2312983=3A_Bytes_literals_with_invalid_=5Cx_escap?= =?utf-8?q?e_now_raise_a_SyntaxError?= Message-ID: <3Z3vbh64vczQCl@mail.python.org> http://hg.python.org/cpython/rev/fe410292cba6 changeset: 82138:fe410292cba6 parent: 82135:6917402c6191 
parent: 82137:d5b731446a91 user: Serhiy Storchaka date: Sun Feb 10 17:43:25 2013 +0200 summary: Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError and a full traceback including line number. files: Lib/test/test_strlit.py | 34 +++++++++++++++++++++++++++++ Misc/NEWS | 3 ++ Objects/bytesobject.c | 5 ++- Python/ast.c | 18 +++++++++----- 4 files changed, 51 insertions(+), 9 deletions(-) diff --git a/Lib/test/test_strlit.py b/Lib/test/test_strlit.py --- a/Lib/test/test_strlit.py +++ b/Lib/test/test_strlit.py @@ -50,6 +50,10 @@ assert ord(f) == 0x1881 g = r'\u1881' assert list(map(ord, g)) == [92, 117, 49, 56, 56, 49] +h = '\U0001d120' +assert ord(h) == 0x1d120 +i = r'\U0001d120' +assert list(map(ord, i)) == [92, 85, 48, 48, 48, 49, 100, 49, 50, 48] """ @@ -82,6 +86,24 @@ self.assertEqual(eval(""" '\x81' """), chr(0x81)) self.assertEqual(eval(r""" '\u1881' """), chr(0x1881)) self.assertEqual(eval(""" '\u1881' """), chr(0x1881)) + self.assertEqual(eval(r""" '\U0001d120' """), chr(0x1d120)) + self.assertEqual(eval(""" '\U0001d120' """), chr(0x1d120)) + + def test_eval_str_incomplete(self): + self.assertRaises(SyntaxError, eval, r""" '\x' """) + self.assertRaises(SyntaxError, eval, r""" '\x0' """) + self.assertRaises(SyntaxError, eval, r""" '\u' """) + self.assertRaises(SyntaxError, eval, r""" '\u0' """) + self.assertRaises(SyntaxError, eval, r""" '\u00' """) + self.assertRaises(SyntaxError, eval, r""" '\u000' """) + self.assertRaises(SyntaxError, eval, r""" '\U' """) + self.assertRaises(SyntaxError, eval, r""" '\U0' """) + self.assertRaises(SyntaxError, eval, r""" '\U00' """) + self.assertRaises(SyntaxError, eval, r""" '\U000' """) + self.assertRaises(SyntaxError, eval, r""" '\U0000' """) + self.assertRaises(SyntaxError, eval, r""" '\U00000' """) + self.assertRaises(SyntaxError, eval, r""" '\U000000' """) + self.assertRaises(SyntaxError, eval, r""" '\U0000000' """) def test_eval_str_raw(self): self.assertEqual(eval(""" r'x' """), 'x') @@ -91,6 
+113,8 @@ self.assertEqual(eval(""" r'\x81' """), chr(0x81)) self.assertEqual(eval(r""" r'\u1881' """), '\\' + 'u1881') self.assertEqual(eval(""" r'\u1881' """), chr(0x1881)) + self.assertEqual(eval(r""" r'\U0001d120' """), '\\' + 'U0001d120') + self.assertEqual(eval(""" r'\U0001d120' """), chr(0x1d120)) def test_eval_bytes_normal(self): self.assertEqual(eval(""" b'x' """), b'x') @@ -100,6 +124,12 @@ self.assertRaises(SyntaxError, eval, """ b'\x81' """) self.assertEqual(eval(r""" b'\u1881' """), b'\\' + b'u1881') self.assertRaises(SyntaxError, eval, """ b'\u1881' """) + self.assertEqual(eval(r""" b'\U0001d120' """), b'\\' + b'U0001d120') + self.assertRaises(SyntaxError, eval, """ b'\U0001d120' """) + + def test_eval_bytes_incomplete(self): + self.assertRaises(SyntaxError, eval, r""" b'\x' """) + self.assertRaises(SyntaxError, eval, r""" b'\x0' """) def test_eval_bytes_raw(self): self.assertEqual(eval(""" br'x' """), b'x') @@ -116,6 +146,10 @@ self.assertEqual(eval(r""" rb'\u1881' """), b"\\" + b"u1881") self.assertRaises(SyntaxError, eval, """ br'\u1881' """) self.assertRaises(SyntaxError, eval, """ rb'\u1881' """) + self.assertEqual(eval(r""" br'\U0001d120' """), b"\\" + b"U0001d120") + self.assertEqual(eval(r""" rb'\U0001d120' """), b"\\" + b"U0001d120") + self.assertRaises(SyntaxError, eval, """ br'\U0001d120' """) + self.assertRaises(SyntaxError, eval, """ rb'\U0001d120' """) self.assertRaises(SyntaxError, eval, """ bb'' """) self.assertRaises(SyntaxError, eval, """ rr'' """) self.assertRaises(SyntaxError, eval, """ brr'' """) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError + and a full traceback including line number. + - Issue #16967: In function definition, evaluate positional defaults before keyword-only defaults. 
diff --git a/Objects/bytesobject.c b/Objects/bytesobject.c --- a/Objects/bytesobject.c +++ b/Objects/bytesobject.c @@ -474,8 +474,9 @@ break; } if (!errors || strcmp(errors, "strict") == 0) { - PyErr_SetString(PyExc_ValueError, - "invalid \\x escape"); + PyErr_Format(PyExc_ValueError, + "invalid \\x escape at position %d", + s - 2 - (end - len)); goto failed; } if (strcmp(errors, "replace") == 0) { diff --git a/Python/ast.c b/Python/ast.c --- a/Python/ast.c +++ b/Python/ast.c @@ -1843,20 +1843,24 @@ case STRING: { PyObject *str = parsestrplus(c, n, &bytesmode); if (!str) { - if (PyErr_ExceptionMatches(PyExc_UnicodeError)) { + const char *errtype = NULL; + if (PyErr_ExceptionMatches(PyExc_UnicodeError)) + errtype = "unicode error"; + else if (PyErr_ExceptionMatches(PyExc_ValueError)) + errtype = "value error"; + if (errtype) { + char buf[128]; PyObject *type, *value, *tback, *errstr; PyErr_Fetch(&type, &value, &tback); errstr = PyObject_Str(value); if (errstr) { - char *s = ""; - char buf[128]; - s = _PyUnicode_AsString(errstr); - PyOS_snprintf(buf, sizeof(buf), "(unicode error) %s", s); - ast_error(c, n, buf); + char *s = _PyUnicode_AsString(errstr); + PyOS_snprintf(buf, sizeof(buf), "(%s) %s", errtype, s); Py_DECREF(errstr); } else { - ast_error(c, n, "(unicode error) unknown error"); + PyOS_snprintf(buf, sizeof(buf), "(%s) unknown error", errtype); } + ast_error(c, n, buf); Py_DECREF(type); Py_DECREF(value); Py_XDECREF(tback); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 18:32:46 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 18:32:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MTQx?= =?utf-8?q?=3A_random=2Evonmisesvariate=28=29_no_more_hangs_for_large_kapp?= =?utf-8?b?YXMu?= Message-ID: <3Z3xzL1ByCzSQB@mail.python.org> http://hg.python.org/cpython/rev/0f9113e1b541 changeset: 82139:0f9113e1b541 branch: 2.7 parent: 82129:6a3d18cede49 user: 
Serhiy Storchaka date: Sun Feb 10 19:27:37 2013 +0200 summary: Issue #17141: random.vonmisesvariate() no more hangs for large kappas. files: Lib/random.py | 16 +++++------- Lib/test/test_random.py | 34 +++++++++++++++++++++++++++- Misc/NEWS | 2 + 3 files changed, 41 insertions(+), 11 deletions(-) diff --git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -457,22 +457,20 @@ if kappa <= 1e-6: return TWOPI * random() - a = 1.0 + _sqrt(1.0 + 4.0 * kappa * kappa) - b = (a - _sqrt(2.0 * a))/(2.0 * kappa) - r = (1.0 + b * b)/(2.0 * b) + s = 0.5 / kappa + r = s + _sqrt(1.0 + s * s) while 1: u1 = random() + z = _cos(_pi * u1) - z = _cos(_pi * u1) - f = (1.0 + r * z)/(r + z) - c = kappa * (r - f) - + d = z / (r + z) u2 = random() - - if u2 < c * (2.0 - c) or u2 <= c * _exp(1.0 - c): + if u2 < 1.0 - d * d or u2 <= (1.0 - d) * _exp(d): break + q = 1.0 / r + f = (q + z) / (1.0 + q * z) u3 = random() if u3 > 0.5: theta = (mu + _acos(f)) % TWOPI diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -494,6 +494,7 @@ g.random = x[:].pop; g.paretovariate(1.0) g.random = x[:].pop; g.expovariate(1.0) g.random = x[:].pop; g.weibullvariate(1.0, 1.0) + g.random = x[:].pop; g.vonmisesvariate(1.0, 1.0) g.random = x[:].pop; g.normalvariate(0.0, 1.0) g.random = x[:].pop; g.gauss(0.0, 1.0) g.random = x[:].pop; g.lognormvariate(0.0, 1.0) @@ -514,6 +515,7 @@ (g.uniform, (1.0,10.0), (10.0+1.0)/2, (10.0-1.0)**2/12), (g.triangular, (0.0, 1.0, 1.0/3.0), 4.0/9.0, 7.0/9.0/18.0), (g.expovariate, (1.5,), 1/1.5, 1/1.5**2), + (g.vonmisesvariate, (1.23, 0), pi, pi**2/3), (g.paretovariate, (5.0,), 5.0/(5.0-1), 5.0/((5.0-1)**2*(5.0-2))), (g.weibullvariate, (1.0, 3.0), gamma(1+1/3.0), @@ -530,8 +532,30 @@ s1 += e s2 += (e - mu) ** 2 N = len(y) - self.assertAlmostEqual(s1/N, mu, 2) - self.assertAlmostEqual(s2/(N-1), sigmasqrd, 2) + self.assertAlmostEqual(s1/N, mu, places=2, + msg='%s%r' % (variate.__name__, 
args)) + self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2, + msg='%s%r' % (variate.__name__, args)) + + def test_constant(self): + g = random.Random() + N = 100 + for variate, args, expected in [ + (g.uniform, (10.0, 10.0), 10.0), + (g.triangular, (10.0, 10.0), 10.0), + #(g.triangular, (10.0, 10.0, 10.0), 10.0), + (g.expovariate, (float('inf'),), 0.0), + (g.vonmisesvariate, (3.0, float('inf')), 3.0), + (g.gauss, (10.0, 0.0), 10.0), + (g.lognormvariate, (0.0, 0.0), 1.0), + (g.lognormvariate, (-float('inf'), 0.0), 0.0), + (g.normalvariate, (10.0, 0.0), 10.0), + (g.paretovariate, (float('inf'),), 1.0), + (g.weibullvariate, (10.0, float('inf')), 10.0), + (g.weibullvariate, (0.0, 10.0), 0.0), + ]: + for i in range(N): + self.assertEqual(variate(*args), expected) def test_von_mises_range(self): # Issue 17149: von mises variates were not consistently in the @@ -547,6 +571,12 @@ msg=("vonmisesvariate({}, {}) produced a result {} out" " of range [0, 2*pi]").format(mu, kappa, sample)) + def test_von_mises_large_kappa(self): + # Issue #17141: vonmisesvariate() was hang for large kappas + random.vonmisesvariate(0, 1e15) + random.vonmisesvariate(0, 1e100) + + class TestModule(unittest.TestCase): def testMagicConstants(self): self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,8 @@ Library ------- +- Issue #17141: random.vonmisesvariate() no more hangs for large kappas. + - Issue #17149: Fix random.vonmisesvariate to always return results in the range [0, 2*math.pi]. 
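The two fixes recorded in this NEWS hunk (Issue #17141: no hang for huge kappas; Issue #17149: results always within [0, 2*pi]) can be smoke-tested directly. A minimal sketch, assuming a Python where both patches are applied:

```python
import math
import random

# Huge kappas used to make the rejection loop spin forever
# (#17141), and results must stay wrapped into [0, 2*pi] (#17149).
rng = random.Random(12345)
for kappa in (1e-6, 1.0, 16.0, 1e15, 1e100):
    for _ in range(100):
        sample = rng.vonmisesvariate(0.0, kappa)
        assert 0.0 <= sample <= 2.0 * math.pi, (kappa, sample)
```

This is essentially the `test_von_mises_large_kappa` and `test_von_mises_range` checks from the diff collapsed into one loop.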
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 18:32:47 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 18:32:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MTQx?= =?utf-8?q?=3A_random=2Evonmisesvariate=28=29_no_more_hangs_for_large_kapp?= =?utf-8?b?YXMu?= Message-ID: <3Z3xzM5RMSzSPt@mail.python.org> http://hg.python.org/cpython/rev/d94b73c95646 changeset: 82140:d94b73c95646 branch: 3.2 parent: 82136:305210a08fc9 user: Serhiy Storchaka date: Sun Feb 10 19:28:56 2013 +0200 summary: Issue #17141: random.vonmisesvariate() no more hangs for large kappas. files: Lib/random.py | 16 +++++------- Lib/test/test_random.py | 34 +++++++++++++++++++++++++++- Misc/NEWS | 2 + 3 files changed, 41 insertions(+), 11 deletions(-) diff --git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -431,22 +431,20 @@ if kappa <= 1e-6: return TWOPI * random() - a = 1.0 + _sqrt(1.0 + 4.0 * kappa * kappa) - b = (a - _sqrt(2.0 * a))/(2.0 * kappa) - r = (1.0 + b * b)/(2.0 * b) + s = 0.5 / kappa + r = s + _sqrt(1.0 + s * s) while 1: u1 = random() + z = _cos(_pi * u1) - z = _cos(_pi * u1) - f = (1.0 + r * z)/(r + z) - c = kappa * (r - f) - + d = z / (r + z) u2 = random() - - if u2 < c * (2.0 - c) or u2 <= c * _exp(1.0 - c): + if u2 < 1.0 - d * d or u2 <= (1.0 - d) * _exp(d): break + q = 1.0 / r + f = (q + z) / (1.0 + q * z) u3 = random() if u3 > 0.5: theta = (mu + _acos(f)) % TWOPI diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -436,6 +436,7 @@ g.random = x[:].pop; g.paretovariate(1.0) g.random = x[:].pop; g.expovariate(1.0) g.random = x[:].pop; g.weibullvariate(1.0, 1.0) + g.random = x[:].pop; g.vonmisesvariate(1.0, 1.0) g.random = x[:].pop; g.normalvariate(0.0, 1.0) g.random = x[:].pop; g.gauss(0.0, 1.0) g.random = x[:].pop; g.lognormvariate(0.0, 1.0) @@ -456,6 +457,7 @@ 
(g.uniform, (1.0,10.0), (10.0+1.0)/2, (10.0-1.0)**2/12), (g.triangular, (0.0, 1.0, 1.0/3.0), 4.0/9.0, 7.0/9.0/18.0), (g.expovariate, (1.5,), 1/1.5, 1/1.5**2), + (g.vonmisesvariate, (1.23, 0), pi, pi**2/3), (g.paretovariate, (5.0,), 5.0/(5.0-1), 5.0/((5.0-1)**2*(5.0-2))), (g.weibullvariate, (1.0, 3.0), gamma(1+1/3.0), @@ -472,8 +474,30 @@ s1 += e s2 += (e - mu) ** 2 N = len(y) - self.assertAlmostEqual(s1/N, mu, places=2) - self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2) + self.assertAlmostEqual(s1/N, mu, places=2, + msg='%s%r' % (variate.__name__, args)) + self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2, + msg='%s%r' % (variate.__name__, args)) + + def test_constant(self): + g = random.Random() + N = 100 + for variate, args, expected in [ + (g.uniform, (10.0, 10.0), 10.0), + (g.triangular, (10.0, 10.0), 10.0), + #(g.triangular, (10.0, 10.0, 10.0), 10.0), + (g.expovariate, (float('inf'),), 0.0), + (g.vonmisesvariate, (3.0, float('inf')), 3.0), + (g.gauss, (10.0, 0.0), 10.0), + (g.lognormvariate, (0.0, 0.0), 1.0), + (g.lognormvariate, (-float('inf'), 0.0), 0.0), + (g.normalvariate, (10.0, 0.0), 10.0), + (g.paretovariate, (float('inf'),), 1.0), + (g.weibullvariate, (10.0, float('inf')), 10.0), + (g.weibullvariate, (0.0, 10.0), 0.0), + ]: + for i in range(N): + self.assertEqual(variate(*args), expected) def test_von_mises_range(self): # Issue 17149: von mises variates were not consistently in the @@ -489,6 +513,12 @@ msg=("vonmisesvariate({}, {}) produced a result {} out" " of range [0, 2*pi]").format(mu, kappa, sample)) + def test_von_mises_large_kappa(self): + # Issue #17141: vonmisesvariate() was hang for large kappas + random.vonmisesvariate(0, 1e15) + random.vonmisesvariate(0, 1e100) + + class TestModule(unittest.TestCase): def testMagicConstants(self): self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -221,6 +221,8 @@ Library ------- +- Issue #17141: 
random.vonmisesvariate() no more hangs for large kappas. + - Issue #17149: Fix random.vonmisesvariate to always return results in [0, 2*math.pi]. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 18:32:49 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 18:32:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317141=3A_random=2Evonmisesvariate=28=29_no_more_hangs?= =?utf-8?q?_for_large_kappas=2E?= Message-ID: <3Z3xzP2Kr3zScY@mail.python.org> http://hg.python.org/cpython/rev/bdd993847ad0 changeset: 82141:bdd993847ad0 branch: 3.3 parent: 82137:d5b731446a91 parent: 82140:d94b73c95646 user: Serhiy Storchaka date: Sun Feb 10 19:29:20 2013 +0200 summary: Issue #17141: random.vonmisesvariate() no more hangs for large kappas. files: Lib/random.py | 16 +++++------- Lib/test/test_random.py | 34 +++++++++++++++++++++++++++- Misc/NEWS | 2 + 3 files changed, 41 insertions(+), 11 deletions(-) diff --git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -431,22 +431,20 @@ if kappa <= 1e-6: return TWOPI * random() - a = 1.0 + _sqrt(1.0 + 4.0 * kappa * kappa) - b = (a - _sqrt(2.0 * a))/(2.0 * kappa) - r = (1.0 + b * b)/(2.0 * b) + s = 0.5 / kappa + r = s + _sqrt(1.0 + s * s) while 1: u1 = random() + z = _cos(_pi * u1) - z = _cos(_pi * u1) - f = (1.0 + r * z)/(r + z) - c = kappa * (r - f) - + d = z / (r + z) u2 = random() - - if u2 < c * (2.0 - c) or u2 <= c * _exp(1.0 - c): + if u2 < 1.0 - d * d or u2 <= (1.0 - d) * _exp(d): break + q = 1.0 / r + f = (q + z) / (1.0 + q * z) u3 = random() if u3 > 0.5: theta = (mu + _acos(f)) % TWOPI diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -440,6 +440,7 @@ g.random = x[:].pop; g.paretovariate(1.0) g.random = x[:].pop; g.expovariate(1.0) g.random = x[:].pop; g.weibullvariate(1.0, 1.0) + g.random = x[:].pop; 
g.vonmisesvariate(1.0, 1.0) g.random = x[:].pop; g.normalvariate(0.0, 1.0) g.random = x[:].pop; g.gauss(0.0, 1.0) g.random = x[:].pop; g.lognormvariate(0.0, 1.0) @@ -460,6 +461,7 @@ (g.uniform, (1.0,10.0), (10.0+1.0)/2, (10.0-1.0)**2/12), (g.triangular, (0.0, 1.0, 1.0/3.0), 4.0/9.0, 7.0/9.0/18.0), (g.expovariate, (1.5,), 1/1.5, 1/1.5**2), + (g.vonmisesvariate, (1.23, 0), pi, pi**2/3), (g.paretovariate, (5.0,), 5.0/(5.0-1), 5.0/((5.0-1)**2*(5.0-2))), (g.weibullvariate, (1.0, 3.0), gamma(1+1/3.0), @@ -476,8 +478,30 @@ s1 += e s2 += (e - mu) ** 2 N = len(y) - self.assertAlmostEqual(s1/N, mu, places=2) - self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2) + self.assertAlmostEqual(s1/N, mu, places=2, + msg='%s%r' % (variate.__name__, args)) + self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2, + msg='%s%r' % (variate.__name__, args)) + + def test_constant(self): + g = random.Random() + N = 100 + for variate, args, expected in [ + (g.uniform, (10.0, 10.0), 10.0), + (g.triangular, (10.0, 10.0), 10.0), + #(g.triangular, (10.0, 10.0, 10.0), 10.0), + (g.expovariate, (float('inf'),), 0.0), + (g.vonmisesvariate, (3.0, float('inf')), 3.0), + (g.gauss, (10.0, 0.0), 10.0), + (g.lognormvariate, (0.0, 0.0), 1.0), + (g.lognormvariate, (-float('inf'), 0.0), 0.0), + (g.normalvariate, (10.0, 0.0), 10.0), + (g.paretovariate, (float('inf'),), 1.0), + (g.weibullvariate, (10.0, float('inf')), 10.0), + (g.weibullvariate, (0.0, 10.0), 0.0), + ]: + for i in range(N): + self.assertEqual(variate(*args), expected) def test_von_mises_range(self): # Issue 17149: von mises variates were not consistently in the @@ -493,6 +517,12 @@ msg=("vonmisesvariate({}, {}) produced a result {} out" " of range [0, 2*pi]").format(mu, kappa, sample)) + def test_von_mises_large_kappa(self): + # Issue #17141: vonmisesvariate() was hang for large kappas + random.vonmisesvariate(0, 1e15) + random.vonmisesvariate(0, 1e100) + + class TestModule(unittest.TestCase): def testMagicConstants(self): 
self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -175,6 +175,8 @@ Library ------- +- Issue #17141: random.vonmisesvariate() no more hangs for large kappas. + - Issue #17149: Fix random.vonmisesvariate to always return results in [0, 2*math.pi]. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 18:32:50 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 18:32:50 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317141=3A_random=2Evonmisesvariate=28=29_no_more?= =?utf-8?q?_hangs_for_large_kappas=2E?= Message-ID: <3Z3xzQ6nZPzScf@mail.python.org> http://hg.python.org/cpython/rev/407625051c45 changeset: 82142:407625051c45 parent: 82138:fe410292cba6 parent: 82141:bdd993847ad0 user: Serhiy Storchaka date: Sun Feb 10 19:29:54 2013 +0200 summary: Issue #17141: random.vonmisesvariate() no more hangs for large kappas. 
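The `Lib/random.py` hunk merged here replaces the three-step computation of the wrapped-Cauchy constant with a cancellation-free form: `r = s + sqrt(1 + s*s)` where `s = 1/(2*kappa)`. For moderate kappas the old and new formulas agree to machine precision, which a quick numerical cross-check illustrates (a sketch of the equivalence, not part of the patch):

```python
import math

# Old setup (deleted lines) vs. new setup (added lines) for the
# constant r in vonmisesvariate(); algebraically equal, but the
# old form cancels badly and overflows as kappa grows.
for kappa in (0.5, 1.0, 3.0, 10.0, 100.0):
    a = 1.0 + math.sqrt(1.0 + 4.0 * kappa * kappa)
    b = (a - math.sqrt(2.0 * a)) / (2.0 * kappa)
    r_old = (1.0 + b * b) / (2.0 * b)

    s = 0.5 / kappa
    r_new = s + math.sqrt(1.0 + s * s)

    assert math.isclose(r_old, r_new, rel_tol=1e-9), (kappa, r_old, r_new)
```

For very large kappa the old expression computes `sqrt` of a number near `4*kappa**2`, which overflows for `kappa = 1e200`-scale inputs, while the new form only ever evaluates `sqrt(1 + s*s)` with tiny `s`.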
files: Lib/random.py | 16 +++++------- Lib/test/test_random.py | 34 +++++++++++++++++++++++++++- Misc/NEWS | 2 + 3 files changed, 41 insertions(+), 11 deletions(-) diff --git a/Lib/random.py b/Lib/random.py --- a/Lib/random.py +++ b/Lib/random.py @@ -432,22 +432,20 @@ if kappa <= 1e-6: return TWOPI * random() - a = 1.0 + _sqrt(1.0 + 4.0 * kappa * kappa) - b = (a - _sqrt(2.0 * a))/(2.0 * kappa) - r = (1.0 + b * b)/(2.0 * b) + s = 0.5 / kappa + r = s + _sqrt(1.0 + s * s) while 1: u1 = random() + z = _cos(_pi * u1) - z = _cos(_pi * u1) - f = (1.0 + r * z)/(r + z) - c = kappa * (r - f) - + d = z / (r + z) u2 = random() - - if u2 < c * (2.0 - c) or u2 <= c * _exp(1.0 - c): + if u2 < 1.0 - d * d or u2 <= (1.0 - d) * _exp(d): break + q = 1.0 / r + f = (q + z) / (1.0 + q * z) u3 = random() if u3 > 0.5: theta = (mu + _acos(f)) % TWOPI diff --git a/Lib/test/test_random.py b/Lib/test/test_random.py --- a/Lib/test/test_random.py +++ b/Lib/test/test_random.py @@ -473,6 +473,7 @@ g.random = x[:].pop; g.paretovariate(1.0) g.random = x[:].pop; g.expovariate(1.0) g.random = x[:].pop; g.weibullvariate(1.0, 1.0) + g.random = x[:].pop; g.vonmisesvariate(1.0, 1.0) g.random = x[:].pop; g.normalvariate(0.0, 1.0) g.random = x[:].pop; g.gauss(0.0, 1.0) g.random = x[:].pop; g.lognormvariate(0.0, 1.0) @@ -493,6 +494,7 @@ (g.uniform, (1.0,10.0), (10.0+1.0)/2, (10.0-1.0)**2/12), (g.triangular, (0.0, 1.0, 1.0/3.0), 4.0/9.0, 7.0/9.0/18.0), (g.expovariate, (1.5,), 1/1.5, 1/1.5**2), + (g.vonmisesvariate, (1.23, 0), pi, pi**2/3), (g.paretovariate, (5.0,), 5.0/(5.0-1), 5.0/((5.0-1)**2*(5.0-2))), (g.weibullvariate, (1.0, 3.0), gamma(1+1/3.0), @@ -509,8 +511,30 @@ s1 += e s2 += (e - mu) ** 2 N = len(y) - self.assertAlmostEqual(s1/N, mu, places=2) - self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2) + self.assertAlmostEqual(s1/N, mu, places=2, + msg='%s%r' % (variate.__name__, args)) + self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2, + msg='%s%r' % (variate.__name__, args)) + + def 
test_constant(self): + g = random.Random() + N = 100 + for variate, args, expected in [ + (g.uniform, (10.0, 10.0), 10.0), + (g.triangular, (10.0, 10.0), 10.0), + #(g.triangular, (10.0, 10.0, 10.0), 10.0), + (g.expovariate, (float('inf'),), 0.0), + (g.vonmisesvariate, (3.0, float('inf')), 3.0), + (g.gauss, (10.0, 0.0), 10.0), + (g.lognormvariate, (0.0, 0.0), 1.0), + (g.lognormvariate, (-float('inf'), 0.0), 0.0), + (g.normalvariate, (10.0, 0.0), 10.0), + (g.paretovariate, (float('inf'),), 1.0), + (g.weibullvariate, (10.0, float('inf')), 10.0), + (g.weibullvariate, (0.0, 10.0), 0.0), + ]: + for i in range(N): + self.assertEqual(variate(*args), expected) def test_von_mises_range(self): # Issue 17149: von mises variates were not consistently in the @@ -526,6 +550,12 @@ msg=("vonmisesvariate({}, {}) produced a result {} out" " of range [0, 2*pi]").format(mu, kappa, sample)) + def test_von_mises_large_kappa(self): + # Issue #17141: vonmisesvariate() was hang for large kappas + random.vonmisesvariate(0, 1e15) + random.vonmisesvariate(0, 1e100) + + class TestModule(unittest.TestCase): def testMagicConstants(self): self.assertAlmostEqual(random.NV_MAGICCONST, 1.71552776992141) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -250,6 +250,8 @@ Library ------- +- Issue #17141: random.vonmisesvariate() no more hangs for large kappas. + - Issue #17149: Fix random.vonmisesvariate to always return results in [0, 2*math.pi]. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 18:58:31 2013 From: python-checkins at python.org (mark.dickinson) Date: Sun, 10 Feb 2013 18:58:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_Add_self_to_experts_for_r?= =?utf-8?q?andom_module=2E?= Message-ID: <3Z3yY31VlWzPjX@mail.python.org> http://hg.python.org/devguide/rev/475cdab4e464 changeset: 597:475cdab4e464 user: Mark Dickinson date: Sun Feb 10 17:58:14 2013 +0000 summary: Add self to experts for random module. 
files: experts.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/experts.rst b/experts.rst --- a/experts.rst +++ b/experts.rst @@ -180,7 +180,7 @@ pydoc queue rhettinger quopri -random rhettinger +random rhettinger, mark.dickinson re effbot (inactive), pitrou, ezio.melotti readline reprlib -- Repository URL: http://hg.python.org/devguide From python-checkins at python.org Sun Feb 10 19:32:33 2013 From: python-checkins at python.org (antoine.pitrou) Date: Sun, 10 Feb 2013 19:32:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2317165=3A_fix_a_ba?= =?utf-8?q?re_import_in_=5Fstrptime=2Epy=2E?= Message-ID: <3Z3zJK1jwbzRJp@mail.python.org> http://hg.python.org/cpython/rev/1557d25b0f6e changeset: 82143:1557d25b0f6e user: Antoine Pitrou date: Sun Feb 10 19:29:17 2013 +0100 summary: Issue #17165: fix a bare import in _strptime.py. Patch by Berker Peksag. files: Lib/_strptime.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/_strptime.py b/Lib/_strptime.py --- a/Lib/_strptime.py +++ b/Lib/_strptime.py @@ -21,7 +21,7 @@ timezone as datetime_timezone) try: from _thread import allocate_lock as _thread_allocate_lock -except: +except ImportError: from _dummy_thread import allocate_lock as _thread_allocate_lock __all__ = [] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 19:43:33 2013 From: python-checkins at python.org (mark.dickinson) Date: Sun, 10 Feb 2013 19:43:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Fix_ReST_role_markup=3A_?= =?utf-8?b?Om1ldGhvZDogLT4gOm1ldGg6?= Message-ID: <3Z3zY11jnZzScY@mail.python.org> http://hg.python.org/cpython/rev/44817c9e8ef7 changeset: 82144:44817c9e8ef7 user: Mark Dickinson date: Sun Feb 10 18:43:16 2013 +0000 summary: Fix ReST role markup: :method: -> :meth: files: Doc/library/socket.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/socket.rst b/Doc/library/socket.rst --- 
a/Doc/library/socket.rst +++ b/Doc/library/socket.rst @@ -1350,7 +1350,7 @@ socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) After binding (:const:`CAN_RAW`) or connecting (:const:`CAN_BCM`) the socket, you -can use the :method:`socket.send`, and the :method:`socket.recv` operations (and +can use the :meth:`socket.send`, and the :meth:`socket.recv` operations (and their counterparts) on the socket object as usual. This example might require special priviledge:: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 20:34:46 2013 From: python-checkins at python.org (brett.cannon) Date: Sun, 10 Feb 2013 20:34:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Update_from_Lennart?= Message-ID: <3Z40h63PzTzPl8@mail.python.org> http://hg.python.org/peps/rev/fe7cd22d1064 changeset: 4732:fe7cd22d1064 user: Brett Cannon date: Sun Feb 10 14:34:41 2013 -0500 summary: Update from Lennart files: pep-0431.txt | 28 ++++++++++++---------------- 1 files changed, 12 insertions(+), 16 deletions(-) diff --git a/pep-0431.txt b/pep-0431.txt --- a/pep-0431.txt +++ b/pep-0431.txt @@ -8,7 +8,7 @@ Type: Standards Track Content-Type: text/x-rst Created: 11-Dec-2012 -Post-History: 11-Dec-2012, 28-Dec-2012 +Post-History: 11-Dec-2012, 28-Dec-2012, 28-Jan-2013 Abstract @@ -94,7 +94,7 @@ When changing over from daylight savings time (DST) the clock is turned back one hour. This means that the times during that hour happens twice, once -without DST and then once with DST. Similarly, when changing to daylight +with DST and then once without DST. Similarly, when changing to daylight savings time, one hour goes missing. The current time zone API can not differentiate between the two ambiguous @@ -156,10 +156,10 @@ function, one new exception and four new collections. In addition to this, several methods on the datetime object gets a new ``is_dst`` parameter. 
-New class ``DstTzInfo`` -^^^^^^^^^^^^^^^^^^^^^^^^ +New class ``dsttimezone`` +^^^^^^^^^^^^^^^^^^^^^^^^^ -This class provides a concrete implementation of the ``zoneinfo`` base +This class provides a concrete implementation of the ``tzinfo`` base class that implements DST support. @@ -176,10 +176,10 @@ database which should be used. If not specified, the function will look for databases in the following order: -1. Use the database in ``/usr/share/zoneinfo``, if it exists. +1. Check if the `tzdata-update` module is installed, and then use that + database. -2. Check if the `tzdata-update` module is installed, and then use that - database. +2. Use the database in ``/usr/share/zoneinfo``, if it exists. 3. Use the Python-provided database in ``Lib/tzdata``. @@ -206,7 +206,7 @@ ``False`` will specify that the given datetime should be interpreted as not happening during daylight savings time, i.e. that the time specified is after -the change from DST. +the change from DST. This is default to preserve existing behavior. ``True`` will specify that the given datetime should be interpreted as happening during daylight savings time, i.e. that the time specified is before the change @@ -224,7 +224,7 @@ This exception is a subclass of KeyError and raised when giving a time zone specification that can't be found:: - >>> datetime.Timezone('Europe/New_York') + >>> datetime.zoneinfo('Europe/New_York') Traceback (most recent call last): ... 
UnknownTimeZoneError: There is no time zone called 'Europe/New_York' @@ -250,8 +250,8 @@ * ``NonExistentTimeError`` - This exception is raised when giving a datetime specification that is ambiguous - while setting ``is_dst`` to None:: + This exception is raised when giving a datetime specification for a time that due to + daylight saving does not exist, while setting ``is_dst`` to None:: >>> datetime(2012, 3, 25, 2, 0, tzinfo=zoneinfo('Europe/Stockholm'), is_dst=None) >>> @@ -266,13 +266,9 @@ * ``all_timezones`` is the exhaustive list of the time zone names that can be used, listed alphabethically. -* ``all_timezones_set`` is a set of the time zones in ``all_timezones``. - * ``common_timezones`` is a list of useful, current time zones, listed alphabethically. -* ``common_timezones_set`` is a set of the time zones in ``common_timezones``. - The ``tzdata-update``-package ----------------------------- -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 10 21:04:08 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 21:04:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzQ1OTE6?= =?utf-8?q?_Uid_and_gid_values_larger_than_2**31_are_supported_now=2E?= Message-ID: <3Z41L04FKnzSdF@mail.python.org> http://hg.python.org/cpython/rev/b322655a4a88 changeset: 82145:b322655a4a88 branch: 3.3 parent: 82141:bdd993847ad0 user: Serhiy Storchaka date: Sun Feb 10 21:56:49 2013 +0200 summary: Issue #4591: Uid and gid values larger than 2**31 are supported now. 
files: Lib/test/test_posix.py | 29 ++- Makefile.pre.in | 8 + Misc/NEWS | 2 + Modules/grpmodule.c | 17 +- Modules/posixmodule.c | 327 ++++++++++++++++++---------- Modules/posixmodule.h | 25 ++ Modules/pwdmodule.c | 16 +- Modules/signalmodule.c | 5 +- 8 files changed, 291 insertions(+), 138 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -404,10 +404,20 @@ else: self.assertTrue(stat.S_ISFIFO(posix.stat(support.TESTFN).st_mode)) - def _test_all_chown_common(self, chown_func, first_param): + def _test_all_chown_common(self, chown_func, first_param, stat_func): """Common code for chown, fchown and lchown tests.""" + def check_stat(): + if stat_func is not None: + stat = stat_func(first_param) + self.assertEqual(stat.st_uid, os.getuid()) + self.assertEqual(stat.st_gid, os.getgid()) # test a successful chown call chown_func(first_param, os.getuid(), os.getgid()) + check_stat() + chown_func(first_param, -1, os.getgid()) + check_stat() + chown_func(first_param, os.getuid(), -1) + check_stat() if os.getuid() == 0: try: @@ -427,8 +437,12 @@ "behavior") else: # non-root cannot chown to root, raises OSError - self.assertRaises(OSError, chown_func, - first_param, 0, 0) + self.assertRaises(OSError, chown_func, first_param, 0, 0) + check_stat() + self.assertRaises(OSError, chown_func, first_param, -1, 0) + check_stat() + self.assertRaises(OSError, chown_func, first_param, 0, -1) + check_stat() @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()") def test_chown(self): @@ -438,7 +452,8 @@ # re-create the file support.create_empty_file(support.TESTFN) - self._test_all_chown_common(posix.chown, support.TESTFN) + self._test_all_chown_common(posix.chown, support.TESTFN, + getattr(posix, 'stat', None)) @unittest.skipUnless(hasattr(posix, 'fchown'), "test needs os.fchown()") def test_fchown(self): @@ -448,7 +463,8 @@ test_file = open(support.TESTFN, 'w') try: fd = test_file.fileno() - 
self._test_all_chown_common(posix.fchown, fd) + self._test_all_chown_common(posix.fchown, fd, + getattr(posix, 'fstat', None)) finally: test_file.close() @@ -457,7 +473,8 @@ os.unlink(support.TESTFN) # create a symlink os.symlink(_DUMMY_SYMLINK, support.TESTFN) - self._test_all_chown_common(posix.lchown, support.TESTFN) + self._test_all_chown_common(posix.lchown, support.TESTFN, + getattr(posix, 'lstat', None)) def test_chdir(self): if hasattr(posix, 'chdir'): diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -639,6 +639,14 @@ Modules/_testembed.o: $(srcdir)/Modules/_testembed.c $(MAINCC) -c $(PY_CORE_CFLAGS) -o $@ $(srcdir)/Modules/_testembed.c +Modules/posixmodule.o: $(srcdir)/Modules/posixmodule.c $(srcdir)/Modules/posixmodule.h + +Modules/grpmodule.o: $(srcdir)/Modules/grpmodule.c $(srcdir)/Modules/posixmodule.h + +Modules/pwdmodule.o: $(srcdir)/Modules/pwdmodule.c $(srcdir)/Modules/posixmodule.h + +Modules/signalmodule.o: $(srcdir)/Modules/signalmodule.c $(srcdir)/Modules/posixmodule.h + Python/dynload_shlib.o: $(srcdir)/Python/dynload_shlib.c Makefile $(CC) -c $(PY_CORE_CFLAGS) \ -DSOABI='"$(SOABI)"' \ diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -175,6 +175,8 @@ Library ------- +- Issue #4591: Uid and gid values larger than 2**31 are supported now. + - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. 
- Issue #17149: Fix random.vonmisesvariate to always return results in diff --git a/Modules/grpmodule.c b/Modules/grpmodule.c --- a/Modules/grpmodule.c +++ b/Modules/grpmodule.c @@ -2,8 +2,8 @@ /* UNIX group file access module */ #include "Python.h" +#include "posixmodule.h" -#include #include static PyStructSequence_Field struct_group_type_fields[] = { @@ -69,7 +69,7 @@ Py_INCREF(Py_None); } #endif - SET(setIndex++, PyLong_FromLong((long) p->gr_gid)); + SET(setIndex++, _PyLong_FromGid(p->gr_gid)); SET(setIndex++, w); #undef SET @@ -85,17 +85,24 @@ grp_getgrgid(PyObject *self, PyObject *pyo_id) { PyObject *py_int_id; - unsigned int gid; + gid_t gid; struct group *p; py_int_id = PyNumber_Long(pyo_id); if (!py_int_id) return NULL; - gid = PyLong_AS_LONG(py_int_id); + if (!_Py_Gid_Converter(py_int_id, &gid)) { + Py_DECREF(py_int_id); + return NULL; + } Py_DECREF(py_int_id); if ((p = getgrgid(gid)) == NULL) { - PyErr_Format(PyExc_KeyError, "getgrgid(): gid not found: %d", gid); + PyObject *gid_obj = _PyLong_FromGid(gid); + if (gid_obj == NULL) + return NULL; + PyErr_Format(PyExc_KeyError, "getgrgid(): gid not found: %S", gid_obj); + Py_DECREF(gid_obj); return NULL; } return mkgrent(p); diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -26,6 +26,9 @@ #define PY_SSIZE_T_CLEAN #include "Python.h" +#ifndef MS_WINDOWS +#include "posixmodule.h" +#endif #if defined(__VMS) # error "PEP 11: VMS is now unsupported, code will be removed in Python 3.4" @@ -413,6 +416,121 @@ #endif +#ifndef MS_WINDOWS +PyObject * +_PyLong_FromUid(uid_t uid) +{ + if (uid == (uid_t)-1) + return PyLong_FromLong(-1); + return PyLong_FromUnsignedLong(uid); +} + +PyObject * +_PyLong_FromGid(gid_t gid) +{ + if (gid == (gid_t)-1) + return PyLong_FromLong(-1); + return PyLong_FromUnsignedLong(gid); +} + +int +_Py_Uid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow 
< 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(uid_t *)p = (uid_t)-1; + } + else { + /* unsigned uid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((uid_t)uresult == (uid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(uid_t) < sizeof(long) && + (unsigned long)(uid_t)uresult != uresult) + goto OverflowUp; + *(uid_t *)p = (uid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "user id is less than minimum"); + return 0; + +OverflowUp: + PyErr_SetString(PyExc_OverflowError, + "user id is greater than maximum"); + return 0; +} + +int +_Py_Gid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow < 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(gid_t *)p = (gid_t)-1; + } + else { + /* unsigned gid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((gid_t)uresult == (gid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(gid_t) < sizeof(long) && + (unsigned long)(gid_t)uresult != uresult) + goto OverflowUp; + *(gid_t *)p = (gid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "group id is less than minimum"); + return 0; + +OverflowUp: + PyErr_SetString(PyExc_OverflowError, + "group id is greater than maximum"); + return 0; +} +#endif /* MS_WINDOWS */ + + #ifdef AT_FDCWD /* * Why the (int) cast? 
Solaris 10 defines AT_FDCWD as 0xffd19553 (-3041965); @@ -2166,8 +2284,13 @@ PyStructSequence_SET_ITEM(v, 2, PyLong_FromLong((long)st->st_dev)); #endif PyStructSequence_SET_ITEM(v, 3, PyLong_FromLong((long)st->st_nlink)); - PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong((long)st->st_uid)); - PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong((long)st->st_gid)); +#if defined(MS_WINDOWS) + PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong(0)); + PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong(0)); +#else + PyStructSequence_SET_ITEM(v, 4, _PyLong_FromUid(st->st_uid)); + PyStructSequence_SET_ITEM(v, 5, _PyLong_FromGid(st->st_gid)); +#endif #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 6, PyLong_FromLongLong((PY_LONG_LONG)st->st_size)); @@ -2972,7 +3095,6 @@ posix_chown(PyObject *self, PyObject *args, PyObject *kwargs) { path_t path; - long uid_l, gid_l; uid_t uid; gid_t gid; int dir_fd = DEFAULT_DIR_FD; @@ -2986,9 +3108,10 @@ #ifdef HAVE_FCHOWN path.allow_fd = 1; #endif - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O&ll|$O&p:chown", keywords, + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O&O&O&|$O&p:chown", keywords, path_converter, &path, - &uid_l, &gid_l, + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid, #ifdef HAVE_FCHOWNAT dir_fd_converter, &dir_fd, #else @@ -3019,8 +3142,6 @@ #endif Py_BEGIN_ALLOW_THREADS - uid = (uid_t)uid_l; - gid = (uid_t)gid_l; #ifdef HAVE_FCHOWN if (path.fd != -1) result = fchown(path.fd, uid, gid); @@ -3064,12 +3185,15 @@ posix_fchown(PyObject *self, PyObject *args) { int fd; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "ill:fchown", &fd, &uid, &gid)) + if (!PyArg_ParseTuple(args, "iO&O&:fchown", &fd, + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; Py_BEGIN_ALLOW_THREADS - res = fchown(fd, (uid_t) uid, (gid_t) gid); + res = fchown(fd, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error(); @@ -3089,15 +3213,17 @@ { PyObject *opath; char *path; - long 
uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "O&ll:lchown", + if (!PyArg_ParseTuple(args, "O&O&O&:lchown", PyUnicode_FSConverter, &opath, - &uid, &gid)) + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; path = PyBytes_AsString(opath); Py_BEGIN_ALLOW_THREADS - res = lchown(path, (uid_t) uid, (gid_t) gid); + res = lchown(path, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error_with_allocated_filename(opath); @@ -6030,7 +6156,7 @@ static PyObject * posix_getegid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getegid()); + return _PyLong_FromGid(getegid()); } #endif @@ -6043,7 +6169,7 @@ static PyObject * posix_geteuid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)geteuid()); + return _PyLong_FromUid(geteuid()); } #endif @@ -6056,7 +6182,7 @@ static PyObject * posix_getgid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getgid()); + return _PyLong_FromGid(getgid()); } #endif @@ -6098,8 +6224,14 @@ #endif ngroups = MAX_GROUPS; - if (!PyArg_ParseTuple(args, "si", &user, &basegid)) - return NULL; +#ifdef __APPLE__ + if (!PyArg_ParseTuple(args, "si:getgrouplist", &user, &basegid)) + return NULL; +#else + if (!PyArg_ParseTuple(args, "sO&:getgrouplist", &user, + _Py_Gid_Converter, &basegid)) + return NULL; +#endif #ifdef __APPLE__ groups = PyMem_Malloc(ngroups * sizeof(int)); @@ -6121,7 +6253,11 @@ } for (i = 0; i < ngroups; i++) { +#ifdef __APPLE__ PyObject *o = PyLong_FromUnsignedLong((unsigned long)groups[i]); +#else + PyObject *o = _PyLong_FromGid(groups[i]); +#endif if (o == NULL) { Py_DECREF(list); PyMem_Del(groups); @@ -6195,7 +6331,7 @@ if (result != NULL) { int i; for (i = 0; i < n; ++i) { - PyObject *o = PyLong_FromLong((long)alt_grouplist[i]); + PyObject *o = _PyLong_FromGid(alt_grouplist[i]); if (o == NULL) { Py_DECREF(result); result = NULL; @@ -6226,14 +6362,25 @@ PyObject *oname; char *username; int res; - long gid; - - if 
(!PyArg_ParseTuple(args, "O&l:initgroups", - PyUnicode_FSConverter, &oname, &gid)) +#ifdef __APPLE__ + int gid; +#else + gid_t gid; +#endif + +#ifdef __APPLE__ + if (!PyArg_ParseTuple(args, "O&i:initgroups", + PyUnicode_FSConverter, &oname, + &gid)) +#else + if (!PyArg_ParseTuple(args, "O&O&:initgroups", + PyUnicode_FSConverter, &oname, + _Py_Gid_Converter, &gid)) +#endif return NULL; username = PyBytes_AS_STRING(oname); - res = initgroups(username, (gid_t) gid); + res = initgroups(username, gid); Py_DECREF(oname); if (res == -1) return PyErr_SetFromErrno(PyExc_OSError); @@ -6408,7 +6555,7 @@ static PyObject * posix_getuid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getuid()); + return _PyLong_FromUid(getuid()); } #endif @@ -6548,15 +6695,9 @@ static PyObject * posix_setuid(PyObject *self, PyObject *args) { - long uid_arg; uid_t uid; - if (!PyArg_ParseTuple(args, "l:setuid", &uid_arg)) - return NULL; - uid = uid_arg; - if (uid != uid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setuid", _Py_Uid_Converter, &uid)) + return NULL; if (setuid(uid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -6573,15 +6714,9 @@ static PyObject * posix_seteuid (PyObject *self, PyObject *args) { - long euid_arg; uid_t euid; - if (!PyArg_ParseTuple(args, "l", &euid_arg)) - return NULL; - euid = euid_arg; - if (euid != euid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:seteuid", _Py_Uid_Converter, &euid)) + return NULL; if (seteuid(euid) < 0) { return posix_error(); } else { @@ -6599,15 +6734,9 @@ static PyObject * posix_setegid (PyObject *self, PyObject *args) { - long egid_arg; gid_t egid; - if (!PyArg_ParseTuple(args, "l", &egid_arg)) - return NULL; - egid = egid_arg; - if (egid != egid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setegid", 
_Py_Gid_Converter, &egid)) + return NULL; if (setegid(egid) < 0) { return posix_error(); } else { @@ -6625,23 +6754,11 @@ static PyObject * posix_setreuid (PyObject *self, PyObject *args) { - long ruid_arg, euid_arg; uid_t ruid, euid; - if (!PyArg_ParseTuple(args, "ll", &ruid_arg, &euid_arg)) - return NULL; - if (ruid_arg == -1) - ruid = (uid_t)-1; /* let the compiler choose how -1 fits */ - else - ruid = ruid_arg; /* otherwise, assign from our long */ - if (euid_arg == -1) - euid = (uid_t)-1; - else - euid = euid_arg; - if ((euid_arg != -1 && euid != euid_arg) || - (ruid_arg != -1 && ruid != ruid_arg)) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setreuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid)) + return NULL; if (setreuid(ruid, euid) < 0) { return posix_error(); } else { @@ -6659,23 +6776,11 @@ static PyObject * posix_setregid (PyObject *self, PyObject *args) { - long rgid_arg, egid_arg; gid_t rgid, egid; - if (!PyArg_ParseTuple(args, "ll", &rgid_arg, &egid_arg)) - return NULL; - if (rgid_arg == -1) - rgid = (gid_t)-1; /* let the compiler choose how -1 fits */ - else - rgid = rgid_arg; /* otherwise, assign from our long */ - if (egid_arg == -1) - egid = (gid_t)-1; - else - egid = egid_arg; - if ((egid_arg != -1 && egid != egid_arg) || - (rgid_arg != -1 && rgid != rgid_arg)) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setregid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid)) + return NULL; if (setregid(rgid, egid) < 0) { return posix_error(); } else { @@ -6693,15 +6798,9 @@ static PyObject * posix_setgid(PyObject *self, PyObject *args) { - long gid_arg; gid_t gid; - if (!PyArg_ParseTuple(args, "l:setgid", &gid_arg)) - return NULL; - gid = gid_arg; - if (gid != gid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setgid", 
_Py_Gid_Converter, &gid)) + return NULL; if (setgid(gid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -6740,18 +6839,7 @@ Py_DECREF(elem); return NULL; } else { - unsigned long x = PyLong_AsUnsignedLong(elem); - if (PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); - Py_DECREF(elem); - return NULL; - } - grouplist[i] = x; - /* read back the value to see if it fitted in gid_t */ - if (grouplist[i] != x) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); + if (!_Py_Gid_Converter(elem, &grouplist[i])) { Py_DECREF(elem); return NULL; } @@ -6913,7 +7001,7 @@ return NULL; PyStructSequence_SET_ITEM(result, 0, PyLong_FromPid(si.si_pid)); - PyStructSequence_SET_ITEM(result, 1, PyLong_FromPid(si.si_uid)); + PyStructSequence_SET_ITEM(result, 1, _PyLong_FromUid(si.si_uid)); PyStructSequence_SET_ITEM(result, 2, PyLong_FromLong((long)(si.si_signo))); PyStructSequence_SET_ITEM(result, 3, PyLong_FromLong((long)(si.si_status))); PyStructSequence_SET_ITEM(result, 4, PyLong_FromLong((long)(si.si_code))); @@ -10197,8 +10285,11 @@ posix_setresuid (PyObject *self, PyObject *args) { /* We assume uid_t is no larger than a long. */ - long ruid, euid, suid; - if (!PyArg_ParseTuple(args, "lll", &ruid, &euid, &suid)) + uid_t ruid, euid, suid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid, + _Py_Uid_Converter, &suid)) return NULL; if (setresuid(ruid, euid, suid) < 0) return posix_error(); @@ -10214,9 +10305,11 @@ static PyObject* posix_setresgid (PyObject *self, PyObject *args) { - /* We assume uid_t is no larger than a long. 
*/ - long rgid, egid, sgid; - if (!PyArg_ParseTuple(args, "lll", &rgid, &egid, &sgid)) + gid_t rgid, egid, sgid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresgid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid, + _Py_Gid_Converter, &sgid)) return NULL; if (setresgid(rgid, egid, sgid) < 0) return posix_error(); @@ -10233,14 +10326,11 @@ posix_getresuid (PyObject *self, PyObject *noargs) { uid_t ruid, euid, suid; - long l_ruid, l_euid, l_suid; if (getresuid(&ruid, &euid, &suid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. */ - l_ruid = ruid; - l_euid = euid; - l_suid = suid; - return Py_BuildValue("(lll)", l_ruid, l_euid, l_suid); + return Py_BuildValue("(NNN)", _PyLong_FromUid(ruid), + _PyLong_FromUid(euid), + _PyLong_FromUid(suid)); } #endif @@ -10253,14 +10343,11 @@ posix_getresgid (PyObject *self, PyObject *noargs) { uid_t rgid, egid, sgid; - long l_rgid, l_egid, l_sgid; if (getresgid(&rgid, &egid, &sgid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. 
 */
-    l_rgid = rgid;
-    l_egid = egid;
-    l_sgid = sgid;
-    return Py_BuildValue("(lll)", l_rgid, l_egid, l_sgid);
+    return Py_BuildValue("(NNN)", _PyLong_FromGid(rgid),
+                                  _PyLong_FromGid(egid),
+                                  _PyLong_FromGid(sgid));
 }
 #endif

diff --git a/Modules/posixmodule.h b/Modules/posixmodule.h
new file mode 100644
--- /dev/null
+++ b/Modules/posixmodule.h
@@ -0,0 +1,25 @@
+/* Declarations shared between the different POSIX-related modules */
+
+#ifndef Py_POSIXMODULE_H
+#define Py_POSIXMODULE_H
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+
+#ifndef Py_LIMITED_API
+#ifndef MS_WINDOWS
+PyAPI_FUNC(PyObject *) _PyLong_FromUid(uid_t);
+PyAPI_FUNC(PyObject *) _PyLong_FromGid(gid_t);
+PyAPI_FUNC(int) _Py_Uid_Converter(PyObject *, void *);
+PyAPI_FUNC(int) _Py_Gid_Converter(PyObject *, void *);
+#endif /* MS_WINDOWS */
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* !Py_POSIXMODULE_H */
diff --git a/Modules/pwdmodule.c b/Modules/pwdmodule.c
--- a/Modules/pwdmodule.c
+++ b/Modules/pwdmodule.c
@@ -2,8 +2,8 @@
 /* UNIX password file access module */

 #include "Python.h"
+#include "posixmodule.h"

-#include <sys/types.h>
 #include <pwd.h>

 static PyStructSequence_Field struct_pwd_type_fields[] = {
@@ -74,8 +74,8 @@
 #else
     SETS(setIndex++, p->pw_passwd);
 #endif
-    SETI(setIndex++, p->pw_uid);
-    SETI(setIndex++, p->pw_gid);
+    PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromUid(p->pw_uid));
+    PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromGid(p->pw_gid));
 #ifdef __VMS
     SETS(setIndex++, "");
 #else
@@ -104,13 +104,17 @@
 static PyObject *
 pwd_getpwuid(PyObject *self, PyObject *args)
 {
-    unsigned int uid;
+    uid_t uid;
     struct passwd *p;
-    if (!PyArg_ParseTuple(args, "I:getpwuid", &uid))
+    if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid))
         return NULL;
     if ((p = getpwuid(uid)) == NULL) {
+        PyObject *uid_obj = _PyLong_FromUid(uid);
+        if (uid_obj == NULL)
+            return NULL;
         PyErr_Format(PyExc_KeyError,
-                     "getpwuid(): uid not found: %d", uid);
+                     "getpwuid(): uid not found: %S", uid_obj);
+        Py_DECREF(uid_obj);
         return NULL;
     }
     return mkpwent(p);
diff --git a/Modules/signalmodule.c b/Modules/signalmodule.c
--- a/Modules/signalmodule.c
+++ b/Modules/signalmodule.c
@@ -4,6 +4,9 @@
 /* XXX Signals should be recorded per thread, now we have thread state. */

 #include "Python.h"
+#ifndef MS_WINDOWS
+#include "posixmodule.h"
+#endif

 #ifdef MS_WINDOWS
 #include <windows.h>
@@ -728,7 +731,7 @@
     PyStructSequence_SET_ITEM(result, 1, PyLong_FromLong((long)(si->si_code)));
     PyStructSequence_SET_ITEM(result, 2, PyLong_FromLong((long)(si->si_errno)));
     PyStructSequence_SET_ITEM(result, 3, PyLong_FromPid(si->si_pid));
-    PyStructSequence_SET_ITEM(result, 4, PyLong_FromLong((long)(si->si_uid)));
+    PyStructSequence_SET_ITEM(result, 4, _PyLong_FromUid(si->si_uid));
     PyStructSequence_SET_ITEM(result, 5, PyLong_FromLong((long)(si->si_status)));
     PyStructSequence_SET_ITEM(result, 6, PyLong_FromLong(si->si_band));

-- 
Repository URL: http://hg.python.org/cpython

From python-checkins at python.org  Sun Feb 10 21:04:10 2013
From: python-checkins at python.org (serhiy.storchaka)
Date: Sun, 10 Feb 2013 21:04:10 +0100 (CET)
Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?=
 =?utf-8?q?=29=3A_Issue_=234591=3A_Uid_and_gid_values_larger_than_2**31_ar?=
 =?utf-8?q?e_supported_now=2E?=
Message-ID: <3Z41L22cbwzSdk@mail.python.org>

http://hg.python.org/cpython/rev/94256de0aff0
changeset:   82146:94256de0aff0
parent:      82144:44817c9e8ef7
parent:      82145:b322655a4a88
user:        Serhiy Storchaka
date:        Sun Feb 10 22:03:08 2013 +0200
summary:
  Issue #4591: Uid and gid values larger than 2**31 are supported now.
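For readers skimming the diff, the accept/reject rule implemented by the new `_Py_Uid_Converter()` / `_Py_Gid_Converter()` functions can be sketched in Python as follows. This is a minimal model, not CPython code: it assumes a 32-bit unsigned `uid_t`/`gid_t` (the common case on Linux; the real converters derive the width from the C type at compile time), and the names `ID_T_MAX` and `convert_id` are illustrative only.

```python
# Rough Python model of the _Py_Uid_Converter/_Py_Gid_Converter logic.
# ID_T_MAX assumes a 32-bit unsigned uid_t/gid_t.
ID_T_MAX = 2**32 - 1

def convert_id(value):
    value = int(value)
    if value == -1:
        # -1 is the "leave this id unchanged" sentinel used by chown()
        # and friends, so it is passed through rather than rejected.
        return -1
    if value < 0:
        raise OverflowError("id is less than minimum")
    if value >= ID_T_MAX:
        # (uid_t)-1 itself is reserved for the sentinel, so the maximum
        # representable value is also rejected.
        raise OverflowError("id is greater than maximum")
    return value
```

The practical effect is that ids in the range 2**31 .. 2**32-2 — which the old `"l"`-format parsing could reject with "user id too big" on platforms where C `long` is 32 bits — are now accepted, which is the point of issue #4591.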
files:
  Lib/test/test_posix.py |   29 ++-
  Makefile.pre.in        |    8 +
  Misc/NEWS              |    2 +
  Modules/grpmodule.c    |   17 +-
  Modules/posixmodule.c  |  327 ++++++++++++++++++----------
  Modules/posixmodule.h  |   25 ++
  Modules/pwdmodule.c    |   16 +-
  Modules/signalmodule.c |    5 +-
  8 files changed, 291 insertions(+), 138 deletions(-)

diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py
--- a/Lib/test/test_posix.py
+++ b/Lib/test/test_posix.py
@@ -404,10 +404,20 @@
         else:
             self.assertTrue(stat.S_ISFIFO(posix.stat(support.TESTFN).st_mode))

-    def _test_all_chown_common(self, chown_func, first_param):
+    def _test_all_chown_common(self, chown_func, first_param, stat_func):
         """Common code for chown, fchown and lchown tests."""
+        def check_stat():
+            if stat_func is not None:
+                stat = stat_func(first_param)
+                self.assertEqual(stat.st_uid, os.getuid())
+                self.assertEqual(stat.st_gid, os.getgid())
         # test a successful chown call
         chown_func(first_param, os.getuid(), os.getgid())
+        check_stat()
+        chown_func(first_param, -1, os.getgid())
+        check_stat()
+        chown_func(first_param, os.getuid(), -1)
+        check_stat()

         if os.getuid() == 0:
             try:
@@ -427,8 +437,12 @@
                  "behavior")
         else:
             # non-root cannot chown to root, raises OSError
-            self.assertRaises(OSError, chown_func,
-                              first_param, 0, 0)
+            self.assertRaises(OSError, chown_func, first_param, 0, 0)
+            check_stat()
+            self.assertRaises(OSError, chown_func, first_param, -1, 0)
+            check_stat()
+            self.assertRaises(OSError, chown_func, first_param, 0, -1)
+            check_stat()

     @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()")
     def test_chown(self):
@@ -438,7 +452,8 @@
         # re-create the file
         support.create_empty_file(support.TESTFN)
-        self._test_all_chown_common(posix.chown, support.TESTFN)
+        self._test_all_chown_common(posix.chown, support.TESTFN,
+                                    getattr(posix, 'stat', None))

     @unittest.skipUnless(hasattr(posix, 'fchown'), "test needs os.fchown()")
     def test_fchown(self):
@@ -448,7 +463,8 @@
         test_file = open(support.TESTFN, 'w')
         try:
             fd = test_file.fileno()
-            self._test_all_chown_common(posix.fchown, fd)
+            self._test_all_chown_common(posix.fchown, fd,
+                                        getattr(posix, 'fstat', None))
         finally:
             test_file.close()

@@ -457,7 +473,8 @@
         os.unlink(support.TESTFN)
         # create a symlink
         os.symlink(_DUMMY_SYMLINK, support.TESTFN)
-        self._test_all_chown_common(posix.lchown, support.TESTFN)
+        self._test_all_chown_common(posix.lchown, support.TESTFN,
+                                    getattr(posix, 'lstat', None))

     def test_chdir(self):
         if hasattr(posix, 'chdir'):
diff --git a/Makefile.pre.in b/Makefile.pre.in
--- a/Makefile.pre.in
+++ b/Makefile.pre.in
@@ -639,6 +639,14 @@
 Modules/_testembed.o: $(srcdir)/Modules/_testembed.c
 	$(MAINCC) -c $(PY_CORE_CFLAGS) -o $@ $(srcdir)/Modules/_testembed.c

+Modules/posixmodule.o: $(srcdir)/Modules/posixmodule.c $(srcdir)/Modules/posixmodule.h
+
+Modules/grpmodule.o: $(srcdir)/Modules/grpmodule.c $(srcdir)/Modules/posixmodule.h
+
+Modules/pwdmodule.o: $(srcdir)/Modules/pwdmodule.c $(srcdir)/Modules/posixmodule.h
+
+Modules/signalmodule.o: $(srcdir)/Modules/signalmodule.c $(srcdir)/Modules/posixmodule.h
+
 Python/dynload_shlib.o: $(srcdir)/Python/dynload_shlib.c Makefile
 	$(CC) -c $(PY_CORE_CFLAGS) \
 		-DSOABI='"$(SOABI)"' \
diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -250,6 +250,8 @@
 Library
 -------

+- Issue #4591: Uid and gid values larger than 2**31 are supported now.
+
 - Issue #17141: random.vonmisesvariate() no more hangs for large kappas.
- Issue #17149: Fix random.vonmisesvariate to always return results in diff --git a/Modules/grpmodule.c b/Modules/grpmodule.c --- a/Modules/grpmodule.c +++ b/Modules/grpmodule.c @@ -2,8 +2,8 @@ /* UNIX group file access module */ #include "Python.h" +#include "posixmodule.h" -#include #include static PyStructSequence_Field struct_group_type_fields[] = { @@ -69,7 +69,7 @@ Py_INCREF(Py_None); } #endif - SET(setIndex++, PyLong_FromLong((long) p->gr_gid)); + SET(setIndex++, _PyLong_FromGid(p->gr_gid)); SET(setIndex++, w); #undef SET @@ -85,17 +85,24 @@ grp_getgrgid(PyObject *self, PyObject *pyo_id) { PyObject *py_int_id; - unsigned int gid; + gid_t gid; struct group *p; py_int_id = PyNumber_Long(pyo_id); if (!py_int_id) return NULL; - gid = PyLong_AS_LONG(py_int_id); + if (!_Py_Gid_Converter(py_int_id, &gid)) { + Py_DECREF(py_int_id); + return NULL; + } Py_DECREF(py_int_id); if ((p = getgrgid(gid)) == NULL) { - PyErr_Format(PyExc_KeyError, "getgrgid(): gid not found: %d", gid); + PyObject *gid_obj = _PyLong_FromGid(gid); + if (gid_obj == NULL) + return NULL; + PyErr_Format(PyExc_KeyError, "getgrgid(): gid not found: %S", gid_obj); + Py_DECREF(gid_obj); return NULL; } return mkgrent(p); diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -23,6 +23,9 @@ #define PY_SSIZE_T_CLEAN #include "Python.h" +#ifndef MS_WINDOWS +#include "posixmodule.h" +#endif #if defined(__VMS) # error "PEP 11: VMS is now unsupported, code will be removed in Python 3.4" @@ -382,6 +385,121 @@ #endif +#ifndef MS_WINDOWS +PyObject * +_PyLong_FromUid(uid_t uid) +{ + if (uid == (uid_t)-1) + return PyLong_FromLong(-1); + return PyLong_FromUnsignedLong(uid); +} + +PyObject * +_PyLong_FromGid(gid_t gid) +{ + if (gid == (gid_t)-1) + return PyLong_FromLong(-1); + return PyLong_FromUnsignedLong(gid); +} + +int +_Py_Uid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow 
< 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(uid_t *)p = (uid_t)-1; + } + else { + /* unsigned uid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((uid_t)uresult == (uid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(uid_t) < sizeof(long) && + (unsigned long)(uid_t)uresult != uresult) + goto OverflowUp; + *(uid_t *)p = (uid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "user id is less than minimum"); + return 0; + +OverflowUp: + PyErr_SetString(PyExc_OverflowError, + "user id is greater than maximum"); + return 0; +} + +int +_Py_Gid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow < 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(gid_t *)p = (gid_t)-1; + } + else { + /* unsigned gid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((gid_t)uresult == (gid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(gid_t) < sizeof(long) && + (unsigned long)(gid_t)uresult != uresult) + goto OverflowUp; + *(gid_t *)p = (gid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "group id is less than minimum"); + return 0; + +OverflowUp: + PyErr_SetString(PyExc_OverflowError, + "group id is greater than maximum"); + return 0; +} +#endif /* MS_WINDOWS */ + + #ifdef AT_FDCWD /* * Why the (int) cast? 
Solaris 10 defines AT_FDCWD as 0xffd19553 (-3041965); @@ -1965,8 +2083,13 @@ PyStructSequence_SET_ITEM(v, 2, PyLong_FromLong((long)st->st_dev)); #endif PyStructSequence_SET_ITEM(v, 3, PyLong_FromLong((long)st->st_nlink)); - PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong((long)st->st_uid)); - PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong((long)st->st_gid)); +#if defined(MS_WINDOWS) + PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong(0)); + PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong(0)); +#else + PyStructSequence_SET_ITEM(v, 4, _PyLong_FromUid(st->st_uid)); + PyStructSequence_SET_ITEM(v, 5, _PyLong_FromGid(st->st_gid)); +#endif #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 6, PyLong_FromLongLong((PY_LONG_LONG)st->st_size)); @@ -2780,7 +2903,6 @@ posix_chown(PyObject *self, PyObject *args, PyObject *kwargs) { path_t path; - long uid_l, gid_l; uid_t uid; gid_t gid; int dir_fd = DEFAULT_DIR_FD; @@ -2795,9 +2917,10 @@ #ifdef HAVE_FCHOWN path.allow_fd = 1; #endif - if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O&ll|$O&p:chown", keywords, + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O&O&O&|$O&p:chown", keywords, path_converter, &path, - &uid_l, &gid_l, + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid, #ifdef HAVE_FCHOWNAT dir_fd_converter, &dir_fd, #else @@ -2828,8 +2951,6 @@ #endif Py_BEGIN_ALLOW_THREADS - uid = (uid_t)uid_l; - gid = (uid_t)gid_l; #ifdef HAVE_FCHOWN if (path.fd != -1) result = fchown(path.fd, uid, gid); @@ -2873,12 +2994,15 @@ posix_fchown(PyObject *self, PyObject *args) { int fd; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "ill:fchown", &fd, &uid, &gid)) + if (!PyArg_ParseTuple(args, "iO&O&:fchown", &fd, + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; Py_BEGIN_ALLOW_THREADS - res = fchown(fd, (uid_t) uid, (gid_t) gid); + res = fchown(fd, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error(); @@ -2897,16 +3021,18 @@ posix_lchown(PyObject *self, PyObject 
*args) { path_t path; - long uid, gid; + uid_t uid; + gid_t gid; int res; memset(&path, 0, sizeof(path)); path.function_name = "lchown"; - if (!PyArg_ParseTuple(args, "O&ll:lchown", + if (!PyArg_ParseTuple(args, "O&O&O&:lchown", path_converter, &path, - &uid, &gid)) + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; Py_BEGIN_ALLOW_THREADS - res = lchown(path.narrow, (uid_t) uid, (gid_t) gid); + res = lchown(path.narrow, uid, gid); Py_END_ALLOW_THREADS if (res < 0) { path_error(&path); @@ -5555,7 +5681,7 @@ static PyObject * posix_getegid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getegid()); + return _PyLong_FromGid(getegid()); } #endif @@ -5568,7 +5694,7 @@ static PyObject * posix_geteuid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)geteuid()); + return _PyLong_FromUid(geteuid()); } #endif @@ -5581,7 +5707,7 @@ static PyObject * posix_getgid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getgid()); + return _PyLong_FromGid(getgid()); } #endif @@ -5623,8 +5749,14 @@ #endif ngroups = MAX_GROUPS; - if (!PyArg_ParseTuple(args, "si", &user, &basegid)) - return NULL; +#ifdef __APPLE__ + if (!PyArg_ParseTuple(args, "si:getgrouplist", &user, &basegid)) + return NULL; +#else + if (!PyArg_ParseTuple(args, "sO&:getgrouplist", &user, + _Py_Gid_Converter, &basegid)) + return NULL; +#endif #ifdef __APPLE__ groups = PyMem_Malloc(ngroups * sizeof(int)); @@ -5646,7 +5778,11 @@ } for (i = 0; i < ngroups; i++) { +#ifdef __APPLE__ PyObject *o = PyLong_FromUnsignedLong((unsigned long)groups[i]); +#else + PyObject *o = _PyLong_FromGid(groups[i]); +#endif if (o == NULL) { Py_DECREF(list); PyMem_Del(groups); @@ -5720,7 +5856,7 @@ if (result != NULL) { int i; for (i = 0; i < n; ++i) { - PyObject *o = PyLong_FromLong((long)alt_grouplist[i]); + PyObject *o = _PyLong_FromGid(alt_grouplist[i]); if (o == NULL) { Py_DECREF(result); result = NULL; @@ -5751,14 +5887,25 @@ PyObject *oname; char *username; int res; - 
long gid; - - if (!PyArg_ParseTuple(args, "O&l:initgroups", - PyUnicode_FSConverter, &oname, &gid)) +#ifdef __APPLE__ + int gid; +#else + gid_t gid; +#endif + +#ifdef __APPLE__ + if (!PyArg_ParseTuple(args, "O&i:initgroups", + PyUnicode_FSConverter, &oname, + &gid)) +#else + if (!PyArg_ParseTuple(args, "O&O&:initgroups", + PyUnicode_FSConverter, &oname, + _Py_Gid_Converter, &gid)) +#endif return NULL; username = PyBytes_AS_STRING(oname); - res = initgroups(username, (gid_t) gid); + res = initgroups(username, gid); Py_DECREF(oname); if (res == -1) return PyErr_SetFromErrno(PyExc_OSError); @@ -5933,7 +6080,7 @@ static PyObject * posix_getuid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getuid()); + return _PyLong_FromUid(getuid()); } #endif @@ -6058,15 +6205,9 @@ static PyObject * posix_setuid(PyObject *self, PyObject *args) { - long uid_arg; uid_t uid; - if (!PyArg_ParseTuple(args, "l:setuid", &uid_arg)) - return NULL; - uid = uid_arg; - if (uid != uid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setuid", _Py_Uid_Converter, &uid)) + return NULL; if (setuid(uid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -6083,15 +6224,9 @@ static PyObject * posix_seteuid (PyObject *self, PyObject *args) { - long euid_arg; uid_t euid; - if (!PyArg_ParseTuple(args, "l", &euid_arg)) - return NULL; - euid = euid_arg; - if (euid != euid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:seteuid", _Py_Uid_Converter, &euid)) + return NULL; if (seteuid(euid) < 0) { return posix_error(); } else { @@ -6109,15 +6244,9 @@ static PyObject * posix_setegid (PyObject *self, PyObject *args) { - long egid_arg; gid_t egid; - if (!PyArg_ParseTuple(args, "l", &egid_arg)) - return NULL; - egid = egid_arg; - if (egid != egid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if 
(!PyArg_ParseTuple(args, "O&:setegid", _Py_Gid_Converter, &egid)) + return NULL; if (setegid(egid) < 0) { return posix_error(); } else { @@ -6135,23 +6264,11 @@ static PyObject * posix_setreuid (PyObject *self, PyObject *args) { - long ruid_arg, euid_arg; uid_t ruid, euid; - if (!PyArg_ParseTuple(args, "ll", &ruid_arg, &euid_arg)) - return NULL; - if (ruid_arg == -1) - ruid = (uid_t)-1; /* let the compiler choose how -1 fits */ - else - ruid = ruid_arg; /* otherwise, assign from our long */ - if (euid_arg == -1) - euid = (uid_t)-1; - else - euid = euid_arg; - if ((euid_arg != -1 && euid != euid_arg) || - (ruid_arg != -1 && ruid != ruid_arg)) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setreuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid)) + return NULL; if (setreuid(ruid, euid) < 0) { return posix_error(); } else { @@ -6169,23 +6286,11 @@ static PyObject * posix_setregid (PyObject *self, PyObject *args) { - long rgid_arg, egid_arg; gid_t rgid, egid; - if (!PyArg_ParseTuple(args, "ll", &rgid_arg, &egid_arg)) - return NULL; - if (rgid_arg == -1) - rgid = (gid_t)-1; /* let the compiler choose how -1 fits */ - else - rgid = rgid_arg; /* otherwise, assign from our long */ - if (egid_arg == -1) - egid = (gid_t)-1; - else - egid = egid_arg; - if ((egid_arg != -1 && egid != egid_arg) || - (rgid_arg != -1 && rgid != rgid_arg)) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setregid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid)) + return NULL; if (setregid(rgid, egid) < 0) { return posix_error(); } else { @@ -6203,15 +6308,9 @@ static PyObject * posix_setgid(PyObject *self, PyObject *args) { - long gid_arg; gid_t gid; - if (!PyArg_ParseTuple(args, "l:setgid", &gid_arg)) - return NULL; - gid = gid_arg; - if (gid != gid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if 
(!PyArg_ParseTuple(args, "O&:setgid", _Py_Gid_Converter, &gid)) + return NULL; if (setgid(gid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -6250,18 +6349,7 @@ Py_DECREF(elem); return NULL; } else { - unsigned long x = PyLong_AsUnsignedLong(elem); - if (PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); - Py_DECREF(elem); - return NULL; - } - grouplist[i] = x; - /* read back the value to see if it fitted in gid_t */ - if (grouplist[i] != x) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); + if (!_Py_Gid_Converter(elem, &grouplist[i])) { Py_DECREF(elem); return NULL; } @@ -6423,7 +6511,7 @@ return NULL; PyStructSequence_SET_ITEM(result, 0, PyLong_FromPid(si.si_pid)); - PyStructSequence_SET_ITEM(result, 1, PyLong_FromPid(si.si_uid)); + PyStructSequence_SET_ITEM(result, 1, _PyLong_FromUid(si.si_uid)); PyStructSequence_SET_ITEM(result, 2, PyLong_FromLong((long)(si.si_signo))); PyStructSequence_SET_ITEM(result, 3, PyLong_FromLong((long)(si.si_status))); PyStructSequence_SET_ITEM(result, 4, PyLong_FromLong((long)(si.si_code))); @@ -9673,8 +9761,11 @@ posix_setresuid (PyObject *self, PyObject *args) { /* We assume uid_t is no larger than a long. */ - long ruid, euid, suid; - if (!PyArg_ParseTuple(args, "lll", &ruid, &euid, &suid)) + uid_t ruid, euid, suid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid, + _Py_Uid_Converter, &suid)) return NULL; if (setresuid(ruid, euid, suid) < 0) return posix_error(); @@ -9690,9 +9781,11 @@ static PyObject* posix_setresgid (PyObject *self, PyObject *args) { - /* We assume uid_t is no larger than a long. 
*/ - long rgid, egid, sgid; - if (!PyArg_ParseTuple(args, "lll", &rgid, &egid, &sgid)) + gid_t rgid, egid, sgid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresgid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid, + _Py_Gid_Converter, &sgid)) return NULL; if (setresgid(rgid, egid, sgid) < 0) return posix_error(); @@ -9709,14 +9802,11 @@ posix_getresuid (PyObject *self, PyObject *noargs) { uid_t ruid, euid, suid; - long l_ruid, l_euid, l_suid; if (getresuid(&ruid, &euid, &suid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. */ - l_ruid = ruid; - l_euid = euid; - l_suid = suid; - return Py_BuildValue("(lll)", l_ruid, l_euid, l_suid); + return Py_BuildValue("(NNN)", _PyLong_FromUid(ruid), + _PyLong_FromUid(euid), + _PyLong_FromUid(suid)); } #endif @@ -9729,14 +9819,11 @@ posix_getresgid (PyObject *self, PyObject *noargs) { uid_t rgid, egid, sgid; - long l_rgid, l_egid, l_sgid; if (getresgid(&rgid, &egid, &sgid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. 
*/ - l_rgid = rgid; - l_egid = egid; - l_sgid = sgid; - return Py_BuildValue("(lll)", l_rgid, l_egid, l_sgid); + return Py_BuildValue("(NNN)", _PyLong_FromGid(rgid), + _PyLong_FromGid(egid), + _PyLong_FromGid(sgid)); } #endif diff --git a/Modules/posixmodule.h b/Modules/posixmodule.h new file mode 100644 --- /dev/null +++ b/Modules/posixmodule.h @@ -0,0 +1,25 @@ +/* Declarations shared between the different POSIX-related modules */ + +#ifndef Py_POSIXMODULE_H +#define Py_POSIXMODULE_H +#ifdef __cplusplus +extern "C" { +#endif + +#ifdef HAVE_SYS_TYPES_H +#include +#endif + +#ifndef Py_LIMITED_API +#ifndef MS_WINDOWS +PyAPI_FUNC(PyObject *) _PyLong_FromUid(uid_t); +PyAPI_FUNC(PyObject *) _PyLong_FromGid(gid_t); +PyAPI_FUNC(int) _Py_Uid_Converter(PyObject *, void *); +PyAPI_FUNC(int) _Py_Gid_Converter(PyObject *, void *); +#endif /* MS_WINDOWS */ +#endif + +#ifdef __cplusplus +} +#endif +#endif /* !Py_POSIXMODULE_H */ diff --git a/Modules/pwdmodule.c b/Modules/pwdmodule.c --- a/Modules/pwdmodule.c +++ b/Modules/pwdmodule.c @@ -2,8 +2,8 @@ /* UNIX password file access module */ #include "Python.h" +#include "posixmodule.h" -#include #include static PyStructSequence_Field struct_pwd_type_fields[] = { @@ -74,8 +74,8 @@ #else SETS(setIndex++, p->pw_passwd); #endif - SETI(setIndex++, p->pw_uid); - SETI(setIndex++, p->pw_gid); + PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromUid(p->pw_uid)); + PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromGid(p->pw_gid)); #ifdef __VMS SETS(setIndex++, ""); #else @@ -104,13 +104,17 @@ static PyObject * pwd_getpwuid(PyObject *self, PyObject *args) { - unsigned int uid; + uid_t uid; struct passwd *p; - if (!PyArg_ParseTuple(args, "I:getpwuid", &uid)) + if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) return NULL; if ((p = getpwuid(uid)) == NULL) { + PyObject *uid_obj = _PyLong_FromUid(uid); + if (uid_obj == NULL) + return NULL; PyErr_Format(PyExc_KeyError, - "getpwuid(): uid not found: %d", uid); + "getpwuid(): 
uid not found: %S", uid_obj); + Py_DECREF(uid_obj); return NULL; } return mkpwent(p); diff --git a/Modules/signalmodule.c b/Modules/signalmodule.c --- a/Modules/signalmodule.c +++ b/Modules/signalmodule.c @@ -4,6 +4,9 @@ /* XXX Signals should be recorded per thread, now we have thread state. */ #include "Python.h" +#ifndef MS_WINDOWS +#include "posixmodule.h" +#endif #ifdef MS_WINDOWS #include @@ -723,7 +726,7 @@ PyStructSequence_SET_ITEM(result, 1, PyLong_FromLong((long)(si->si_code))); PyStructSequence_SET_ITEM(result, 2, PyLong_FromLong((long)(si->si_errno))); PyStructSequence_SET_ITEM(result, 3, PyLong_FromPid(si->si_pid)); - PyStructSequence_SET_ITEM(result, 4, PyLong_FromLong((long)(si->si_uid))); + PyStructSequence_SET_ITEM(result, 4, _PyLong_FromUid(si->si_uid)); PyStructSequence_SET_ITEM(result, 5, PyLong_FromLong((long)(si->si_status))); PyStructSequence_SET_ITEM(result, 6, PyLong_FromLong(si->si_band)); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 22:29:29 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 22:29:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Reject_float_a?= =?utf-8?q?s_uid_or_gid=2E?= Message-ID: <3Z43DT257WzScR@mail.python.org> http://hg.python.org/cpython/rev/4ef048f4834e changeset: 82147:4ef048f4834e branch: 3.3 parent: 82145:b322655a4a88 user: Serhiy Storchaka date: Sun Feb 10 23:28:02 2013 +0200 summary: Reject float as uid or gid. A regression was introduced in the commit for issue #4591.
files: Modules/posixmodule.c | 16 ++++++++++++++-- 1 files changed, 14 insertions(+), 2 deletions(-) diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -437,7 +437,13 @@ _Py_Uid_Converter(PyObject *obj, void *p) { int overflow; - long result = PyLong_AsLongAndOverflow(obj, &overflow); + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); if (overflow < 0) goto OverflowDown; if (!overflow && result == -1) { @@ -485,7 +491,13 @@ _Py_Gid_Converter(PyObject *obj, void *p) { int overflow; - long result = PyLong_AsLongAndOverflow(obj, &overflow); + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); if (overflow < 0) goto OverflowDown; if (!overflow && result == -1) { -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 10 22:29:30 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sun, 10 Feb 2013 22:29:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Reject_float_as_uid_or_gid=2E?= Message-ID: <3Z43DV4sRszScJ@mail.python.org> http://hg.python.org/cpython/rev/3fdcffdfd3e6 changeset: 82148:3fdcffdfd3e6 parent: 82146:94256de0aff0 parent: 82147:4ef048f4834e user: Serhiy Storchaka date: Sun Feb 10 23:28:33 2013 +0200 summary: Reject float as uid or gid. A regression was introduced in the commit for issue #4591. 
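The converter change in the two changesets above is small but behavioural: a float uid or gid now raises TypeError up front instead of being truncated by the long conversion. A rough pure-Python mirror of the new logic (illustrative only, not the actual C API):

```python
def uid_converter(obj):
    # Mirrors the PyFloat_Check() guard added to _Py_Uid_Converter():
    # floats are rejected before any integer conversion is attempted.
    if isinstance(obj, float):
        raise TypeError("integer argument expected, got float")
    return int(obj)

print(uid_converter(1000))       # 1000
try:
    uid_converter(1000.0)
except TypeError as exc:
    print(exc)                   # integer argument expected, got float
```

On a patched build the same TypeError surfaces from the uid/gid-taking os functions, e.g. os.setuid(1000.0).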
files: Modules/posixmodule.c | 16 ++++++++++++++-- 1 files changed, 14 insertions(+), 2 deletions(-) diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -406,7 +406,13 @@ _Py_Uid_Converter(PyObject *obj, void *p) { int overflow; - long result = PyLong_AsLongAndOverflow(obj, &overflow); + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); if (overflow < 0) goto OverflowDown; if (!overflow && result == -1) { @@ -454,7 +460,13 @@ _Py_Gid_Converter(PyObject *obj, void *p) { int overflow; - long result = PyLong_AsLongAndOverflow(obj, &overflow); + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); if (overflow < 0) goto OverflowDown; if (!overflow && result == -1) { -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 01:28:24 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 01:28:24 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgMTc1MDI6?= =?utf-8?q?_unittest_discovery_should_use_self=2EtestLoader?= Message-ID: <3Z47Bw6zgNzST0@mail.python.org> http://hg.python.org/cpython/rev/1ccf0756a13d changeset: 82149:1ccf0756a13d branch: 2.7 parent: 82139:0f9113e1b541 user: Michael Foord date: Sun Feb 10 23:59:46 2013 +0000 summary: Issue 17502: unittest discovery should use self.testLoader files: Lib/unittest/main.py | 5 ++++- Lib/unittest/test/test_discovery.py | 14 ++++++++++++++ Misc/NEWS | 2 ++ 3 files changed, 20 insertions(+), 1 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -157,7 +157,10 @@ self.test = self.testLoader.loadTestsFromNames(self.testNames, 
self.module) - def _do_discovery(self, argv, Loader=loader.TestLoader): + def _do_discovery(self, argv, Loader=None): + if Loader is None: + Loader = self.testLoader + # handle command line args for test discovery self.progName = '%s discover' % self.progName import optparse diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -220,12 +220,26 @@ program = object.__new__(unittest.TestProgram) program.usageExit = usageExit + program.testLoader = None with self.assertRaises(Stop): # too many args program._do_discovery(['one', 'two', 'three', 'four']) + def test_command_line_handling_do_discovery_uses_default_loader(self): + program = object.__new__(unittest.TestProgram) + + class Loader(object): + args = [] + def discover(self, start_dir, pattern, top_level_dir): + self.args.append((start_dir, pattern, top_level_dir)) + return 'tests' + + program.testLoader = Loader + program._do_discovery(['-v']) + self.assertEqual(Loader.args, [('.', 'test*.py', None)]) + def test_command_line_handling_do_discovery_calls_loader(self): program = object.__new__(unittest.TestProgram) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,8 @@ Library ------- +- Issue #17502: unittest discovery should use self.testLoader. + - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. 
- Issue #17149: Fix random.vonmisesvariate to always return results in -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 01:28:26 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 01:28:26 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgMTc1MDI6?= =?utf-8?q?_unittest_discovery_should_use_self=2EtestLoader?= Message-ID: <3Z47By2jWqzSjF@mail.python.org> http://hg.python.org/cpython/rev/6860ac76bdea changeset: 82150:6860ac76bdea branch: 3.2 parent: 82140:d94b73c95646 user: Michael Foord date: Mon Feb 11 00:04:24 2013 +0000 summary: Issue 17502: unittest discovery should use self.testLoader files: Lib/unittest/main.py | 5 ++++- Lib/unittest/test/test_discovery.py | 14 ++++++++++++++ Misc/NEWS | 2 ++ 3 files changed, 20 insertions(+), 1 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -197,7 +197,10 @@ self.test = self.testLoader.loadTestsFromNames(self.testNames, self.module) - def _do_discovery(self, argv, Loader=loader.TestLoader): + def _do_discovery(self, argv, Loader=None): + if Loader is None: + Loader = self.testLoader + # handle command line args for test discovery self.progName = '%s discover' % self.progName import optparse diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -253,12 +253,26 @@ program = TestableTestProgram() program.usageExit = usageExit + program.testLoader = None with self.assertRaises(Stop): # too many args program._do_discovery(['one', 'two', 'three', 'four']) + def test_command_line_handling_do_discovery_uses_default_loader(self): + program = object.__new__(unittest.TestProgram) + + class Loader(object): + args = [] + def discover(self, start_dir, pattern, top_level_dir): + self.args.append((start_dir, pattern, top_level_dir)) + return 
'tests' + + program.testLoader = Loader + program._do_discovery(['-v']) + self.assertEqual(Loader.args, [('.', 'test*.py', None)]) + def test_command_line_handling_do_discovery_calls_loader(self): program = TestableTestProgram() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -221,6 +221,8 @@ Library ------- +- Issue #17502: unittest discovery should use self.testLoader. + - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. - Issue #17149: Fix random.vonmisesvariate to always return results in -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 01:28:27 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 01:28:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge?= Message-ID: <3Z47Bz5TwNzShg@mail.python.org> http://hg.python.org/cpython/rev/b8d7e385553d changeset: 82151:b8d7e385553d branch: 3.3 parent: 82147:4ef048f4834e parent: 82150:6860ac76bdea user: Michael Foord date: Mon Feb 11 00:18:07 2013 +0000 summary: Merge files: Lib/unittest/main.py | 6 +++++- Lib/unittest/test/test_discovery.py | 14 ++++++++++++++ Misc/NEWS | 2 ++ 3 files changed, 21 insertions(+), 1 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -175,6 +175,7 @@ self.module) def _getOptParser(self): + import optparse parser = optparse.OptionParser() parser.prog = self.progName parser.add_option('-v', '--verbose', dest='verbose', default=False, @@ -219,7 +220,10 @@ parser.add_option('-t', '--top-level-directory', dest='top', default=None, help='Top level directory of project (defaults to start directory)') - def _do_discovery(self, argv, Loader=loader.TestLoader): + def _do_discovery(self, argv, Loader=None): + if Loader is None: + Loader = self.testLoader + # handle command line args for test discovery self.progName = '%s discover' % self.progName parser = 
self._getOptParser() diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -253,12 +253,26 @@ program = TestableTestProgram() program.usageExit = usageExit + program.testLoader = None with self.assertRaises(Stop): # too many args program._do_discovery(['one', 'two', 'three', 'four']) + def test_command_line_handling_do_discovery_uses_default_loader(self): + program = object.__new__(unittest.TestProgram) + + class Loader(object): + args = [] + def discover(self, start_dir, pattern, top_level_dir): + self.args.append((start_dir, pattern, top_level_dir)) + return 'tests' + + program.testLoader = Loader + program._do_discovery(['-v']) + self.assertEqual(Loader.args, [('.', 'test*.py', None)]) + def test_command_line_handling_do_discovery_calls_loader(self): program = TestableTestProgram() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -175,6 +175,8 @@ Library ------- +- Issue #17502: unittest discovery should use self.testLoader. + - Issue #4591: Uid and gid values larger than 2**31 are supported now. - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 01:28:29 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 01:28:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=2E_Closes_issue_17052=2E?= Message-ID: <3Z47C11QznzSjP@mail.python.org> http://hg.python.org/cpython/rev/b53b029895df changeset: 82152:b53b029895df parent: 82148:3fdcffdfd3e6 parent: 82151:b8d7e385553d user: Michael Foord date: Mon Feb 11 00:28:02 2013 +0000 summary: Merge. Closes issue 17052. 
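The fix above swaps the class-level default ``Loader=loader.TestLoader`` for a ``None`` sentinel resolved inside the method, so ``_do_discovery()`` picks up whatever loader the program instance was configured with. A simplified sketch of the sentinel-default pattern (class and method names here are illustrative):

```python
class Program:
    def __init__(self, loader):
        self.testLoader = loader

    def do_discovery(self, Loader=None):
        # Resolving the default at call time honours per-instance
        # configuration; a default frozen into the signature cannot.
        if Loader is None:
            Loader = self.testLoader
        return Loader

custom = object()
print(Program(custom).do_discovery() is custom)  # True
```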
files: Lib/unittest/main.py | 6 +++++- Lib/unittest/test/test_discovery.py | 14 ++++++++++++++ Misc/NEWS | 2 ++ 3 files changed, 21 insertions(+), 1 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -175,6 +175,7 @@ self.module) def _getOptParser(self): + import optparse parser = optparse.OptionParser() parser.prog = self.progName parser.add_option('-v', '--verbose', dest='verbose', default=False, @@ -219,7 +220,10 @@ parser.add_option('-t', '--top-level-directory', dest='top', default=None, help='Top level directory of project (defaults to start directory)') - def _do_discovery(self, argv, Loader=loader.TestLoader): + def _do_discovery(self, argv, Loader=None): + if Loader is None: + Loader = self.testLoader + # handle command line args for test discovery self.progName = '%s discover' % self.progName parser = self._getOptParser() diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -253,12 +253,26 @@ program = TestableTestProgram() program.usageExit = usageExit + program.testLoader = None with self.assertRaises(Stop): # too many args program._do_discovery(['one', 'two', 'three', 'four']) + def test_command_line_handling_do_discovery_uses_default_loader(self): + program = object.__new__(unittest.TestProgram) + + class Loader(object): + args = [] + def discover(self, start_dir, pattern, top_level_dir): + self.args.append((start_dir, pattern, top_level_dir)) + return 'tests' + + program.testLoader = Loader + program._do_discovery(['-v']) + self.assertEqual(Loader.args, [('.', 'test*.py', None)]) + def test_command_line_handling_do_discovery_calls_loader(self): program = TestableTestProgram() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -250,6 +250,8 @@ Library ------- +- Issue #17502: unittest discovery should use self.testLoader. 
+ - Issue #4591: Uid and gid values larger than 2**31 are supported now. - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Mon Feb 11 06:00:10 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Mon, 11 Feb 2013 06:00:10 +0100 Subject: [Python-checkins] Daily reference leaks (b53b029895df): sum=2 Message-ID: results for b53b029895df on branch "default" -------------------------------------------- test_concurrent_futures leaked [2, 1, -1] memory blocks, sum=2 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogTMq8Q4', '-x'] From python-checkins at python.org Mon Feb 11 08:23:46 2013 From: python-checkins at python.org (terry.reedy) Date: Mon, 11 Feb 2013 08:23:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Closes_=2317158=3A_Add_=27?= =?utf-8?q?symbols=27_to_help=28=29_welcome_message=3B_clarify_=27modules_?= =?utf-8?q?spam=27?= Message-ID: <3Z4JQB6WwnzSbh@mail.python.org> http://hg.python.org/cpython/rev/4f84fe5a997b changeset: 82153:4f84fe5a997b user: Terry Jan Reedy date: Mon Feb 11 02:23:13 2013 -0500 summary: Closes #17158: Add 'symbols' to help() welcome message; clarify 'modules spam' messages. files: Lib/pydoc.py | 15 ++++++++------- Misc/NEWS | 3 +++ 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/Lib/pydoc.py b/Lib/pydoc.py --- a/Lib/pydoc.py +++ b/Lib/pydoc.py @@ -1847,10 +1847,10 @@ Python programs and using Python modules. To quit this help utility and return to the interpreter, just type "quit". -To get a list of available modules, keywords, or topics, type "modules", -"keywords", or "topics". Each module also comes with a one-line summary -of what it does; to list the modules whose summaries contain a given word -such as "spam", type "modules spam". 
+To get a list of available modules, keywords, symbols, or topics, type +"modules", "keywords", "symbols", or "topics". Each module also comes +with a one-line summary of what it does; to list the modules whose name +or summary contain a given string such as "spam", type "modules spam". ''' % tuple([sys.version[:3]]*2)) def list(self, items, columns=4, width=80): @@ -1955,9 +1955,10 @@ def listmodules(self, key=''): if key: self.output.write(''' -Here is a list of matching modules. Enter any module name to get more help. +Here is a list of modules whose name or summary contains '{}'. +If there are any, enter a module name to get more help. -''') +'''.format(key)) apropos(key) else: self.output.write(''' @@ -1976,7 +1977,7 @@ self.list(modules.keys()) self.output.write(''' Enter any module name to get more help. Or, type "modules spam" to search -for modules whose descriptions contain the word "spam". +for modules whose name or summary contain the string "spam". ''') help = Helper() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1122,6 +1122,9 @@ Library ------- +- Issue #17158: Add 'symbols' to help() welcome message; clarify + 'modules spam' messages. + - Issue #15847: Fix a regression in argparse, which did not accept tuples as argument lists anymore. -- Repository URL: http://hg.python.org/cpython From eliben at gmail.com Mon Feb 11 14:29:56 2013 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 11 Feb 2013 05:29:56 -0800 Subject: [Python-checkins] Daily reference leaks (b53b029895df): sum=2 In-Reply-To: References: Message-ID: On Sun, Feb 10, 2013 at 9:00 PM, wrote: > results for b53b029895df on branch "default" > -------------------------------------------- > > test_concurrent_futures leaked [2, 1, -1] memory blocks, sum=2 > > When did this start happening? Eli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From python-checkins at python.org Mon Feb 11 14:39:59 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 14:39:59 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Correction_to_?= =?utf-8?q?issue_17052_fix?= Message-ID: <3Z4SmH1yPNzSjR@mail.python.org> http://hg.python.org/cpython/rev/ece0a2e6b08e changeset: 82154:ece0a2e6b08e branch: 2.7 parent: 82149:1ccf0756a13d user: Michael Foord date: Mon Feb 11 12:53:21 2013 +0000 summary: Correction to issue 17052 fix files: Lib/unittest/main.py | 2 +- Lib/unittest/test/test_discovery.py | 2 +- Misc/NEWS | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -159,7 +159,7 @@ def _do_discovery(self, argv, Loader=None): if Loader is None: - Loader = self.testLoader + Loader = lambda: self.testLoader # handle command line args for test discovery self.progName = '%s discover' % self.progName diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -236,7 +236,7 @@ self.args.append((start_dir, pattern, top_level_dir)) return 'tests' - program.testLoader = Loader + program.testLoader = Loader() program._do_discovery(['-v']) self.assertEqual(Loader.args, [('.', 'test*.py', None)]) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,7 +202,7 @@ Library ------- -- Issue #17502: unittest discovery should use self.testLoader. +- Issue #17052: unittest discovery should use self.testLoader. - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. 
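The correction above is needed because the discovery code obtains its loader by calling ``Loader()``: the old default was the ``TestLoader`` class (calling it produced an instance), whereas ``self.testLoader`` is already an instance, hence the zero-argument lambda. A minimal sketch (names illustrative):

```python
class Program:
    def __init__(self, loader):
        self.testLoader = loader          # already an instance

    def do_discovery(self, Loader=None):
        if Loader is None:
            # Wrap the instance in a zero-argument factory so the
            # existing Loader() call site keeps working unchanged.
            Loader = lambda: self.testLoader
        return Loader()                   # call site expects a factory

marker = object()
print(Program(marker).do_discovery() is marker)  # True
```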
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 14:40:00 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 14:40:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Correction_to_?= =?utf-8?q?issue_17052_fix?= Message-ID: <3Z4SmJ4jvzzSZD@mail.python.org> http://hg.python.org/cpython/rev/867763eb6985 changeset: 82155:867763eb6985 branch: 3.2 parent: 82150:6860ac76bdea user: Michael Foord date: Mon Feb 11 13:20:52 2013 +0000 summary: Correction to issue 17052 fix files: Lib/unittest/main.py | 2 +- Lib/unittest/test/test_discovery.py | 2 +- Misc/NEWS | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -199,7 +199,7 @@ def _do_discovery(self, argv, Loader=None): if Loader is None: - Loader = self.testLoader + Loader = lambda: self.testLoader # handle command line args for test discovery self.progName = '%s discover' % self.progName diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -269,7 +269,7 @@ self.args.append((start_dir, pattern, top_level_dir)) return 'tests' - program.testLoader = Loader + program.testLoader = Loader() program._do_discovery(['-v']) self.assertEqual(Loader.args, [('.', 'test*.py', None)]) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -221,7 +221,7 @@ Library ------- -- Issue #17502: unittest discovery should use self.testLoader. +- Issue #17052: unittest discovery should use self.testLoader. - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 14:40:02 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 14:40:02 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge?= Message-ID: <3Z4SmL0DxyzPwq@mail.python.org> http://hg.python.org/cpython/rev/c148804c2f5e changeset: 82156:c148804c2f5e branch: 3.3 parent: 82151:b8d7e385553d parent: 82155:867763eb6985 user: Michael Foord date: Mon Feb 11 13:29:58 2013 +0000 summary: Merge files: Lib/unittest/main.py | 2 +- Lib/unittest/test/test_discovery.py | 2 +- Misc/NEWS | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -222,7 +222,7 @@ def _do_discovery(self, argv, Loader=None): if Loader is None: - Loader = self.testLoader + Loader = lambda: self.testLoader # handle command line args for test discovery self.progName = '%s discover' % self.progName diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -269,7 +269,7 @@ self.args.append((start_dir, pattern, top_level_dir)) return 'tests' - program.testLoader = Loader + program.testLoader = Loader() program._do_discovery(['-v']) self.assertEqual(Loader.args, [('.', 'test*.py', None)]) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -175,7 +175,7 @@ Library ------- -- Issue #17502: unittest discovery should use self.testLoader. +- Issue #17052: unittest discovery should use self.testLoader. - Issue #4591: Uid and gid values larger than 2**31 are supported now. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 14:40:03 2013 From: python-checkins at python.org (michael.foord) Date: Mon, 11 Feb 2013 14:40:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=2E_Closes_issue_17052=2E?= Message-ID: <3Z4SmM377JzSkB@mail.python.org> http://hg.python.org/cpython/rev/a79650aacb43 changeset: 82157:a79650aacb43 parent: 82153:4f84fe5a997b parent: 82156:c148804c2f5e user: Michael Foord date: Mon Feb 11 13:33:00 2013 +0000 summary: Merge. Closes issue 17052. files: Lib/unittest/main.py | 2 +- Lib/unittest/test/test_discovery.py | 2 +- Misc/NEWS | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -222,7 +222,7 @@ def _do_discovery(self, argv, Loader=None): if Loader is None: - Loader = self.testLoader + Loader = lambda: self.testLoader # handle command line args for test discovery self.progName = '%s discover' % self.progName diff --git a/Lib/unittest/test/test_discovery.py b/Lib/unittest/test/test_discovery.py --- a/Lib/unittest/test/test_discovery.py +++ b/Lib/unittest/test/test_discovery.py @@ -269,7 +269,7 @@ self.args.append((start_dir, pattern, top_level_dir)) return 'tests' - program.testLoader = Loader + program.testLoader = Loader() program._do_discovery(['-v']) self.assertEqual(Loader.args, [('.', 'test*.py', None)]) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -250,7 +250,7 @@ Library ------- -- Issue #17502: unittest discovery should use self.testLoader. +- Issue #17052: unittest discovery should use self.testLoader. - Issue #4591: Uid and gid values larger than 2**31 are supported now. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 16:14:50 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 16:14:50 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MDY0OiBmaXgg?= =?utf-8?q?sporadic_permission_errors_in_test=5Fmailbox_on_windows=2E?= Message-ID: <3Z4Vsk6W0BzSX1@mail.python.org> http://hg.python.org/cpython/rev/bbeff2958cc5 changeset: 82158:bbeff2958cc5 branch: 3.2 parent: 82155:867763eb6985 user: R David Murray date: Mon Feb 11 10:04:26 2013 -0500 summary: #17064: fix sporadic permission errors in test_mailbox on windows. Patch by Jeremy Kloth. files: Lib/test/test_mailbox.py | 16 ++++++++-------- 1 files changed, 8 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_mailbox.py b/Lib/test/test_mailbox.py --- a/Lib/test/test_mailbox.py +++ b/Lib/test/test_mailbox.py @@ -39,9 +39,9 @@ def _delete_recursively(self, target): # Delete a file or delete a directory recursively if os.path.isdir(target): - shutil.rmtree(target) + support.rmtree(target) elif os.path.exists(target): - os.remove(target) + support.unlink(target) class TestMailbox(TestBase): @@ -2096,9 +2096,9 @@ # create a new maildir mailbox to work with: self._dir = support.TESTFN if os.path.isdir(self._dir): - shutil.rmtree(self._dir) + support.rmtree(self._dir) elif os.path.isfile(self._dir): - os.unlink(self._dir) + support.unlink(self._dir) os.mkdir(self._dir) os.mkdir(os.path.join(self._dir, "cur")) os.mkdir(os.path.join(self._dir, "tmp")) @@ -2108,10 +2108,10 @@ def tearDown(self): list(map(os.unlink, self._msgfiles)) - os.rmdir(os.path.join(self._dir, "cur")) - os.rmdir(os.path.join(self._dir, "tmp")) - os.rmdir(os.path.join(self._dir, "new")) - os.rmdir(self._dir) + support.rmdir(os.path.join(self._dir, "cur")) + support.rmdir(os.path.join(self._dir, "tmp")) + support.rmdir(os.path.join(self._dir, "new")) + support.rmdir(self._dir) def createMessage(self, dir, mbox=False): t = 
int(time.time() % 1000000) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 16:14:52 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 16:14:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge=3A_=2317064=3A_fix_sporadic_permission_errors_in_test=5F?= =?utf-8?q?mailbox_on_windows=2E?= Message-ID: <3Z4Vsm22jVzQM1@mail.python.org> http://hg.python.org/cpython/rev/3e3915cbfde3 changeset: 82159:3e3915cbfde3 branch: 3.3 parent: 82156:c148804c2f5e parent: 82158:bbeff2958cc5 user: R David Murray date: Mon Feb 11 10:05:03 2013 -0500 summary: Merge: #17064: fix sporadic permission errors in test_mailbox on windows. Patch by Jeremy Kloth. files: Lib/test/test_mailbox.py | 16 ++++++++-------- 1 files changed, 8 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_mailbox.py b/Lib/test/test_mailbox.py --- a/Lib/test/test_mailbox.py +++ b/Lib/test/test_mailbox.py @@ -43,9 +43,9 @@ def _delete_recursively(self, target): # Delete a file or delete a directory recursively if os.path.isdir(target): - shutil.rmtree(target) + support.rmtree(target) elif os.path.exists(target): - os.remove(target) + support.unlink(target) class TestMailbox(TestBase): @@ -2112,9 +2112,9 @@ # create a new maildir mailbox to work with: self._dir = support.TESTFN if os.path.isdir(self._dir): - shutil.rmtree(self._dir) + support.rmtree(self._dir) elif os.path.isfile(self._dir): - os.unlink(self._dir) + support.unlink(self._dir) os.mkdir(self._dir) os.mkdir(os.path.join(self._dir, "cur")) os.mkdir(os.path.join(self._dir, "tmp")) @@ -2124,10 +2124,10 @@ def tearDown(self): list(map(os.unlink, self._msgfiles)) - os.rmdir(os.path.join(self._dir, "cur")) - os.rmdir(os.path.join(self._dir, "tmp")) - os.rmdir(os.path.join(self._dir, "new")) - os.rmdir(self._dir) + support.rmdir(os.path.join(self._dir, "cur")) + support.rmdir(os.path.join(self._dir, "tmp")) + 
support.rmdir(os.path.join(self._dir, "new")) + support.rmdir(self._dir) def createMessage(self, dir, mbox=False): t = int(time.time() % 1000000) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 16:14:53 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 16:14:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=2317064=3A_fix_sporadic_permission_errors_in_t?= =?utf-8?q?est=5Fmailbox_on_windows=2E?= Message-ID: <3Z4Vsn4gzyzSXb@mail.python.org> http://hg.python.org/cpython/rev/aa15df77e58f changeset: 82160:aa15df77e58f parent: 82157:a79650aacb43 parent: 82159:3e3915cbfde3 user: R David Murray date: Mon Feb 11 10:05:34 2013 -0500 summary: Merge: #17064: fix sporadic permission errors in test_mailbox on windows. Patch by Jeremy Kloth. files: Lib/test/test_mailbox.py | 16 ++++++++-------- 1 files changed, 8 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_mailbox.py b/Lib/test/test_mailbox.py --- a/Lib/test/test_mailbox.py +++ b/Lib/test/test_mailbox.py @@ -43,9 +43,9 @@ def _delete_recursively(self, target): # Delete a file or delete a directory recursively if os.path.isdir(target): - shutil.rmtree(target) + support.rmtree(target) elif os.path.exists(target): - os.remove(target) + support.unlink(target) class TestMailbox(TestBase): @@ -2112,9 +2112,9 @@ # create a new maildir mailbox to work with: self._dir = support.TESTFN if os.path.isdir(self._dir): - shutil.rmtree(self._dir) + support.rmtree(self._dir) elif os.path.isfile(self._dir): - os.unlink(self._dir) + support.unlink(self._dir) os.mkdir(self._dir) os.mkdir(os.path.join(self._dir, "cur")) os.mkdir(os.path.join(self._dir, "tmp")) @@ -2124,10 +2124,10 @@ def tearDown(self): list(map(os.unlink, self._msgfiles)) - os.rmdir(os.path.join(self._dir, "cur")) - os.rmdir(os.path.join(self._dir, "tmp")) - os.rmdir(os.path.join(self._dir, "new")) - os.rmdir(self._dir) + 
support.rmdir(os.path.join(self._dir, "cur")) + support.rmdir(os.path.join(self._dir, "tmp")) + support.rmdir(os.path.join(self._dir, "new")) + support.rmdir(self._dir) def createMessage(self, dir, mbox=False): t = int(time.time() % 1000000) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 16:14:55 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 16:14:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MDY0OiBmaXgg?= =?utf-8?q?sporadic_permission_errors_in_test=5Fmailbox_on_windows=2E?= Message-ID: <3Z4Vsq0L37zQ1C@mail.python.org> http://hg.python.org/cpython/rev/1c2dbed859ca changeset: 82161:1c2dbed859ca branch: 2.7 parent: 82154:ece0a2e6b08e user: R David Murray date: Mon Feb 11 10:14:24 2013 -0500 summary: #17064: fix sporadic permission errors in test_mailbox on windows. Backported from patch by Jeremy Kloth. files: Lib/test/test_mailbox.py | 16 ++++++++++------ 1 files changed, 10 insertions(+), 6 deletions(-) diff --git a/Lib/test/test_mailbox.py b/Lib/test/test_mailbox.py --- a/Lib/test/test_mailbox.py +++ b/Lib/test/test_mailbox.py @@ -40,9 +40,9 @@ def _delete_recursively(self, target): # Delete a file or delete a directory recursively if os.path.isdir(target): - shutil.rmtree(target) + test_support.rmtree(target) elif os.path.exists(target): - os.remove(target) + test_support.unlink(target) class TestMailbox(TestBase): @@ -1927,6 +1927,10 @@ def setUp(self): # create a new maildir mailbox to work with: self._dir = test_support.TESTFN + if os.path.isdir(self._dir): + test_support.rmtree(self._dir) + if os.path.isfile(self._dir): + test_support.unlink(self._dir) os.mkdir(self._dir) os.mkdir(os.path.join(self._dir, "cur")) os.mkdir(os.path.join(self._dir, "tmp")) @@ -1936,10 +1940,10 @@ def tearDown(self): map(os.unlink, self._msgfiles) - os.rmdir(os.path.join(self._dir, "cur")) - os.rmdir(os.path.join(self._dir, "tmp")) - os.rmdir(os.path.join(self._dir, 
"new")) - os.rmdir(self._dir) + test_support.rmdir(os.path.join(self._dir, "cur")) + test_support.rmdir(os.path.join(self._dir, "tmp")) + test_support.rmdir(os.path.join(self._dir, "new")) + test_support.rmdir(self._dir) def createMessage(self, dir, mbox=False): t = int(time.time() % 1000000) -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Mon Feb 11 16:36:15 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 11 Feb 2013 16:36:15 +0100 Subject: [Python-checkins] Daily reference leaks (b53b029895df): sum=2 References: Message-ID: <20130211163615.114f656a@pitrou.net> On Mon, 11 Feb 2013 05:29:56 -0800, Eli Bendersky wrote: > On Sun, Feb 10, 2013 at 9:00 PM, wrote: > > > results for b53b029895df on branch "default" > > -------------------------------------------- > > > > test_concurrent_futures leaked [2, 1, -1] memory blocks, sum=2 > > > > > When did this start happening? This is just a false positive. The memory blocks count is quite fragile, as it can be influenced by multiple factors (allocation caches, etc.). Regards Antoine. From python-checkins at python.org Mon Feb 11 17:20:00 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 17:20:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MTcxOiBmaXgg?= =?utf-8?q?email=2Eencoders=2Eencode=5F7or8bit_when_applied_to_binary_data?= =?utf-8?q?=2E?= Message-ID: <3Z4XJw046FzQ2D@mail.python.org> http://hg.python.org/cpython/rev/f83581135ec4 changeset: 82162:f83581135ec4 branch: 3.2 parent: 82158:bbeff2958cc5 user: R David Murray date: Mon Feb 11 10:51:28 2013 -0500 summary: #17171: fix email.encoders.encode_7or8bit when applied to binary data.
files: Lib/email/encoders.py | 4 +++- Lib/email/test/test_email.py | 19 ++++++++++++++++++- Misc/NEWS | 3 +++ 3 files changed, 24 insertions(+), 2 deletions(-) diff --git a/Lib/email/encoders.py b/Lib/email/encoders.py --- a/Lib/email/encoders.py +++ b/Lib/email/encoders.py @@ -62,15 +62,17 @@ else: orig.decode('ascii') except UnicodeError: - # iso-2022-* is non-ASCII but still 7-bit charset = msg.get_charset() output_cset = charset and charset.output_charset + # iso-2022-* is non-ASCII but encodes to a 7-bit representation if output_cset and output_cset.lower().startswith('iso-2022-'): msg['Content-Transfer-Encoding'] = '7bit' else: msg['Content-Transfer-Encoding'] = '8bit' else: msg['Content-Transfer-Encoding'] = '7bit' + if not isinstance(orig, str): + msg.set_payload(orig.decode('ascii', 'surrogateescape')) diff --git a/Lib/email/test/test_email.py b/Lib/email/test/test_email.py --- a/Lib/email/test/test_email.py +++ b/Lib/email/test/test_email.py @@ -1438,7 +1438,24 @@ eq(msg.get_payload().strip(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytesdata) - def test_body_with_encode_noop(self): + def test_binary_body_with_encode_7or8bit(self): + # Issue 17171. + bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_7or8bit) + # Treated as a string, this will be invalid code points. 
+ self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + self.assertEqual(msg['Content-Transfer-Encoding'], '8bit') + s = BytesIO() + g = BytesGenerator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_bytes(wireform) + self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) + self.assertEqual(msg2['Content-Transfer-Encoding'], '8bit') + + def test_binary_body_with_encode_noop(self): # Issue 16564: This does not produce an RFC valid message, since to be # valid it should have a CTE of binary. But the below works in # Python2, and is documented as working this way. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -221,6 +221,9 @@ Library ------- +- Issue #16564: Fixed regression relative to Python2 in the operation of + email.encoders.encode_7or8bit when used with binary data. + - Issue #17052: unittest discovery should use self.testLoader. - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 17:20:01 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 17:20:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge=3A_=2317171=3A_fix_email=2Eencoders=2Eencode=5F7or8bit_w?= =?utf-8?q?hen_applied_to_binary_data=2E?= Message-ID: <3Z4XJx2whGzSl0@mail.python.org> http://hg.python.org/cpython/rev/cabcddbed377 changeset: 82163:cabcddbed377 branch: 3.3 parent: 82159:3e3915cbfde3 parent: 82162:f83581135ec4 user: R David Murray date: Mon Feb 11 10:53:35 2013 -0500 summary: Merge: #17171: fix email.encoders.encode_7or8bit when applied to binary data. 
files: Lib/email/encoders.py | 4 +++- Lib/test/test_email/test_email.py | 19 ++++++++++++++++++- Misc/NEWS | 3 +++ 3 files changed, 24 insertions(+), 2 deletions(-) diff --git a/Lib/email/encoders.py b/Lib/email/encoders.py --- a/Lib/email/encoders.py +++ b/Lib/email/encoders.py @@ -62,15 +62,17 @@ else: orig.decode('ascii') except UnicodeError: - # iso-2022-* is non-ASCII but still 7-bit charset = msg.get_charset() output_cset = charset and charset.output_charset + # iso-2022-* is non-ASCII but encodes to a 7-bit representation if output_cset and output_cset.lower().startswith('iso-2022-'): msg['Content-Transfer-Encoding'] = '7bit' else: msg['Content-Transfer-Encoding'] = '8bit' else: msg['Content-Transfer-Encoding'] = '7bit' + if not isinstance(orig, str): + msg.set_payload(orig.decode('ascii', 'surrogateescape')) diff --git a/Lib/test/test_email/test_email.py b/Lib/test/test_email/test_email.py --- a/Lib/test/test_email/test_email.py +++ b/Lib/test/test_email/test_email.py @@ -1440,7 +1440,24 @@ eq(msg.get_payload().strip(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytesdata) - def test_body_with_encode_noop(self): + def test_binary_body_with_encode_7or8bit(self): + # Issue 17171. + bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_7or8bit) + # Treated as a string, this will be invalid code points. 
+ self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + self.assertEqual(msg['Content-Transfer-Encoding'], '8bit') + s = BytesIO() + g = BytesGenerator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_bytes(wireform) + self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) + self.assertEqual(msg2['Content-Transfer-Encoding'], '8bit') + + def test_binary_body_with_encode_noop(self): # Issue 16564: This does not produce an RFC valid message, since to be # valid it should have a CTE of binary. But the below works in # Python2, and is documented as working this way. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -175,6 +175,9 @@ Library ------- +- Issue #16564: Fixed regression relative to Python2 in the operation of + email.encoders.encode_7or8bit when used with binary data. + - Issue #17052: unittest discovery should use self.testLoader. - Issue #4591: Uid and gid values larger than 2**31 are supported now. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 17:20:02 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 17:20:02 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=2317171=3A_fix_email=2Eencoders=2Eencode=5F7or?= =?utf-8?q?8bit_when_applied_to_binary_data=2E?= Message-ID: <3Z4XJy5gxhzR9m@mail.python.org> http://hg.python.org/cpython/rev/a80b67611c6d changeset: 82164:a80b67611c6d parent: 82160:aa15df77e58f parent: 82163:cabcddbed377 user: R David Murray date: Mon Feb 11 10:54:22 2013 -0500 summary: Merge: #17171: fix email.encoders.encode_7or8bit when applied to binary data. 
files: Lib/email/encoders.py | 4 +++- Lib/test/test_email/test_email.py | 19 ++++++++++++++++++- Misc/NEWS | 3 +++ 3 files changed, 24 insertions(+), 2 deletions(-) diff --git a/Lib/email/encoders.py b/Lib/email/encoders.py --- a/Lib/email/encoders.py +++ b/Lib/email/encoders.py @@ -62,15 +62,17 @@ else: orig.decode('ascii') except UnicodeError: - # iso-2022-* is non-ASCII but still 7-bit charset = msg.get_charset() output_cset = charset and charset.output_charset + # iso-2022-* is non-ASCII but encodes to a 7-bit representation if output_cset and output_cset.lower().startswith('iso-2022-'): msg['Content-Transfer-Encoding'] = '7bit' else: msg['Content-Transfer-Encoding'] = '8bit' else: msg['Content-Transfer-Encoding'] = '7bit' + if not isinstance(orig, str): + msg.set_payload(orig.decode('ascii', 'surrogateescape')) diff --git a/Lib/test/test_email/test_email.py b/Lib/test/test_email/test_email.py --- a/Lib/test/test_email/test_email.py +++ b/Lib/test/test_email/test_email.py @@ -1440,7 +1440,24 @@ eq(msg.get_payload().strip(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytesdata) - def test_body_with_encode_noop(self): + def test_binary_body_with_encode_7or8bit(self): + # Issue 17171. + bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_7or8bit) + # Treated as a string, this will be invalid code points. 
+ self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + self.assertEqual(msg['Content-Transfer-Encoding'], '8bit') + s = BytesIO() + g = BytesGenerator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_bytes(wireform) + self.assertEqual(msg.get_payload(), '\uFFFD' * len(bytesdata)) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) + self.assertEqual(msg2['Content-Transfer-Encoding'], '8bit') + + def test_binary_body_with_encode_noop(self): # Issue 16564: This does not produce an RFC valid message, since to be # valid it should have a CTE of binary. But the below works in # Python2, and is documented as working this way. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -250,6 +250,9 @@ Library ------- +- Issue #16564: Fixed regression relative to Python2 in the operation of + email.encoders.encode_7or8bit when used with binary data. + - Issue #17052: unittest discovery should use self.testLoader. - Issue #4591: Uid and gid values larger than 2**31 are supported now. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 17:20:04 2013 From: python-checkins at python.org (r.david.murray) Date: Mon, 11 Feb 2013 17:20:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MTcxOiBiYWNr?= =?utf-8?q?port_behavior-confirming_test_from_python3=2E?= Message-ID: <3Z4XK01Gz5zSm3@mail.python.org> http://hg.python.org/cpython/rev/e44fa71d76fe changeset: 82165:e44fa71d76fe branch: 2.7 parent: 82161:1c2dbed859ca user: R David Murray date: Mon Feb 11 10:57:37 2013 -0500 summary: #17171: backport behavior-confirming test from python3. 
files: Lib/email/test/test_email_renamed.py | 19 +++++++++++++++- 1 files changed, 18 insertions(+), 1 deletions(-) diff --git a/Lib/email/test/test_email_renamed.py b/Lib/email/test/test_email_renamed.py --- a/Lib/email/test/test_email_renamed.py +++ b/Lib/email/test/test_email_renamed.py @@ -994,7 +994,24 @@ eq(msg.get_payload(), '+vv8/f7/') eq(msg.get_payload(decode=True), bytes) - def test_body_with_encode_noop(self): + def test_binary_body_with_encode_7or8bit(self): + # Issue 17171. + bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff' + msg = MIMEApplication(bytesdata, _encoder=encoders.encode_7or8bit) + # Treated as a string, this will be invalid code points. + self.assertEqual(msg.get_payload(), bytesdata) + self.assertEqual(msg.get_payload(decode=True), bytesdata) + self.assertEqual(msg['Content-Transfer-Encoding'], '8bit') + s = StringIO() + g = Generator(s) + g.flatten(msg) + wireform = s.getvalue() + msg2 = email.message_from_string(wireform) + self.assertEqual(msg.get_payload(), bytesdata) + self.assertEqual(msg2.get_payload(decode=True), bytesdata) + self.assertEqual(msg2['Content-Transfer-Encoding'], '8bit') + + def test_binary_body_with_encode_noop(self): # Issue 16564: This does not produce an RFC valid message, since to be # valid it should have a CTE of binary. But the below works, and is # documented as working this way. 
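The Python 3 fix for #17171 above stores a binary payload as str by decoding it with the surrogateescape error handler (`orig.decode('ascii', 'surrogateescape')`). A minimal sketch, separate from the commit itself, of why that round trip is lossless:

```python
# Arbitrary non-ASCII bytes, like the payload used in the #17171 tests.
data = b'\xfa\xfb\xfc\xfd\xfe\xff'

# surrogateescape maps each undecodable byte to a lone surrogate code point,
# so raw bytes can be smuggled through a str payload...
as_text = data.decode('ascii', 'surrogateescape')

# ...and recovered byte-for-byte when encoded back with the same handler.
assert as_text.encode('ascii', 'surrogateescape') == data
```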
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 19:34:36 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 11 Feb 2013 19:34:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Raise_KeyError?= =?utf-8?q?_instead_of_OverflowError_when_getpwuid=27s_argument_is_out_of?= Message-ID: <3Z4bJD6STSzRFC@mail.python.org> http://hg.python.org/cpython/rev/a0983e46feb1 changeset: 82166:a0983e46feb1 branch: 3.3 parent: 82163:cabcddbed377 user: Serhiy Storchaka date: Mon Feb 11 20:32:47 2013 +0200 summary: Raise KeyError instead of OverflowError when getpwuid's argument is out of uid_t range. files: Lib/test/test_pwd.py | 9 +++++++++ Modules/pwdmodule.c | 6 +++++- 2 files changed, 14 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_pwd.py b/Lib/test/test_pwd.py --- a/Lib/test/test_pwd.py +++ b/Lib/test/test_pwd.py @@ -49,7 +49,9 @@ def test_errors(self): self.assertRaises(TypeError, pwd.getpwuid) + self.assertRaises(TypeError, pwd.getpwuid, 3.14) self.assertRaises(TypeError, pwd.getpwnam) + self.assertRaises(TypeError, pwd.getpwnam, 42) self.assertRaises(TypeError, pwd.getpwall, 42) # try to get some errors @@ -93,6 +95,13 @@ self.assertNotIn(fakeuid, byuids) self.assertRaises(KeyError, pwd.getpwuid, fakeuid) + # -1 shouldn't be a valid uid because it has a special meaning in many + # uid-related functions + self.assertRaises(KeyError, pwd.getpwuid, -1) + # should be out of uid_t range + self.assertRaises(KeyError, pwd.getpwuid, 2**128) + self.assertRaises(KeyError, pwd.getpwuid, -2**128) + def test_main(): support.run_unittest(PwdTest) diff --git a/Modules/pwdmodule.c b/Modules/pwdmodule.c --- a/Modules/pwdmodule.c +++ b/Modules/pwdmodule.c @@ -106,8 +106,12 @@ { uid_t uid; struct passwd *p; - if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) + if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) { + if 
(PyErr_ExceptionMatches(PyExc_OverflowError)) + PyErr_Format(PyExc_KeyError, + "getpwuid(): uid not found"); return NULL; + } if ((p = getpwuid(uid)) == NULL) { PyObject *uid_obj = _PyLong_FromUid(uid); if (uid_obj == NULL) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 11 19:34:38 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 11 Feb 2013 19:34:38 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Raise_KeyError_instead_of_OverflowError_when_getpwuid=27?= =?utf-8?q?s_argument_is_out_of?= Message-ID: <3Z4bJG2stBzSWy@mail.python.org> http://hg.python.org/cpython/rev/1e9fa629756c changeset: 82167:1e9fa629756c parent: 82164:a80b67611c6d parent: 82166:a0983e46feb1 user: Serhiy Storchaka date: Mon Feb 11 20:33:24 2013 +0200 summary: Raise KeyError instead of OverflowError when getpwuid's argument is out of uid_t range. files: Lib/test/test_pwd.py | 9 +++++++++ Modules/pwdmodule.c | 6 +++++- 2 files changed, 14 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_pwd.py b/Lib/test/test_pwd.py --- a/Lib/test/test_pwd.py +++ b/Lib/test/test_pwd.py @@ -49,7 +49,9 @@ def test_errors(self): self.assertRaises(TypeError, pwd.getpwuid) + self.assertRaises(TypeError, pwd.getpwuid, 3.14) self.assertRaises(TypeError, pwd.getpwnam) + self.assertRaises(TypeError, pwd.getpwnam, 42) self.assertRaises(TypeError, pwd.getpwall, 42) # try to get some errors @@ -93,6 +95,13 @@ self.assertNotIn(fakeuid, byuids) self.assertRaises(KeyError, pwd.getpwuid, fakeuid) + # -1 shouldn't be a valid uid because it has a special meaning in many + # uid-related functions + self.assertRaises(KeyError, pwd.getpwuid, -1) + # should be out of uid_t range + self.assertRaises(KeyError, pwd.getpwuid, 2**128) + self.assertRaises(KeyError, pwd.getpwuid, -2**128) + def test_main(): support.run_unittest(PwdTest) diff --git a/Modules/pwdmodule.c b/Modules/pwdmodule.c --- a/Modules/pwdmodule.c 
+++ b/Modules/pwdmodule.c @@ -106,8 +106,12 @@ { uid_t uid; struct passwd *p; - if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) + if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + PyErr_Format(PyExc_KeyError, + "getpwuid(): uid not found"); return NULL; + } if ((p = getpwuid(uid)) == NULL) { PyObject *uid_obj = _PyLong_FromUid(uid); if (uid_obj == NULL) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 02:04:38 2013 From: python-checkins at python.org (giampaolo.rodola) Date: Tue, 12 Feb 2013 02:04:38 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_modernize_some_modules=27_?= =?utf-8?q?code_by_using_with_statement_around_open=28=29?= Message-ID: <3Z4lyG1sPhzPV3@mail.python.org> http://hg.python.org/cpython/rev/cb876235f29d changeset: 82168:cb876235f29d user: Giampaolo Rodola' date: Tue Feb 12 02:04:27 2013 +0100 summary: modernize some modules' code by using with statement around open() files: Lib/argparse.py | 5 +---- Lib/cProfile.py | 7 +++---- Lib/http/cookiejar.py | 15 +++------------ Lib/keyword.py | 5 ++--- Lib/lib2to3/pgen2/grammar.py | 10 ++++------ Lib/mailbox.py | 18 ++++-------------- Lib/pkgutil.py | 17 ++++++++--------- Lib/profile.py | 7 +++---- Lib/pstats.py | 10 +++------- Lib/pydoc.py | 5 ++--- Lib/site.py | 5 ++--- Lib/symtable.py | 3 ++- Lib/turtle.py | 22 +++++++++++----------- Lib/xml/dom/expatbuilder.py | 10 ++-------- 14 files changed, 50 insertions(+), 89 deletions(-) diff --git a/Lib/argparse.py b/Lib/argparse.py --- a/Lib/argparse.py +++ b/Lib/argparse.py @@ -2010,16 +2010,13 @@ # replace arguments referencing files with the file content else: try: - args_file = open(arg_string[1:]) - try: + with open(arg_string[1:]) as args_file: arg_strings = [] for arg_line in args_file.read().splitlines(): for arg in self.convert_arg_line_to_args(arg_line): arg_strings.append(arg) arg_strings = 
self._read_args_from_files(arg_strings) new_arg_strings.extend(arg_strings) - finally: - args_file.close() except OSError: err = _sys.exc_info()[1] self.error(str(err)) diff --git a/Lib/cProfile.py b/Lib/cProfile.py --- a/Lib/cProfile.py +++ b/Lib/cProfile.py @@ -77,10 +77,9 @@ def dump_stats(self, file): import marshal - f = open(file, 'wb') - self.create_stats() - marshal.dump(self.stats, f) - f.close() + with open(file, 'wb') as f: + self.create_stats() + marshal.dump(self.stats, f) def create_stats(self): self.disable() diff --git a/Lib/http/cookiejar.py b/Lib/http/cookiejar.py --- a/Lib/http/cookiejar.py +++ b/Lib/http/cookiejar.py @@ -1761,11 +1761,8 @@ if self.filename is not None: filename = self.filename else: raise ValueError(MISSING_FILENAME_TEXT) - f = open(filename) - try: + with open(filename) as f: self._really_load(f, filename, ignore_discard, ignore_expires) - finally: - f.close() def revert(self, filename=None, ignore_discard=False, ignore_expires=False): @@ -1856,15 +1853,12 @@ if self.filename is not None: filename = self.filename else: raise ValueError(MISSING_FILENAME_TEXT) - f = open(filename, "w") - try: + with open(filename, "w") as f: # There really isn't an LWP Cookies 2.0 format, but this indicates # that there is extra information in here (domain_dot and # port_spec) while still being compatible with libwww-perl, I hope. 
f.write("#LWP-Cookies-2.0\n") f.write(self.as_lwp_str(ignore_discard, ignore_expires)) - finally: - f.close() def _really_load(self, f, filename, ignore_discard, ignore_expires): magic = f.readline() @@ -2055,8 +2049,7 @@ if self.filename is not None: filename = self.filename else: raise ValueError(MISSING_FILENAME_TEXT) - f = open(filename, "w") - try: + with open(filename, "w") as f: f.write(self.header) now = time.time() for cookie in self: @@ -2085,5 +2078,3 @@ "\t".join([cookie.domain, initial_dot, cookie.path, secure, expires, name, value])+ "\n") - finally: - f.close() diff --git a/Lib/keyword.py b/Lib/keyword.py --- a/Lib/keyword.py +++ b/Lib/keyword.py @@ -85,9 +85,8 @@ sys.exit(1) # write the output file - fp = open(optfile, 'w') - fp.write(''.join(format)) - fp.close() + with open(optfile, 'w') as fp: + fp.write(''.join(format)) if __name__ == "__main__": main() diff --git a/Lib/lib2to3/pgen2/grammar.py b/Lib/lib2to3/pgen2/grammar.py --- a/Lib/lib2to3/pgen2/grammar.py +++ b/Lib/lib2to3/pgen2/grammar.py @@ -86,15 +86,13 @@ def dump(self, filename): """Dump the grammar tables to a pickle file.""" - f = open(filename, "wb") - pickle.dump(self.__dict__, f, 2) - f.close() + with open(filename, "wb") as f: + pickle.dump(self.__dict__, f, 2) def load(self, filename): """Load the grammar tables from a pickle file.""" - f = open(filename, "rb") - d = pickle.load(f) - f.close() + with open(filename, "rb") as f: + d = pickle.load(f) self.__dict__.update(d) def copy(self): diff --git a/Lib/mailbox.py b/Lib/mailbox.py --- a/Lib/mailbox.py +++ b/Lib/mailbox.py @@ -366,14 +366,11 @@ def get_message(self, key): """Return a Message representation or raise a KeyError.""" subpath = self._lookup(key) - f = open(os.path.join(self._path, subpath), 'rb') - try: + with open(os.path.join(self._path, subpath), 'rb') as f: if self._factory: msg = self._factory(f) else: msg = MaildirMessage(f) - finally: - f.close() subdir, name = os.path.split(subpath) msg.set_subdir(subdir) if 
self.colon in name: @@ -383,11 +380,8 @@ def get_bytes(self, key): """Return a bytes representation or raise a KeyError.""" - f = open(os.path.join(self._path, self._lookup(key)), 'rb') - try: + with open(os.path.join(self._path, self._lookup(key)), 'rb') as f: return f.read().replace(linesep, b'\n') - finally: - f.close() def get_file(self, key): """Return a file-like representation or raise a KeyError.""" @@ -1033,7 +1027,7 @@ raise KeyError('No message with key: %s' % key) else: raise - try: + with f: if self._locked: _lock_file(f) try: @@ -1041,8 +1035,6 @@ finally: if self._locked: _unlock_file(f) - finally: - f.close() for name, key_list in self.get_sequences().items(): if key in key_list: msg.add_sequence(name) @@ -1060,7 +1052,7 @@ raise KeyError('No message with key: %s' % key) else: raise - try: + with f: if self._locked: _lock_file(f) try: @@ -1068,8 +1060,6 @@ finally: if self._locked: _unlock_file(f) - finally: - f.close() def get_file(self, key): """Return a file-like representation or raise a KeyError.""" diff --git a/Lib/pkgutil.py b/Lib/pkgutil.py --- a/Lib/pkgutil.py +++ b/Lib/pkgutil.py @@ -349,9 +349,8 @@ self.file.close() elif mod_type==imp.PY_COMPILED: if os.path.exists(self.filename[:-1]): - f = open(self.filename[:-1], 'r') - self.source = f.read() - f.close() + with open(self.filename[:-1], 'r') as f: + self.source = f.read() elif mod_type==imp.PKG_DIRECTORY: self.source = self._get_delegate().get_source() return self.source @@ -591,12 +590,12 @@ sys.stderr.write("Can't open %s: %s\n" % (pkgfile, msg)) else: - for line in f: - line = line.rstrip('\n') - if not line or line.startswith('#'): - continue - path.append(line) # Don't check for existence! - f.close() + with f: + for line in f: + line = line.rstrip('\n') + if not line or line.startswith('#'): + continue + path.append(line) # Don't check for existence! 
return path diff --git a/Lib/profile.py b/Lib/profile.py --- a/Lib/profile.py +++ b/Lib/profile.py @@ -373,10 +373,9 @@ print_stats() def dump_stats(self, file): - f = open(file, 'wb') - self.create_stats() - marshal.dump(self.stats, f) - f.close() + with open(file, 'wb') as f: + self.create_stats() + marshal.dump(self.stats, f) def create_stats(self): self.simulate_cmd_complete() diff --git a/Lib/pstats.py b/Lib/pstats.py --- a/Lib/pstats.py +++ b/Lib/pstats.py @@ -93,9 +93,8 @@ self.stats = {} return elif isinstance(arg, str): - f = open(arg, 'rb') - self.stats = marshal.load(f) - f.close() + with open(arg, 'rb') as f: + self.stats = marshal.load(f) try: file_stats = os.stat(arg) arg = time.ctime(file_stats.st_mtime) + " " + arg @@ -149,11 +148,8 @@ def dump_stats(self, filename): """Write the profile data to a file we know how to load back.""" - f = open(filename, 'wb') - try: + with open(filename, 'wb') as f: marshal.dump(self.stats, f) - finally: - f.close() # list the tuple indices and directions for sorting, # along with some printable description diff --git a/Lib/pydoc.py b/Lib/pydoc.py --- a/Lib/pydoc.py +++ b/Lib/pydoc.py @@ -1426,9 +1426,8 @@ """Page through text by invoking a program on a temporary file.""" import tempfile filename = tempfile.mktemp() - file = open(filename, 'w') - file.write(text) - file.close() + with open(filename, 'w') as file: + file.write(text) try: os.system(cmd + ' "' + filename + '"') finally: diff --git a/Lib/site.py b/Lib/site.py --- a/Lib/site.py +++ b/Lib/site.py @@ -384,9 +384,8 @@ for filename in self.__files: filename = os.path.join(dir, filename) try: - fp = open(filename, "r") - data = fp.read() - fp.close() + with open(filename, "r") as fp: + data = fp.read() break except OSError: pass diff --git a/Lib/symtable.py b/Lib/symtable.py --- a/Lib/symtable.py +++ b/Lib/symtable.py @@ -235,7 +235,8 @@ if __name__ == "__main__": import os, sys - src = open(sys.argv[0]).read() + with open(sys.argv[0]) as f: + src = f.read() 
mod = symtable(src, os.path.split(sys.argv[0])[1], "exec") for ident in mod.get_identifiers(): info = mod.lookup(ident) diff --git a/Lib/turtle.py b/Lib/turtle.py --- a/Lib/turtle.py +++ b/Lib/turtle.py @@ -3843,18 +3843,18 @@ key = "Turtle."+methodname docsdict[key] = eval(key).__doc__ - f = open("%s.py" % filename,"w") - keys = sorted([x for x in docsdict.keys() - if x.split('.')[1] not in _alias_list]) - f.write('docsdict = {\n\n') - for key in keys[:-1]: + with open("%s.py" % filename,"w") as f: + keys = sorted([x for x in docsdict.keys() + if x.split('.')[1] not in _alias_list]) + f.write('docsdict = {\n\n') + for key in keys[:-1]: + f.write('%s :\n' % repr(key)) + f.write(' """%s\n""",\n\n' % docsdict[key]) + key = keys[-1] f.write('%s :\n' % repr(key)) - f.write(' """%s\n""",\n\n' % docsdict[key]) - key = keys[-1] - f.write('%s :\n' % repr(key)) - f.write(' """%s\n"""\n\n' % docsdict[key]) - f.write("}\n") - f.close() + f.write(' """%s\n"""\n\n' % docsdict[key]) + f.write("}\n") + f.close() def read_docstrings(lang): """Read in docstrings from lang-specific docstring dictionary. 
diff --git a/Lib/xml/dom/expatbuilder.py b/Lib/xml/dom/expatbuilder.py --- a/Lib/xml/dom/expatbuilder.py +++ b/Lib/xml/dom/expatbuilder.py @@ -905,11 +905,8 @@ builder = ExpatBuilder() if isinstance(file, str): - fp = open(file, 'rb') - try: + with open(file, 'rb') as fp: result = builder.parseFile(fp) - finally: - fp.close() else: result = builder.parseFile(file) return result @@ -939,11 +936,8 @@ builder = FragmentBuilder(context) if isinstance(file, str): - fp = open(file, 'rb') - try: + with open(file, 'rb') as fp: result = builder.parseFile(fp) - finally: - fp.close() else: result = builder.parseFile(file) return result -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 04:50:10 2013 From: python-checkins at python.org (daniel.holth) Date: Tue, 12 Feb 2013 04:50:10 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_pep-0426=3A_remove_extras_fro?= =?utf-8?q?m_Setup-Requires-Dist?= Message-ID: <3Z4qdG2vX9zPkW@mail.python.org> http://hg.python.org/peps/rev/bdf21b565b3a changeset: 4733:bdf21b565b3a parent: 4728:e85481d9e6ef user: Daniel Holth date: Mon Feb 11 22:37:10 2013 -0500 summary: pep-0426: remove extras from Setup-Requires-Dist Today Setup-Requires-Dist is needed so that "python setup.py" will run at all. Pip executes setup.py to write the requirements to a file, installs them, and runs setup.py again to do the build/install. Not prepared to add extras to this stage as installing a package with extras would have different results depending on whether you were installing package[extra] from source or from a binary. files: pep-0426.txt | 18 ++++++------------ 1 files changed, 6 insertions(+), 12 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -392,9 +392,6 @@ compiler support or a package needed to generate a manifest from version control. -Distributions may also depend on optional features of other distributions. -See `Optional Features`_ for details. 
- Examples:: Setup-Requires-Dist: custom_setup_command @@ -1086,8 +1083,7 @@ Distributions may use the ``Provides-Extra`` field to declare additional features that they provide. Environment markers may then be used to indicate -that particular dependencies (as specified in ``Requires-Dist`` or -``Setup-Requires-Dist``) are needed only when a particular optional +that particular dependencies are needed only when a particular optional feature has been requested. Other distributions then require an optional feature by placing it @@ -1095,10 +1091,9 @@ dependency. Multiple features can be requisted by separating them with a comma within the brackets. -The full set of dependency requirements is then the union of the -sets created by first evaluating the `Requires-Dist` (or -`Setup-Requires-Dist`) fields with `extra` set to `None` and then to -the name of each requested feature. +The full set of dependency requirements is then the union of the sets +created by first evaluating the `Requires-Dist` fields with `extra` +set to `None` and then to the name of each requested feature. Example:: @@ -1159,9 +1154,8 @@ * Optional feature mechanism * the new ``Provides-Extra`` field - * ``extra`` expression defined for environment markers. 
- * optional feature support in ``Requires-Dist`` and - ``Setup-Requires-Dist`` + * ``extra`` expression defined for environment markers + * optional feature support in ``Requires-Dist`` * Metadata extension mechanism -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Tue Feb 12 04:50:11 2013 From: python-checkins at python.org (daniel.holth) Date: Tue, 12 Feb 2013 04:50:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_pep-0426=3A_don=27t_require_P?= =?utf-8?q?rovides-Dist=3A_our_name?= Message-ID: <3Z4qdH5jpmzPks@mail.python.org> http://hg.python.org/peps/rev/08105ae9d5af changeset: 4734:08105ae9d5af user: Daniel Holth date: Mon Feb 11 22:49:33 2013 -0500 summary: pep-0426: don't require Provides-Dist: our name DRY rule for Provides-Dist: distributions always provide their own name. No point asking projects to choose their metadata version; wacky version numbers will not work correctly with tools regardless of the metadata version. files: pep-0426.txt | 17 ++++------------- 1 files changed, 4 insertions(+), 13 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -43,7 +43,7 @@ distribution. This format is parseable by the ``email`` module with an appropriate -``email.policy.Policy()``. When ``metadata`` is a Unicode string, +``email.policy.Policy()``. When ``metadata`` is a Unicode string, ```email.parser.Parser().parsestr(metadata)`` is a serviceable parser. There are two standard locations for these metadata files: @@ -277,11 +277,6 @@ ``Name`` or ``Name (Version)``, following the formats of the corresponding field definitions. -For ease of metadata consumption, distributions are required to explicitly -include a ``Provides-Dist`` entry for their own name and version. This also -allows developers of a project to discourage users explicitly depending on -the project (by deliberately omitting this entry). - A distribution may provide additional names, e.g. 
to indicate that multiple projects have been merged into and replaced by a single distribution or to indicate that this project is a substitute for another. @@ -388,9 +383,9 @@ ---------------------------------- Like ``Requires-Dist``, but names dependencies needed in order to build, -package or install the distribution. Commonly used to bring in extra -compiler support or a package needed to generate a manifest from -version control. +package or install the distribution -- in distutils, a dependency imported +by ``setup.py`` itself. Commonly used to bring in extra compiler support +or a package needed to generate a manifest from version control. Examples:: @@ -552,9 +547,6 @@ N[.N]+[{a|b|c|rc}N][.postN][.devN] Version identifiers which do not comply with this scheme are an error. -Projects which wish to use non-compliant version identifiers must restrict -themselves to metadata v1.1 (PEP 314) or earlier, as those specifications -do not constrain the versioning scheme. Any given version will be a "release", "pre-release", "post-release" or "developmental release" as defined in the following sections. @@ -1408,7 +1400,6 @@ Description: Description =========== - A description of the package. 
""" -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Tue Feb 12 04:50:13 2013 From: python-checkins at python.org (daniel.holth) Date: Tue, 12 Feb 2013 04:50:13 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps_=28merge_default_-=3E_default=29?= =?utf-8?q?=3A_merge?= Message-ID: <3Z4qdK2Gg3zPl8@mail.python.org> http://hg.python.org/peps/rev/6c0ec082c797 changeset: 4735:6c0ec082c797 parent: 4734:08105ae9d5af parent: 4732:fe7cd22d1064 user: Daniel Holth date: Mon Feb 11 22:50:02 2013 -0500 summary: merge files: pep-0422.txt | 48 ++++++++++++++++++++++++++++++---------- pep-0431.txt | 28 ++++++++++------------- 2 files changed, 48 insertions(+), 28 deletions(-) diff --git a/pep-0422.txt b/pep-0422.txt --- a/pep-0422.txt +++ b/pep-0422.txt @@ -2,13 +2,14 @@ Title: Simple class initialisation hook Version: $Revision$ Last-Modified: $Date$ -Author: Nick Coghlan +Author: Nick Coghlan , + Daniel Urban Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 5-Jun-2012 Python-Version: 3.4 -Post-History: 5-Jun-2012 +Post-History: 5-Jun-2012, 10-Feb-2012 Abstract @@ -19,7 +20,7 @@ by setting the ``__metaclass__`` attribute in the class body. 
While doing this implicitly from called code required the use of an implementation detail (specifically, ``sys._getframes()``), it could also be done explicitly in a -fully supported fashion (for example, by passing ``locals()`` to an +fully supported fashion (for example, by passing ``locals()`` to a function that calculated a suitable ``__metaclass__`` value) There is currently no corresponding mechanism in Python 3 that allows the @@ -44,7 +45,7 @@ While in many cases these two meanings end up referring to one and the same object, there are two situations where that is not the case: -* If the metaclass hint refers to a subclass of ``type``, then it is +* If the metaclass hint refers to an instance of ``type``, then it is considered as a candidate metaclass along with the metaclasses of all of the parents of the class being defined. If a more appropriate metaclass is found amongst the candidates, then it will be used instead of the one @@ -114,7 +115,7 @@ # This is invoked after the class is created, but before any # explicit decorators are called # The usual super() mechanisms are used to correctly support - # multiple inheritance. The decorator style invocation helps + # multiple inheritance. The class decorator style signature helps # ensure that invoking the parent class is as simple as possible. If present on the created object, this new hook will be called by the class @@ -125,10 +126,17 @@ If a metaclass wishes to block class initialisation for some reason, it must arrange for ``cls.__init_class__`` to trigger ``AttributeError``. +Note, that when ``__init_class__`` is called, the name of the class is not +bound to the new class object yet. As a consequence, the two argument form +of ``super()`` cannot be used to call methods (e.g., ``super(Example, cls)`` +wouldn't work in the example above). However, the zero argument form of +``super()`` works as expected, since the ``__class__`` reference is already +initialised. 
+ This general proposal is not a new idea (it was first suggested for inclusion in the language definition `more than 10 years ago`_, and a similar mechanism has long been supported by `Zope's ExtensionClass`_), -but I believe the situation has changed sufficiently in recent years that +but the situation has changed sufficiently in recent years that the idea is worth reconsidering. @@ -156,7 +164,7 @@ class object) clearly distinct in your mind. Even when you know the rules, it's still easy to make a mistake if you're not being extremely careful. An earlier version of this PEP actually included such a mistake: it -stated "instance of type" for a constraint that is actually "subclass of +stated "subclass of type" for a constraint that is actually "instance of type". Understanding the proposed class initialisation hook only requires @@ -278,17 +286,24 @@ Using the current version of the PEP, the scheme originally proposed could be implemented as:: - class DynamicDecorators: + class DynamicDecorators(Base): @classmethod def __init_class__(cls): - super(DynamicDecorators, cls).__init_class__() + # Process any classes later in the MRO + try: + mro_chain = super().__init_class__ + except AttributeError: + pass + else: + mro_chain() + # Process any __decorators__ attributes in the MRO for entry in reversed(cls.mro()): decorators = entry.__dict__.get("__decorators__", ()) for deco in reversed(decorators): cls = deco(cls) -Any subclasses of this type would automatically have the contents of any -``__decorators__`` attributes processed and invoked. +Any subclasses of ``DynamicDecorators`` would then automatically have the +contents of any ``__decorators__`` attributes processed and invoked. The mechanism in the current PEP is considered superior, as many issues to do with ordering and the same decorator being invoked multiple times @@ -325,6 +340,12 @@ ``super()``), and could not make use of those features themselves. 
+Reference Implementation +======================== + +The reference implementation has been posted to the `issue tracker`_. + + References ========== @@ -337,12 +358,15 @@ .. _Zope's ExtensionClass: http://docs.zope.org/zope_secrets/extensionclass.html +.. _issue tracker: + http://bugs.python.org/issue17044 + Copyright ========= This document has been placed in the public domain. - + .. Local Variables: mode: indented-text diff --git a/pep-0431.txt b/pep-0431.txt --- a/pep-0431.txt +++ b/pep-0431.txt @@ -8,7 +8,7 @@ Type: Standards Track Content-Type: text/x-rst Created: 11-Dec-2012 -Post-History: 11-Dec-2012, 28-Dec-2012 +Post-History: 11-Dec-2012, 28-Dec-2012, 28-Jan-2013 Abstract @@ -94,7 +94,7 @@ When changing over from daylight savings time (DST) the clock is turned back one hour. This means that the times during that hour happens twice, once -without DST and then once with DST. Similarly, when changing to daylight +with DST and then once without DST. Similarly, when changing to daylight savings time, one hour goes missing. The current time zone API can not differentiate between the two ambiguous @@ -156,10 +156,10 @@ function, one new exception and four new collections. In addition to this, several methods on the datetime object gets a new ``is_dst`` parameter. -New class ``DstTzInfo`` -^^^^^^^^^^^^^^^^^^^^^^^^ +New class ``dsttimezone`` +^^^^^^^^^^^^^^^^^^^^^^^^^ -This class provides a concrete implementation of the ``zoneinfo`` base +This class provides a concrete implementation of the ``tzinfo`` base class that implements DST support. @@ -176,10 +176,10 @@ database which should be used. If not specified, the function will look for databases in the following order: -1. Use the database in ``/usr/share/zoneinfo``, if it exists. +1. Check if the `tzdata-update` module is installed, and then use that + database. -2. Check if the `tzdata-update` module is installed, and then use that - database. +2. Use the database in ``/usr/share/zoneinfo``, if it exists. 3. 
Use the Python-provided database in ``Lib/tzdata``. @@ -206,7 +206,7 @@ ``False`` will specify that the given datetime should be interpreted as not happening during daylight savings time, i.e. that the time specified is after -the change from DST. +the change from DST. This is default to preserve existing behavior. ``True`` will specify that the given datetime should be interpreted as happening during daylight savings time, i.e. that the time specified is before the change @@ -224,7 +224,7 @@ This exception is a subclass of KeyError and raised when giving a time zone specification that can't be found:: - >>> datetime.Timezone('Europe/New_York') + >>> datetime.zoneinfo('Europe/New_York') Traceback (most recent call last): ... UnknownTimeZoneError: There is no time zone called 'Europe/New_York' @@ -250,8 +250,8 @@ * ``NonExistentTimeError`` - This exception is raised when giving a datetime specification that is ambiguous - while setting ``is_dst`` to None:: + This exception is raised when giving a datetime specification for a time that due to + daylight saving does not exist, while setting ``is_dst`` to None:: >>> datetime(2012, 3, 25, 2, 0, tzinfo=zoneinfo('Europe/Stockholm'), is_dst=None) >>> @@ -266,13 +266,9 @@ * ``all_timezones`` is the exhaustive list of the time zone names that can be used, listed alphabethically. -* ``all_timezones_set`` is a set of the time zones in ``all_timezones``. - * ``common_timezones`` is a list of useful, current time zones, listed alphabethically. -* ``common_timezones_set`` is a set of the time zones in ``common_timezones``. 
- The ``tzdata-update``-package ----------------------------- -- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Tue Feb 12 05:59:05 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Tue, 12 Feb 2013 05:59:05 +0100 Subject: [Python-checkins] Daily reference leaks (cb876235f29d): sum=0 Message-ID: results for cb876235f29d on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogV30PCd', '-x'] From python-checkins at python.org Tue Feb 12 07:11:39 2013 From: python-checkins at python.org (ned.deily) Date: Tue, 12 Feb 2013 07:11:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MTEx?= =?utf-8?q?=3A_Prevent_test=5Fsurrogates_=28test=5Ffileio=29_failure_on_OS?= =?utf-8?q?_X_10=2E4=2E?= Message-ID: <3Z4tmW0NQzzPsY@mail.python.org> http://hg.python.org/cpython/rev/9497adb7355f changeset: 82169:9497adb7355f branch: 2.7 parent: 82165:e44fa71d76fe user: Ned Deily date: Mon Feb 11 22:10:59 2013 -0800 summary: Issue #17111: Prevent test_surrogates (test_fileio) failure on OS X 10.4. An odd bug in OS X 10.4 causes open(2) on a non-existent, invalid-encoded filename to return errno 22, EINVAL: Invalid argument, instead of the expected errno 2, ENOENT: No such file or directory, *if* the containing directory is not empty. That caused frequent failures when running the buildbot tests on 10.4 depending on the state of the test working directory. The failure is easy to reproduce on 10.4 by running the test directly (not with regrtest), first in an empty directory, then after adding a file to it. The fix is to check for and pass if either errno is returned. 
files: Lib/test/test_fileio.py | 5 +++-- Misc/NEWS | 2 ++ 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_fileio.py b/Lib/test/test_fileio.py --- a/Lib/test/test_fileio.py +++ b/Lib/test/test_fileio.py @@ -450,8 +450,9 @@ env = dict(os.environ) env[b'LC_CTYPE'] = b'C' _, out = run_python('-c', 'import _io; _io.FileIO(%r)' % filename, env=env) - if ('UnicodeEncodeError' not in out and - 'IOError: [Errno 2] No such file or directory' not in out): + if ('UnicodeEncodeError' not in out and not + ( ('IOError: [Errno 2] No such file or directory' in out) or + ('IOError: [Errno 22] Invalid argument' in out) ) ): self.fail('Bad output: %r' % out) def testUnclosedFDOnException(self): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -818,6 +818,8 @@ - Issue #16698: Skip posix test_getgroups when built with OS X deployment target prior to 10.6. +- Issue #17111: Prevent test_surrogates (test_fileio) failure on OS X 10.4. + Build ----- -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 08:36:41 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 08:36:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Clean_trailing?= =?utf-8?q?_whitespaces_in_Makefile=2Epre=2Ein_and_grpmodule=2Ec=2E?= Message-ID: <3Z4wfd260LzPkW@mail.python.org> http://hg.python.org/cpython/rev/7cb403f8a865 changeset: 82170:7cb403f8a865 branch: 2.7 user: Serhiy Storchaka date: Tue Feb 12 09:20:19 2013 +0200 summary: Clean trailing whitespaces in Makefile.pre.in and grpmodule.c. 
files: Makefile.pre.in | 6 +++--- Modules/grpmodule.c | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -475,7 +475,7 @@ libpython$(VERSION).dylib: $(LIBRARY_OBJS) $(CC) -dynamiclib -Wl,-single_module $(LDFLAGS) -undefined dynamic_lookup -Wl,-install_name,$(prefix)/lib/libpython$(VERSION).dylib -Wl,-compatibility_version,$(VERSION) -Wl,-current_version,$(VERSION) -o $@ $(LIBRARY_OBJS) $(SHLIBS) $(LIBC) $(LIBM) $(LDLAST); \ - + libpython$(VERSION).sl: $(LIBRARY_OBJS) $(LDSHARED) -o $@ $(LIBRARY_OBJS) $(MODLIBS) $(SHLIBS) $(LIBC) $(LIBM) $(LDLAST) @@ -911,7 +911,7 @@ plat-mac/lib-scriptpackages/Netscape \ plat-mac/lib-scriptpackages/StdSuites \ plat-mac/lib-scriptpackages/SystemEvents \ - plat-mac/lib-scriptpackages/Terminal + plat-mac/lib-scriptpackages/Terminal PLATMACPATH=:plat-mac:plat-mac/lib-scriptpackages LIBSUBDIRS= lib-tk lib-tk/test lib-tk/test/test_tkinter \ lib-tk/test/test_ttk site-packages test test/data \ @@ -1333,7 +1333,7 @@ .PHONY: maninstall libinstall inclinstall libainstall sharedinstall .PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure .PHONY: frameworkinstallmaclib frameworkinstallapps frameworkinstallunixtools -.PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean +.PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean .PHONY: smelly funny patchcheck altmaninstall .PHONY: gdbhooks diff --git a/Modules/grpmodule.c b/Modules/grpmodule.c --- a/Modules/grpmodule.c +++ b/Modules/grpmodule.c @@ -10,8 +10,8 @@ static PyStructSequence_Field struct_group_type_fields[] = { {"gr_name", "group name"}, {"gr_passwd", "password"}, - {"gr_gid", "group id"}, - {"gr_mem", "group memebers"}, + {"gr_gid", "group id"}, + {"gr_mem", "group memebers"}, {0} }; @@ -113,7 +113,7 @@ if (!py_str_name) return NULL; name = PyString_AS_STRING(py_str_name); - + if ((p = getgrnam(name)) == NULL) { 
PyErr_Format(PyExc_KeyError, "getgrnam(): name not found: %s", name); Py_DECREF(py_str_name); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 08:36:42 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 08:36:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Clean_trailing?= =?utf-8?q?_whitespaces_in_Makefile=2Epre=2Ein=2E?= Message-ID: <3Z4wff520lzPyw@mail.python.org> http://hg.python.org/cpython/rev/be4aea0cb120 changeset: 82171:be4aea0cb120 branch: 3.2 parent: 82162:f83581135ec4 user: Serhiy Storchaka date: Tue Feb 12 09:21:36 2013 +0200 summary: Clean trailing whitespaces in Makefile.pre.in. files: Makefile.pre.in | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -509,7 +509,7 @@ -install_name $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Versions/$(VERSION)/$(PYTHONFRAMEWORK) \ -compatibility_version $(VERSION) \ -current_version $(VERSION) \ - -framework CoreFoundation $(LIBS); + -framework CoreFoundation $(LIBS); $(INSTALL) -d -m $(DIRMODE) \ $(PYTHONFRAMEWORKDIR)/Versions/$(VERSION)/Resources/English.lproj $(INSTALL_DATA) $(RESSRCDIR)/Info.plist \ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 08:36:44 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 08:36:44 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzQ1OTE6?= =?utf-8?q?_Uid_and_gid_values_larger_than_2**31_are_supported_now=2E?= Message-ID: <3Z4wfh4QnbzPx1@mail.python.org> http://hg.python.org/cpython/rev/3893ab574c55 changeset: 82172:3893ab574c55 branch: 3.2 user: Serhiy Storchaka date: Tue Feb 12 09:24:16 2013 +0200 summary: Issue #4591: Uid and gid values larger than 2**31 are supported now. 
files: Lib/test/test_posix.py | 29 ++- Lib/test/test_pwd.py | 9 + Makefile.pre.in | 6 + Misc/NEWS | 2 + Modules/grpmodule.c | 17 +- Modules/posixmodule.c | 327 ++++++++++++++++++---------- Modules/posixmodule.h | 25 ++ Modules/pwdmodule.c | 20 +- 8 files changed, 301 insertions(+), 134 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -232,10 +232,20 @@ else: self.assertTrue(stat.S_ISFIFO(posix.stat(support.TESTFN).st_mode)) - def _test_all_chown_common(self, chown_func, first_param): + def _test_all_chown_common(self, chown_func, first_param, stat_func): """Common code for chown, fchown and lchown tests.""" + def check_stat(): + if stat_func is not None: + stat = stat_func(first_param) + self.assertEqual(stat.st_uid, os.getuid()) + self.assertEqual(stat.st_gid, os.getgid()) # test a successful chown call chown_func(first_param, os.getuid(), os.getgid()) + check_stat() + chown_func(first_param, -1, os.getgid()) + check_stat() + chown_func(first_param, os.getuid(), -1) + check_stat() if os.getuid() == 0: try: @@ -255,8 +265,12 @@ "behavior") else: # non-root cannot chown to root, raises OSError - self.assertRaises(OSError, chown_func, - first_param, 0, 0) + self.assertRaises(OSError, chown_func, first_param, 0, 0) + check_stat() + self.assertRaises(OSError, chown_func, first_param, -1, 0) + check_stat() + self.assertRaises(OSError, chown_func, first_param, 0, -1) + check_stat() @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()") def test_chown(self): @@ -266,7 +280,8 @@ # re-create the file open(support.TESTFN, 'w').close() - self._test_all_chown_common(posix.chown, support.TESTFN) + self._test_all_chown_common(posix.chown, support.TESTFN, + getattr(posix, 'stat', None)) @unittest.skipUnless(hasattr(posix, 'fchown'), "test needs os.fchown()") def test_fchown(self): @@ -276,7 +291,8 @@ test_file = open(support.TESTFN, 'w') try: fd = test_file.fileno() - 
self._test_all_chown_common(posix.fchown, fd) + self._test_all_chown_common(posix.fchown, fd, + getattr(posix, 'fstat', None)) finally: test_file.close() @@ -285,7 +301,8 @@ os.unlink(support.TESTFN) # create a symlink os.symlink(_DUMMY_SYMLINK, support.TESTFN) - self._test_all_chown_common(posix.lchown, support.TESTFN) + self._test_all_chown_common(posix.lchown, support.TESTFN, + getattr(posix, 'lstat', None)) def test_chdir(self): if hasattr(posix, 'chdir'): diff --git a/Lib/test/test_pwd.py b/Lib/test/test_pwd.py --- a/Lib/test/test_pwd.py +++ b/Lib/test/test_pwd.py @@ -49,7 +49,9 @@ def test_errors(self): self.assertRaises(TypeError, pwd.getpwuid) + self.assertRaises(TypeError, pwd.getpwuid, 3.14) self.assertRaises(TypeError, pwd.getpwnam) + self.assertRaises(TypeError, pwd.getpwnam, 42) self.assertRaises(TypeError, pwd.getpwall, 42) # try to get some errors @@ -93,6 +95,13 @@ self.assertNotIn(fakeuid, byuids) self.assertRaises(KeyError, pwd.getpwuid, fakeuid) + # -1 shouldn't be a valid uid because it has a special meaning in many + # uid-related functions + self.assertRaises(KeyError, pwd.getpwuid, -1) + # should be out of uid_t range + self.assertRaises(KeyError, pwd.getpwuid, 2**128) + self.assertRaises(KeyError, pwd.getpwuid, -2**128) + def test_main(): support.run_unittest(PwdTest) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -587,6 +587,12 @@ Modules/python.o: $(srcdir)/Modules/python.c $(MAINCC) -c $(PY_CORE_CFLAGS) -o $@ $(srcdir)/Modules/python.c +Modules/posixmodule.o: $(srcdir)/Modules/posixmodule.c $(srcdir)/Modules/posixmodule.h + +Modules/grpmodule.o: $(srcdir)/Modules/grpmodule.c $(srcdir)/Modules/posixmodule.h + +Modules/pwdmodule.o: $(srcdir)/Modules/pwdmodule.c $(srcdir)/Modules/posixmodule.h + Python/dynload_shlib.o: $(srcdir)/Python/dynload_shlib.c Makefile $(CC) -c $(PY_CORE_CFLAGS) \ -DSOABI='"$(SOABI)"' \ diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -226,6 +226,8 
@@ - Issue #17052: unittest discovery should use self.testLoader. +- Issue #4591: Uid and gid values larger than 2**31 are supported now. + - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. - Issue #17149: Fix random.vonmisesvariate to always return results in diff --git a/Modules/grpmodule.c b/Modules/grpmodule.c --- a/Modules/grpmodule.c +++ b/Modules/grpmodule.c @@ -2,8 +2,8 @@ /* UNIX group file access module */ #include "Python.h" +#include "posixmodule.h" -#include #include static PyStructSequence_Field struct_group_type_fields[] = { @@ -69,7 +69,7 @@ Py_INCREF(Py_None); } #endif - SET(setIndex++, PyLong_FromLong((long) p->gr_gid)); + SET(setIndex++, _PyLong_FromGid(p->gr_gid)); SET(setIndex++, w); #undef SET @@ -85,17 +85,24 @@ grp_getgrgid(PyObject *self, PyObject *pyo_id) { PyObject *py_int_id; - unsigned int gid; + gid_t gid; struct group *p; py_int_id = PyNumber_Long(pyo_id); if (!py_int_id) return NULL; - gid = PyLong_AS_LONG(py_int_id); + if (!_Py_Gid_Converter(py_int_id, &gid)) { + Py_DECREF(py_int_id); + return NULL; + } Py_DECREF(py_int_id); if ((p = getgrgid(gid)) == NULL) { - PyErr_Format(PyExc_KeyError, "getgrgid(): gid not found: %d", gid); + PyObject *gid_obj = _PyLong_FromGid(gid); + if (gid_obj == NULL) + return NULL; + PyErr_Format(PyExc_KeyError, "getgrgid(): gid not found: %S", gid_obj); + Py_DECREF(gid_obj); return NULL; } return mkgrent(p); diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -26,6 +26,9 @@ #define PY_SSIZE_T_CLEAN #include "Python.h" +#ifndef MS_WINDOWS +#include "posixmodule.h" +#endif #if defined(__VMS) # include @@ -347,6 +350,134 @@ #endif #endif + +#ifndef MS_WINDOWS +PyObject * +_PyLong_FromUid(uid_t uid) +{ + if (uid == (uid_t)-1) + return PyLong_FromLong(-1); + return PyLong_FromUnsignedLong(uid); +} + +PyObject * +_PyLong_FromGid(gid_t gid) +{ + if (gid == (gid_t)-1) + return PyLong_FromLong(-1); + return 
PyLong_FromUnsignedLong(gid); +} + +int +_Py_Uid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow < 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(uid_t *)p = (uid_t)-1; + } + else { + /* unsigned uid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((uid_t)uresult == (uid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(uid_t) < sizeof(long) && + (unsigned long)(uid_t)uresult != uresult) + goto OverflowUp; + *(uid_t *)p = (uid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "user id is less than minimum"); + return 0; + +OverflowUp: + PyErr_SetString(PyExc_OverflowError, + "user id is greater than maximum"); + return 0; +} + +int +_Py_Gid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow < 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(gid_t *)p = (gid_t)-1; + } + else { + /* unsigned gid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((gid_t)uresult == (gid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(gid_t) < sizeof(long) && + 
(unsigned long)(gid_t)uresult != uresult) + goto OverflowUp; + *(gid_t *)p = (gid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "group id is less than minimum"); + return 0; + +OverflowUp: + PyErr_SetString(PyExc_OverflowError, + "group id is greater than maximum"); + return 0; +} +#endif /* MS_WINDOWS */ + + #if defined _MSC_VER && _MSC_VER >= 1400 /* Microsoft CRT in VS2005 and higher will verify that a filehandle is * valid and raise an assertion if it isn't. @@ -1643,8 +1774,13 @@ PyStructSequence_SET_ITEM(v, 2, PyLong_FromLong((long)st->st_dev)); #endif PyStructSequence_SET_ITEM(v, 3, PyLong_FromLong((long)st->st_nlink)); - PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong((long)st->st_uid)); - PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong((long)st->st_gid)); +#if defined(MS_WINDOWS) + PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong(0)); + PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong(0)); +#else + PyStructSequence_SET_ITEM(v, 4, _PyLong_FromUid(st->st_uid)); + PyStructSequence_SET_ITEM(v, 5, _PyLong_FromGid(st->st_gid)); +#endif #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 6, PyLong_FromLongLong((PY_LONG_LONG)st->st_size)); @@ -2173,15 +2309,17 @@ { PyObject *opath; char *path; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "O&ll:chown", + if (!PyArg_ParseTuple(args, "O&O&O&:chown", PyUnicode_FSConverter, &opath, - &uid, &gid)) + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; path = PyBytes_AsString(opath); Py_BEGIN_ALLOW_THREADS - res = chown(path, (uid_t) uid, (gid_t) gid); + res = chown(path, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error_with_allocated_filename(opath); @@ -2201,12 +2339,15 @@ posix_fchown(PyObject *self, PyObject *args) { int fd; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "ill:fchown", &fd, &uid, &gid)) + if (!PyArg_ParseTuple(args, "iO&O&:fchown", &fd, + _Py_Uid_Converter, 
&uid, + _Py_Gid_Converter, &gid)) return NULL; Py_BEGIN_ALLOW_THREADS - res = fchown(fd, (uid_t) uid, (gid_t) gid); + res = fchown(fd, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error(); @@ -2225,15 +2366,17 @@ { PyObject *opath; char *path; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "O&ll:lchown", + if (!PyArg_ParseTuple(args, "O&O&O&:lchown", PyUnicode_FSConverter, &opath, - &uid, &gid)) + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; path = PyBytes_AsString(opath); Py_BEGIN_ALLOW_THREADS - res = lchown(path, (uid_t) uid, (gid_t) gid); + res = lchown(path, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error_with_allocated_filename(opath); @@ -4288,7 +4431,7 @@ static PyObject * posix_getegid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getegid()); + return _PyLong_FromGid(getegid()); } #endif @@ -4301,7 +4444,7 @@ static PyObject * posix_geteuid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)geteuid()); + return _PyLong_FromUid(geteuid()); } #endif @@ -4314,7 +4457,7 @@ static PyObject * posix_getgid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getgid()); + return _PyLong_FromGid(getgid()); } #endif @@ -4389,7 +4532,7 @@ if (result != NULL) { int i; for (i = 0; i < n; ++i) { - PyObject *o = PyLong_FromLong((long)alt_grouplist[i]); + PyObject *o = _PyLong_FromGid(alt_grouplist[i]); if (o == NULL) { Py_DECREF(result); result = NULL; @@ -4420,14 +4563,25 @@ PyObject *oname; char *username; int res; - long gid; - - if (!PyArg_ParseTuple(args, "O&l:initgroups", - PyUnicode_FSConverter, &oname, &gid)) +#ifdef __APPLE__ + int gid; +#else + gid_t gid; +#endif + +#ifdef __APPLE__ + if (!PyArg_ParseTuple(args, "O&i:initgroups", + PyUnicode_FSConverter, &oname, + &gid)) +#else + if (!PyArg_ParseTuple(args, "O&O&:initgroups", + PyUnicode_FSConverter, &oname, + _Py_Gid_Converter, &gid)) +#endif return NULL; username = 
PyBytes_AS_STRING(oname); - res = initgroups(username, (gid_t) gid); + res = initgroups(username, gid); Py_DECREF(oname); if (res == -1) return PyErr_SetFromErrno(PyExc_OSError); @@ -4602,7 +4756,7 @@ static PyObject * posix_getuid(PyObject *self, PyObject *noargs) { - return PyLong_FromLong((long)getuid()); + return _PyLong_FromUid(getuid()); } #endif @@ -4742,15 +4896,9 @@ static PyObject * posix_setuid(PyObject *self, PyObject *args) { - long uid_arg; uid_t uid; - if (!PyArg_ParseTuple(args, "l:setuid", &uid_arg)) - return NULL; - uid = uid_arg; - if (uid != uid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setuid", _Py_Uid_Converter, &uid)) + return NULL; if (setuid(uid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -4767,15 +4915,9 @@ static PyObject * posix_seteuid (PyObject *self, PyObject *args) { - long euid_arg; uid_t euid; - if (!PyArg_ParseTuple(args, "l", &euid_arg)) - return NULL; - euid = euid_arg; - if (euid != euid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:seteuid", _Py_Uid_Converter, &euid)) + return NULL; if (seteuid(euid) < 0) { return posix_error(); } else { @@ -4793,15 +4935,9 @@ static PyObject * posix_setegid (PyObject *self, PyObject *args) { - long egid_arg; gid_t egid; - if (!PyArg_ParseTuple(args, "l", &egid_arg)) - return NULL; - egid = egid_arg; - if (egid != egid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setegid", _Py_Gid_Converter, &egid)) + return NULL; if (setegid(egid) < 0) { return posix_error(); } else { @@ -4819,23 +4955,11 @@ static PyObject * posix_setreuid (PyObject *self, PyObject *args) { - long ruid_arg, euid_arg; uid_t ruid, euid; - if (!PyArg_ParseTuple(args, "ll", &ruid_arg, &euid_arg)) - return NULL; - if (ruid_arg == -1) - ruid = (uid_t)-1; /* let the compiler choose how -1 fits */ - else 
- ruid = ruid_arg; /* otherwise, assign from our long */ - if (euid_arg == -1) - euid = (uid_t)-1; - else - euid = euid_arg; - if ((euid_arg != -1 && euid != euid_arg) || - (ruid_arg != -1 && ruid != ruid_arg)) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setreuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid)) + return NULL; if (setreuid(ruid, euid) < 0) { return posix_error(); } else { @@ -4853,23 +4977,11 @@ static PyObject * posix_setregid (PyObject *self, PyObject *args) { - long rgid_arg, egid_arg; gid_t rgid, egid; - if (!PyArg_ParseTuple(args, "ll", &rgid_arg, &egid_arg)) - return NULL; - if (rgid_arg == -1) - rgid = (gid_t)-1; /* let the compiler choose how -1 fits */ - else - rgid = rgid_arg; /* otherwise, assign from our long */ - if (egid_arg == -1) - egid = (gid_t)-1; - else - egid = egid_arg; - if ((egid_arg != -1 && egid != egid_arg) || - (rgid_arg != -1 && rgid != rgid_arg)) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setregid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid)) + return NULL; if (setregid(rgid, egid) < 0) { return posix_error(); } else { @@ -4887,15 +4999,9 @@ static PyObject * posix_setgid(PyObject *self, PyObject *args) { - long gid_arg; gid_t gid; - if (!PyArg_ParseTuple(args, "l:setgid", &gid_arg)) - return NULL; - gid = gid_arg; - if (gid != gid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setgid", _Py_Gid_Converter, &gid)) + return NULL; if (setgid(gid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -4934,18 +5040,7 @@ Py_DECREF(elem); return NULL; } else { - unsigned long x = PyLong_AsUnsignedLong(elem); - if (PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); - Py_DECREF(elem); - return NULL; - } - grouplist[i] = x; - /* read back the value to see if it fitted in 
gid_t */ - if (grouplist[i] != x) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); + if (!_Py_Gid_Converter(elem, &grouplist[i])) { Py_DECREF(elem); return NULL; } @@ -7694,9 +7789,11 @@ static PyObject* posix_setresuid (PyObject *self, PyObject *args) { - /* We assume uid_t is no larger than a long. */ - long ruid, euid, suid; - if (!PyArg_ParseTuple(args, "lll", &ruid, &euid, &suid)) + uid_t ruid, euid, suid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid, + _Py_Uid_Converter, &suid)) return NULL; if (setresuid(ruid, euid, suid) < 0) return posix_error(); @@ -7712,9 +7809,11 @@ static PyObject* posix_setresgid (PyObject *self, PyObject *args) { - /* We assume uid_t is no larger than a long. */ - long rgid, egid, sgid; - if (!PyArg_ParseTuple(args, "lll", &rgid, &egid, &sgid)) + gid_t rgid, egid, sgid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresgid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid, + _Py_Gid_Converter, &sgid)) return NULL; if (setresgid(rgid, egid, sgid) < 0) return posix_error(); @@ -7731,14 +7830,11 @@ posix_getresuid (PyObject *self, PyObject *noargs) { uid_t ruid, euid, suid; - long l_ruid, l_euid, l_suid; if (getresuid(&ruid, &euid, &suid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. */ - l_ruid = ruid; - l_euid = euid; - l_suid = suid; - return Py_BuildValue("(lll)", l_ruid, l_euid, l_suid); + return Py_BuildValue("(NNN)", _PyLong_FromUid(ruid), + _PyLong_FromUid(euid), + _PyLong_FromUid(suid)); } #endif @@ -7751,14 +7847,11 @@ posix_getresgid (PyObject *self, PyObject *noargs) { uid_t rgid, egid, sgid; - long l_rgid, l_egid, l_sgid; if (getresgid(&rgid, &egid, &sgid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. 
*/ - l_rgid = rgid; - l_egid = egid; - l_sgid = sgid; - return Py_BuildValue("(lll)", l_rgid, l_egid, l_sgid); + return Py_BuildValue("(NNN)", _PyLong_FromGid(rgid), + _PyLong_FromGid(egid), + _PyLong_FromGid(sgid)); } #endif diff --git a/Modules/posixmodule.h b/Modules/posixmodule.h new file mode 100644 --- /dev/null +++ b/Modules/posixmodule.h @@ -0,0 +1,25 @@ +/* Declarations shared between the different POSIX-related modules */ + +#ifndef Py_POSIXMODULE_H +#define Py_POSIXMODULE_H +#ifdef __cplusplus +extern "C" { +#endif + +#ifdef HAVE_SYS_TYPES_H +#include +#endif + +#ifndef Py_LIMITED_API +#ifndef MS_WINDOWS +PyAPI_FUNC(PyObject *) _PyLong_FromUid(uid_t); +PyAPI_FUNC(PyObject *) _PyLong_FromGid(gid_t); +PyAPI_FUNC(int) _Py_Uid_Converter(PyObject *, void *); +PyAPI_FUNC(int) _Py_Gid_Converter(PyObject *, void *); +#endif /* MS_WINDOWS */ +#endif + +#ifdef __cplusplus +} +#endif +#endif /* !Py_POSIXMODULE_H */ diff --git a/Modules/pwdmodule.c b/Modules/pwdmodule.c --- a/Modules/pwdmodule.c +++ b/Modules/pwdmodule.c @@ -2,8 +2,8 @@ /* UNIX password file access module */ #include "Python.h" +#include "posixmodule.h" -#include #include static PyStructSequence_Field struct_pwd_type_fields[] = { @@ -74,8 +74,8 @@ #else SETS(setIndex++, p->pw_passwd); #endif - SETI(setIndex++, p->pw_uid); - SETI(setIndex++, p->pw_gid); + PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromUid(p->pw_uid)); + PyStructSequence_SET_ITEM(v, setIndex++, _PyLong_FromGid(p->pw_gid)); #ifdef __VMS SETS(setIndex++, ""); #else @@ -104,13 +104,21 @@ static PyObject * pwd_getpwuid(PyObject *self, PyObject *args) { - unsigned int uid; + uid_t uid; struct passwd *p; - if (!PyArg_ParseTuple(args, "I:getpwuid", &uid)) + if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + PyErr_Format(PyExc_KeyError, + "getpwuid(): uid not found"); return NULL; + } if ((p = getpwuid(uid)) == NULL) { + PyObject *uid_obj = _PyLong_FromUid(uid); + 
if (uid_obj == NULL) + return NULL; PyErr_Format(PyExc_KeyError, - "getpwuid(): uid not found: %d", uid); + "getpwuid(): uid not found: %S", uid_obj); + Py_DECREF(uid_obj); return NULL; } return mkpwent(p); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 08:36:46 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 08:36:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzQ1OTE6?= =?utf-8?q?_Uid_and_gid_values_larger_than_2**31_are_supported_now=2E?= Message-ID: <3Z4wfk31JqzQ7h@mail.python.org> http://hg.python.org/cpython/rev/035cbc654889 changeset: 82173:035cbc654889 branch: 2.7 parent: 82170:7cb403f8a865 user: Serhiy Storchaka date: Tue Feb 12 09:27:53 2013 +0200 summary: Issue #4591: Uid and gid values larger than 2**31 are supported now. files: Lib/test/test_posix.py | 29 +- Lib/test/test_pwd.py | 9 + Makefile.pre.in | 5 + Misc/NEWS | 2 + Modules/grpmodule.c | 20 +- Modules/posixmodule.c | 349 +++++++++++++++++----------- Modules/posixmodule.h | 25 ++ Modules/pwdmodule.c | 22 +- 8 files changed, 308 insertions(+), 153 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -222,10 +222,20 @@ if hasattr(posix, 'stat'): self.assertTrue(posix.stat(test_support.TESTFN)) - def _test_all_chown_common(self, chown_func, first_param): + def _test_all_chown_common(self, chown_func, first_param, stat_func): """Common code for chown, fchown and lchown tests.""" + def check_stat(): + if stat_func is not None: + stat = stat_func(first_param) + self.assertEqual(stat.st_uid, os.getuid()) + self.assertEqual(stat.st_gid, os.getgid()) # test a successful chown call chown_func(first_param, os.getuid(), os.getgid()) + check_stat() + chown_func(first_param, -1, os.getgid()) + check_stat() + chown_func(first_param, os.getuid(), -1) + check_stat() if os.getuid() == 0: try: @@ -245,8 +255,12 @@ 
"behavior") else: # non-root cannot chown to root, raises OSError - self.assertRaises(OSError, chown_func, - first_param, 0, 0) + self.assertRaises(OSError, chown_func, first_param, 0, 0) + check_stat() + self.assertRaises(OSError, chown_func, first_param, -1, 0) + check_stat() + self.assertRaises(OSError, chown_func, first_param, 0, -1) + check_stat() @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()") def test_chown(self): @@ -256,7 +270,8 @@ # re-create the file open(test_support.TESTFN, 'w').close() - self._test_all_chown_common(posix.chown, test_support.TESTFN) + self._test_all_chown_common(posix.chown, test_support.TESTFN, + getattr(posix, 'stat', None)) @unittest.skipUnless(hasattr(posix, 'fchown'), "test needs os.fchown()") def test_fchown(self): @@ -266,7 +281,8 @@ test_file = open(test_support.TESTFN, 'w') try: fd = test_file.fileno() - self._test_all_chown_common(posix.fchown, fd) + self._test_all_chown_common(posix.fchown, fd, + getattr(posix, 'fstat', None)) finally: test_file.close() @@ -275,7 +291,8 @@ os.unlink(test_support.TESTFN) # create a symlink os.symlink(_DUMMY_SYMLINK, test_support.TESTFN) - self._test_all_chown_common(posix.lchown, test_support.TESTFN) + self._test_all_chown_common(posix.lchown, test_support.TESTFN, + getattr(posix, 'lstat', None)) def test_chdir(self): if hasattr(posix, 'chdir'): diff --git a/Lib/test/test_pwd.py b/Lib/test/test_pwd.py --- a/Lib/test/test_pwd.py +++ b/Lib/test/test_pwd.py @@ -49,7 +49,9 @@ def test_errors(self): self.assertRaises(TypeError, pwd.getpwuid) + self.assertRaises(TypeError, pwd.getpwuid, 3.14) self.assertRaises(TypeError, pwd.getpwnam) + self.assertRaises(TypeError, pwd.getpwnam, 42) self.assertRaises(TypeError, pwd.getpwall, 42) # try to get some errors @@ -93,6 +95,13 @@ self.assertNotIn(fakeuid, byuids) self.assertRaises(KeyError, pwd.getpwuid, fakeuid) + # -1 shouldn't be a valid uid because it has a special meaning in many + # uid-related functions + 
self.assertRaises(KeyError, pwd.getpwuid, -1) + # should be out of uid_t range + self.assertRaises(KeyError, pwd.getpwuid, 2**128) + self.assertRaises(KeyError, pwd.getpwuid, -2**128) + def test_main(): test_support.run_unittest(PwdTest) diff --git a/Makefile.pre.in b/Makefile.pre.in --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -579,6 +579,11 @@ Modules/python.o: $(srcdir)/Modules/python.c $(MAINCC) -c $(PY_CFLAGS) -o $@ $(srcdir)/Modules/python.c +Modules/posixmodule.o: $(srcdir)/Modules/posixmodule.c $(srcdir)/Modules/posixmodule.h + +Modules/grpmodule.o: $(srcdir)/Modules/grpmodule.c $(srcdir)/Modules/posixmodule.h + +Modules/pwdmodule.o: $(srcdir)/Modules/pwdmodule.c $(srcdir)/Modules/posixmodule.h $(GRAMMAR_H): $(GRAMMAR_INPUT) $(PGENSRCS) @$(MKDIR_P) Include diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -204,6 +204,8 @@ - Issue #17052: unittest discovery should use self.testLoader. +- Issue #4591: Uid and gid values larger than 2**31 are supported now. + - Issue #17141: random.vonmisesvariate() no more hangs for large kappas. 
- Issue #17149: Fix random.vonmisesvariate to always return results in diff --git a/Modules/grpmodule.c b/Modules/grpmodule.c --- a/Modules/grpmodule.c +++ b/Modules/grpmodule.c @@ -3,8 +3,8 @@ #include "Python.h" #include "structseq.h" +#include "posixmodule.h" -#include #include static PyStructSequence_Field struct_group_type_fields[] = { @@ -70,7 +70,7 @@ Py_INCREF(Py_None); } #endif - SET(setIndex++, PyInt_FromLong((long) p->gr_gid)); + SET(setIndex++, _PyInt_FromGid(p->gr_gid)); SET(setIndex++, w); #undef SET @@ -86,17 +86,25 @@ grp_getgrgid(PyObject *self, PyObject *pyo_id) { PyObject *py_int_id; - unsigned int gid; + gid_t gid; struct group *p; py_int_id = PyNumber_Int(pyo_id); if (!py_int_id) - return NULL; - gid = PyInt_AS_LONG(py_int_id); + return NULL; + if (!_Py_Gid_Converter(py_int_id, &gid)) { + Py_DECREF(py_int_id); + return NULL; + } Py_DECREF(py_int_id); if ((p = getgrgid(gid)) == NULL) { - PyErr_Format(PyExc_KeyError, "getgrgid(): gid not found: %d", gid); + if (gid < 0) + PyErr_Format(PyExc_KeyError, + "getgrgid(): gid not found: %ld", (long)gid); + else + PyErr_Format(PyExc_KeyError, + "getgrgid(): gid not found: %lu", (unsigned long)gid); return NULL; } return mkgrent(p); diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -27,6 +27,9 @@ #include "Python.h" #include "structseq.h" +#ifndef MS_WINDOWS +#include "posixmodule.h" +#endif #if defined(__VMS) # include @@ -347,6 +350,134 @@ #endif #endif + +#ifndef MS_WINDOWS +PyObject * +_PyInt_FromUid(uid_t uid) +{ + if (uid <= LONG_MAX) + return PyInt_FromLong(uid); + return PyLong_FromUnsignedLong(uid); +} + +PyObject * +_PyInt_FromGid(gid_t gid) +{ + if (gid <= LONG_MAX) + return PyInt_FromLong(gid); + return PyLong_FromUnsignedLong(gid); +} + +int +_Py_Uid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + 
return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow < 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(uid_t *)p = (uid_t)-1; + } + else { + /* unsigned uid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((uid_t)uresult == (uid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(uid_t) < sizeof(long) && + (unsigned long)(uid_t)uresult != uresult) + goto OverflowUp; + *(uid_t *)p = (uid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "user id is less than minimum"); + return 0; + +OverflowUp: + PyErr_SetString(PyExc_OverflowError, + "user id is greater than maximum"); + return 0; +} + +int +_Py_Gid_Converter(PyObject *obj, void *p) +{ + int overflow; + long result; + if (PyFloat_Check(obj)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 0; + } + result = PyLong_AsLongAndOverflow(obj, &overflow); + if (overflow < 0) + goto OverflowDown; + if (!overflow && result == -1) { + /* error or -1 */ + if (PyErr_Occurred()) + return 0; + *(gid_t *)p = (gid_t)-1; + } + else { + /* unsigned gid_t */ + unsigned long uresult; + if (overflow > 0) { + uresult = PyLong_AsUnsignedLong(obj); + if (PyErr_Occurred()) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + goto OverflowUp; + return 0; + } + if ((gid_t)uresult == (gid_t)-1) + goto OverflowUp; + } else { + if (result < 0) + goto OverflowDown; + uresult = result; + } + if (sizeof(gid_t) < sizeof(long) && + (unsigned long)(gid_t)uresult != uresult) + goto OverflowUp; + *(gid_t *)p = (gid_t)uresult; + } + return 1; + +OverflowDown: + PyErr_SetString(PyExc_OverflowError, + "group id is less than minimum"); + return 0; + +OverflowUp: + 
PyErr_SetString(PyExc_OverflowError, + "group id is greater than maximum"); + return 0; +} +#endif /* MS_WINDOWS */ + + #if defined _MSC_VER && _MSC_VER >= 1400 /* Microsoft CRT in VS2005 and higher will verify that a filehandle is * valid and raise an assertion if it isn't. @@ -1306,8 +1437,13 @@ PyStructSequence_SET_ITEM(v, 2, PyInt_FromLong((long)st->st_dev)); #endif PyStructSequence_SET_ITEM(v, 3, PyInt_FromLong((long)st->st_nlink)); - PyStructSequence_SET_ITEM(v, 4, PyInt_FromLong((long)st->st_uid)); - PyStructSequence_SET_ITEM(v, 5, PyInt_FromLong((long)st->st_gid)); +#if defined(MS_WINDOWS) + PyStructSequence_SET_ITEM(v, 4, PyInt_FromLong(0)); + PyStructSequence_SET_ITEM(v, 5, PyInt_FromLong(0)); +#else + PyStructSequence_SET_ITEM(v, 4, _PyInt_FromUid(st->st_uid)); + PyStructSequence_SET_ITEM(v, 5, _PyInt_FromGid(st->st_gid)); +#endif #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 6, PyLong_FromLongLong((PY_LONG_LONG)st->st_size)); @@ -1884,14 +2020,16 @@ posix_chown(PyObject *self, PyObject *args) { char *path = NULL; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "etll:chown", + if (!PyArg_ParseTuple(args, "etO&O&:chown", Py_FileSystemDefaultEncoding, &path, - &uid, &gid)) + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; Py_BEGIN_ALLOW_THREADS - res = chown(path, (uid_t) uid, (gid_t) gid); + res = chown(path, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error_with_allocated_filename(path); @@ -1911,12 +2049,15 @@ posix_fchown(PyObject *self, PyObject *args) { int fd; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "ill:chown", &fd, &uid, &gid)) + if (!PyArg_ParseTuple(args, "iO&O&:fchown", &fd, + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; Py_BEGIN_ALLOW_THREADS - res = fchown(fd, (uid_t) uid, (gid_t) gid); + res = fchown(fd, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error(); @@ -1934,14 +2075,16 @@ 
posix_lchown(PyObject *self, PyObject *args) { char *path = NULL; - long uid, gid; + uid_t uid; + gid_t gid; int res; - if (!PyArg_ParseTuple(args, "etll:lchown", + if (!PyArg_ParseTuple(args, "etO&O&:lchown", Py_FileSystemDefaultEncoding, &path, - &uid, &gid)) + _Py_Uid_Converter, &uid, + _Py_Gid_Converter, &gid)) return NULL; Py_BEGIN_ALLOW_THREADS - res = lchown(path, (uid_t) uid, (gid_t) gid); + res = lchown(path, uid, gid); Py_END_ALLOW_THREADS if (res < 0) return posix_error_with_allocated_filename(path); @@ -3844,7 +3987,7 @@ static PyObject * posix_getegid(PyObject *self, PyObject *noargs) { - return PyInt_FromLong((long)getegid()); + return _PyInt_FromGid(getegid()); } #endif @@ -3857,7 +4000,7 @@ static PyObject * posix_geteuid(PyObject *self, PyObject *noargs) { - return PyInt_FromLong((long)geteuid()); + return _PyInt_FromUid(geteuid()); } #endif @@ -3870,7 +4013,7 @@ static PyObject * posix_getgid(PyObject *self, PyObject *noargs) { - return PyInt_FromLong((long)getgid()); + return _PyInt_FromGid(getgid()); } #endif @@ -3945,7 +4088,7 @@ if (result != NULL) { int i; for (i = 0; i < n; ++i) { - PyObject *o = PyInt_FromLong((long)alt_grouplist[i]); + PyObject *o = _PyInt_FromGid(alt_grouplist[i]); if (o == NULL) { Py_DECREF(result); result = NULL; @@ -3974,12 +4117,22 @@ posix_initgroups(PyObject *self, PyObject *args) { char *username; - long gid; - - if (!PyArg_ParseTuple(args, "sl:initgroups", &username, &gid)) - return NULL; - - if (initgroups(username, (gid_t) gid) == -1) +#ifdef __APPLE__ + int gid; +#else + gid_t gid; +#endif + +#ifdef __APPLE__ + if (!PyArg_ParseTuple(args, "si:initgroups", &username, + &gid)) +#else + if (!PyArg_ParseTuple(args, "sO&:initgroups", &username, + _Py_Gid_Converter, &gid)) +#endif + return NULL; + + if (initgroups(username, gid) == -1) return PyErr_SetFromErrno(PyExc_OSError); Py_INCREF(Py_None); @@ -4093,7 +4246,7 @@ static PyObject * posix_getuid(PyObject *self, PyObject *noargs) { - return 
PyInt_FromLong((long)getuid()); + return _PyInt_FromUid(getuid()); } #endif @@ -5740,15 +5893,9 @@ static PyObject * posix_setuid(PyObject *self, PyObject *args) { - long uid_arg; uid_t uid; - if (!PyArg_ParseTuple(args, "l:setuid", &uid_arg)) - return NULL; - uid = uid_arg; - if (uid != uid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setuid", _Py_Uid_Converter, &uid)) + return NULL; if (setuid(uid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -5765,15 +5912,9 @@ static PyObject * posix_seteuid (PyObject *self, PyObject *args) { - long euid_arg; uid_t euid; - if (!PyArg_ParseTuple(args, "l", &euid_arg)) - return NULL; - euid = euid_arg; - if (euid != euid_arg) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:seteuid", _Py_Uid_Converter, &euid)) + return NULL; if (seteuid(euid) < 0) { return posix_error(); } else { @@ -5791,15 +5932,9 @@ static PyObject * posix_setegid (PyObject *self, PyObject *args) { - long egid_arg; gid_t egid; - if (!PyArg_ParseTuple(args, "l", &egid_arg)) - return NULL; - egid = egid_arg; - if (egid != egid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setegid", _Py_Gid_Converter, &egid)) + return NULL; if (setegid(egid) < 0) { return posix_error(); } else { @@ -5817,23 +5952,11 @@ static PyObject * posix_setreuid (PyObject *self, PyObject *args) { - long ruid_arg, euid_arg; uid_t ruid, euid; - if (!PyArg_ParseTuple(args, "ll", &ruid_arg, &euid_arg)) - return NULL; - if (ruid_arg == -1) - ruid = (uid_t)-1; /* let the compiler choose how -1 fits */ - else - ruid = ruid_arg; /* otherwise, assign from our long */ - if (euid_arg == -1) - euid = (uid_t)-1; - else - euid = euid_arg; - if ((euid_arg != -1 && euid != euid_arg) || - (ruid_arg != -1 && ruid != ruid_arg)) { - PyErr_SetString(PyExc_OverflowError, "user id too big"); - 
return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setreuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid)) + return NULL; if (setreuid(ruid, euid) < 0) { return posix_error(); } else { @@ -5851,23 +5974,11 @@ static PyObject * posix_setregid (PyObject *self, PyObject *args) { - long rgid_arg, egid_arg; gid_t rgid, egid; - if (!PyArg_ParseTuple(args, "ll", &rgid_arg, &egid_arg)) - return NULL; - if (rgid_arg == -1) - rgid = (gid_t)-1; /* let the compiler choose how -1 fits */ - else - rgid = rgid_arg; /* otherwise, assign from our long */ - if (egid_arg == -1) - egid = (gid_t)-1; - else - egid = egid_arg; - if ((egid_arg != -1 && egid != egid_arg) || - (rgid_arg != -1 && rgid != rgid_arg)) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&O&:setregid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid)) + return NULL; if (setregid(rgid, egid) < 0) { return posix_error(); } else { @@ -5885,15 +5996,9 @@ static PyObject * posix_setgid(PyObject *self, PyObject *args) { - long gid_arg; gid_t gid; - if (!PyArg_ParseTuple(args, "l:setgid", &gid_arg)) - return NULL; - gid = gid_arg; - if (gid != gid_arg) { - PyErr_SetString(PyExc_OverflowError, "group id too big"); - return NULL; - } + if (!PyArg_ParseTuple(args, "O&:setgid", _Py_Gid_Converter, &gid)) + return NULL; if (setgid(gid) < 0) return posix_error(); Py_INCREF(Py_None); @@ -5926,35 +6031,13 @@ elem = PySequence_GetItem(groups, i); if (!elem) return NULL; - if (!PyInt_Check(elem)) { - if (!PyLong_Check(elem)) { - PyErr_SetString(PyExc_TypeError, - "groups must be integers"); - Py_DECREF(elem); - return NULL; - } else { - unsigned long x = PyLong_AsUnsignedLong(elem); - if (PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); - Py_DECREF(elem); - return NULL; - } - grouplist[i] = x; - /* read back to see if it fits in gid_t */ - if (grouplist[i] != x) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); 
- Py_DECREF(elem); - return NULL; - } - } + if (!PyInt_Check(elem) && !PyLong_Check(elem)) { + PyErr_SetString(PyExc_TypeError, + "groups must be integers"); + Py_DECREF(elem); + return NULL; } else { - long x = PyInt_AsLong(elem); - grouplist[i] = x; - if (grouplist[i] != x) { - PyErr_SetString(PyExc_TypeError, - "group id too big"); + if (!_Py_Gid_Converter(elem, &grouplist[i])) { Py_DECREF(elem); return NULL; } @@ -8580,9 +8663,11 @@ static PyObject* posix_setresuid (PyObject *self, PyObject *args) { - /* We assume uid_t is no larger than a long. */ - long ruid, euid, suid; - if (!PyArg_ParseTuple(args, "lll", &ruid, &euid, &suid)) + uid_t ruid, euid, suid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresuid", + _Py_Uid_Converter, &ruid, + _Py_Uid_Converter, &euid, + _Py_Uid_Converter, &suid)) return NULL; if (setresuid(ruid, euid, suid) < 0) return posix_error(); @@ -8598,9 +8683,11 @@ static PyObject* posix_setresgid (PyObject *self, PyObject *args) { - /* We assume uid_t is no larger than a long. */ - long rgid, egid, sgid; - if (!PyArg_ParseTuple(args, "lll", &rgid, &egid, &sgid)) + gid_t rgid, egid, sgid; + if (!PyArg_ParseTuple(args, "O&O&O&:setresgid", + _Py_Gid_Converter, &rgid, + _Py_Gid_Converter, &egid, + _Py_Gid_Converter, &sgid)) return NULL; if (setresgid(rgid, egid, sgid) < 0) return posix_error(); @@ -8617,14 +8704,11 @@ posix_getresuid (PyObject *self, PyObject *noargs) { uid_t ruid, euid, suid; - long l_ruid, l_euid, l_suid; if (getresuid(&ruid, &euid, &suid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. 
*/ - l_ruid = ruid; - l_euid = euid; - l_suid = suid; - return Py_BuildValue("(lll)", l_ruid, l_euid, l_suid); + return Py_BuildValue("(NNN)", _PyInt_FromUid(ruid), + _PyInt_FromUid(euid), + _PyInt_FromUid(suid)); } #endif @@ -8637,14 +8721,11 @@ posix_getresgid (PyObject *self, PyObject *noargs) { uid_t rgid, egid, sgid; - long l_rgid, l_egid, l_sgid; if (getresgid(&rgid, &egid, &sgid) < 0) return posix_error(); - /* Force the values into long's as we don't know the size of uid_t. */ - l_rgid = rgid; - l_egid = egid; - l_sgid = sgid; - return Py_BuildValue("(lll)", l_rgid, l_egid, l_sgid); + return Py_BuildValue("(NNN)", _PyInt_FromGid(rgid), + _PyInt_FromGid(egid), + _PyInt_FromGid(sgid)); } #endif diff --git a/Modules/posixmodule.h b/Modules/posixmodule.h new file mode 100644 --- /dev/null +++ b/Modules/posixmodule.h @@ -0,0 +1,25 @@ +/* Declarations shared between the different POSIX-related modules */ + +#ifndef Py_POSIXMODULE_H +#define Py_POSIXMODULE_H +#ifdef __cplusplus +extern "C" { +#endif + +#ifdef HAVE_SYS_TYPES_H +#include +#endif + +#ifndef Py_LIMITED_API +#ifndef MS_WINDOWS +PyAPI_FUNC(PyObject *) _PyInt_FromUid(uid_t); +PyAPI_FUNC(PyObject *) _PyInt_FromGid(gid_t); +PyAPI_FUNC(int) _Py_Uid_Converter(PyObject *, void *); +PyAPI_FUNC(int) _Py_Gid_Converter(PyObject *, void *); +#endif /* MS_WINDOWS */ +#endif + +#ifdef __cplusplus +} +#endif +#endif /* !Py_POSIXMODULE_H */ diff --git a/Modules/pwdmodule.c b/Modules/pwdmodule.c --- a/Modules/pwdmodule.c +++ b/Modules/pwdmodule.c @@ -3,8 +3,8 @@ #include "Python.h" #include "structseq.h" +#include "posixmodule.h" -#include #include static PyStructSequence_Field struct_pwd_type_fields[] = { @@ -73,8 +73,8 @@ #else SETS(setIndex++, p->pw_passwd); #endif - SETI(setIndex++, p->pw_uid); - SETI(setIndex++, p->pw_gid); + PyStructSequence_SET_ITEM(v, setIndex++, _PyInt_FromUid(p->pw_uid)); + PyStructSequence_SET_ITEM(v, setIndex++, _PyInt_FromGid(p->pw_gid)); #ifdef __VMS SETS(setIndex++, ""); #else @@ -103,13 
+103,21 @@ static PyObject * pwd_getpwuid(PyObject *self, PyObject *args) { - unsigned int uid; + uid_t uid; struct passwd *p; - if (!PyArg_ParseTuple(args, "I:getpwuid", &uid)) + if (!PyArg_ParseTuple(args, "O&:getpwuid", _Py_Uid_Converter, &uid)) { + if (PyErr_ExceptionMatches(PyExc_OverflowError)) + PyErr_Format(PyExc_KeyError, + "getpwuid(): uid not found"); return NULL; + } if ((p = getpwuid(uid)) == NULL) { - PyErr_Format(PyExc_KeyError, - "getpwuid(): uid not found: %d", uid); + if (uid < 0) + PyErr_Format(PyExc_KeyError, + "getpwuid(): uid not found: %ld", (long)uid); + else + PyErr_Format(PyExc_KeyError, + "getpwuid(): uid not found: %lu", (unsigned long)uid); return NULL; } return mkpwent(p); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 08:36:47 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 08:36:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Null_merge?= Message-ID: <3Z4wfl5wNqzQ83@mail.python.org> http://hg.python.org/cpython/rev/574410153e73 changeset: 82174:574410153e73 branch: 3.3 parent: 82166:a0983e46feb1 parent: 82172:3893ab574c55 user: Serhiy Storchaka date: Tue Feb 12 09:30:03 2013 +0200 summary: Null merge files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 08:36:49 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 08:36:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Null_merge?= Message-ID: <3Z4wfn1cs9zQ2D@mail.python.org> http://hg.python.org/cpython/rev/a2fbfb9cd816 changeset: 82175:a2fbfb9cd816 parent: 82168:cb876235f29d parent: 82174:574410153e73 user: Serhiy Storchaka date: Tue Feb 12 09:30:55 2013 +0200 summary: Null merge files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 14:31:16 2013 From: python-checkins at 
python.org (giampaolo.rodola) Date: Tue, 12 Feb 2013 14:31:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_profile/cProfile=3A_add_te?= =?utf-8?q?sts_for_run=28=29_and_runctx=28=29_functions?= Message-ID: <3Z54Wm5P59zQ55@mail.python.org> http://hg.python.org/cpython/rev/4e22d9c58ac4 changeset: 82176:4e22d9c58ac4 user: Giampaolo Rodola' date: Tue Feb 12 14:31:06 2013 +0100 summary: profile/cProfile: add tests for run() and runctx() functions files: Lib/test/test_cprofile.py | 2 + Lib/test/test_profile.py | 29 ++++++++++++++++++++++++++- 2 files changed, 30 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_cprofile.py b/Lib/test/test_cprofile.py --- a/Lib/test/test_cprofile.py +++ b/Lib/test/test_cprofile.py @@ -6,9 +6,11 @@ # rip off all interesting stuff from test_profile import cProfile from test.test_profile import ProfileTest, regenerate_expected_output +from test.profilee import testfunc class CProfileTest(ProfileTest): profilerclass = cProfile.Profile + profilermodule = cProfile expected_max_output = "{built-in method max}" def get_expected_output(self): diff --git a/Lib/test/test_profile.py b/Lib/test/test_profile.py --- a/Lib/test/test_profile.py +++ b/Lib/test/test_profile.py @@ -3,9 +3,11 @@ import sys import pstats import unittest +import os from difflib import unified_diff from io import StringIO -from test.support import run_unittest +from test.support import TESTFN, run_unittest, unlink +from contextlib import contextmanager import profile from test.profilee import testfunc, timer @@ -14,9 +16,13 @@ class ProfileTest(unittest.TestCase): profilerclass = profile.Profile + profilermodule = profile methodnames = ['print_stats', 'print_callers', 'print_callees'] expected_max_output = ':0(max)' + def tearDown(self): + unlink(TESTFN) + def get_expected_output(self): return _ProfileOutput @@ -74,6 +80,19 @@ self.assertIn(self.expected_max_output, res, "Profiling {0!r} didn't report max:\n{1}".format(stmt, res)) + def test_run(self): + 
with silent(): + self.profilermodule.run("testfunc()") + self.profilermodule.run("testfunc()", filename=TESTFN) + self.assertTrue(os.path.exists(TESTFN)) + + def test_runctx(self): + with silent(): + self.profilermodule.runctx("testfunc()", globals(), locals()) + self.profilermodule.runctx("testfunc()", globals(), locals(), + filename=TESTFN) + self.assertTrue(os.path.exists(TESTFN)) + def regenerate_expected_output(filename, cls): filename = filename.rstrip('co') @@ -95,6 +114,14 @@ method, results[i+1])) f.write('\nif __name__ == "__main__":\n main()\n') + at contextmanager +def silent(): + stdout = sys.stdout + try: + sys.stdout = StringIO() + yield + finally: + sys.stdout = stdout def test_main(): run_unittest(ProfileTest) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 15:14:27 2013 From: python-checkins at python.org (giampaolo.rodola) Date: Tue, 12 Feb 2013 15:14:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_modernize_some_modules=27_?= =?utf-8?q?code_by_replacing_OSError-=3EENOENT/ENOTDIR/EPERM/EEXIST?= Message-ID: <3Z55Tb1clXzPsB@mail.python.org> http://hg.python.org/cpython/rev/79fd8659137b changeset: 82177:79fd8659137b user: Giampaolo Rodola' date: Tue Feb 12 15:14:17 2013 +0100 summary: modernize some modules' code by replacing OSError->ENOENT/ENOTDIR/EPERM/EEXIST occurrences with the corresponding pep-3151 exceptions (FileNotFoundError, NotADirectoryError, etc.) 
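PEP 3151 (Python 3.3) maps common errno values onto dedicated ``OSError`` subclasses, which is what lets this commit delete the ``e.errno != errno.ENOENT`` checks throughout the stdlib. A minimal before/after sketch of the pattern (illustrative only, not code taken from the patch):

```python
import errno
import os

def unlink_legacy(path):
    # Pre-3.3 style: catch the broad OSError and inspect errno by hand.
    try:
        os.unlink(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

def unlink_modern(path):
    # PEP 3151 style: FileNotFoundError is the OSError subclass for ENOENT.
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass
```

Both versions swallow only a missing file; any other failure (e.g. ``EACCES``) still propagates, since ``FileNotFoundError`` is a strict subclass of ``OSError``.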
files: Lib/ctypes/util.py | 5 +-- Lib/logging/handlers.py | 7 +--- Lib/mailbox.py | 41 +++++++++------------------- Lib/os.py | 5 +-- Lib/smtpd.py | 3 +- Lib/test/support.py | 17 ++++------- 6 files changed, 27 insertions(+), 51 deletions(-) diff --git a/Lib/ctypes/util.py b/Lib/ctypes/util.py --- a/Lib/ctypes/util.py +++ b/Lib/ctypes/util.py @@ -102,9 +102,8 @@ finally: try: os.unlink(ccout) - except OSError as e: - if e.errno != errno.ENOENT: - raise + except FileNotFoundError: + pass if rv == 10: raise OSError('gcc or cc command not found') res = re.search(expr, trace) diff --git a/Lib/logging/handlers.py b/Lib/logging/handlers.py --- a/Lib/logging/handlers.py +++ b/Lib/logging/handlers.py @@ -438,11 +438,8 @@ try: # stat the file by path, checking for existence sres = os.stat(self.baseFilename) - except OSError as err: - if err.errno == errno.ENOENT: - sres = None - else: - raise + except FileNotFoundError: + sres = None # compare file system stat with that of our stream file handle if not sres or sres[ST_DEV] != self.dev or sres[ST_INO] != self.ino: if self.stream is not None: diff --git a/Lib/mailbox.py b/Lib/mailbox.py --- a/Lib/mailbox.py +++ b/Lib/mailbox.py @@ -334,11 +334,8 @@ # This overrides an inapplicable implementation in the superclass. try: self.remove(key) - except KeyError: + except (KeyError, FileNotFoundError): pass - except OSError as e: - if e.errno != errno.ENOENT: - raise def __setitem__(self, key, message): """Replace the keyed message; raise KeyError if it doesn't exist.""" @@ -493,16 +490,12 @@ path = os.path.join(self._path, 'tmp', uniq) try: os.stat(path) - except OSError as e: - if e.errno == errno.ENOENT: - Maildir._count += 1 - try: - return _create_carefully(path) - except OSError as e: - if e.errno != errno.EEXIST: - raise - else: - raise + except FileNotFoundError: + Maildir._count += 1 + try: + return _create_carefully(path) + except FileExistsError: + pass # Fall through to here if stat succeeded or open raised EEXIST. 
raise ExternalClashError('Name clash prevented file creation: %s' % @@ -700,12 +693,9 @@ os.chmod(new_file.name, mode) try: os.rename(new_file.name, self._path) - except OSError as e: - if e.errno == errno.EEXIST: - os.remove(self._path) - os.rename(new_file.name, self._path) - else: - raise + except FileExistsError: + os.remove(self._path) + os.rename(new_file.name, self._path) self._file = open(self._path, 'rb+') self._toc = new_toc self._pending = False @@ -2081,13 +2071,10 @@ else: os.rename(pre_lock.name, f.name + '.lock') dotlock_done = True - except OSError as e: - if e.errno == errno.EEXIST: - os.remove(pre_lock.name) - raise ExternalClashError('dot lock unavailable: %s' % - f.name) - else: - raise + except FileExistsError: + os.remove(pre_lock.name) + raise ExternalClashError('dot lock unavailable: %s' % + f.name) except: if fcntl: fcntl.lockf(f, fcntl.LOCK_UN) diff --git a/Lib/os.py b/Lib/os.py --- a/Lib/os.py +++ b/Lib/os.py @@ -232,10 +232,9 @@ if head and tail and not path.exists(head): try: makedirs(head, mode, exist_ok) - except OSError as e: + except FileExistsError: # be happy if someone already created the path - if e.errno != errno.EEXIST: - raise + pass cdir = curdir if isinstance(tail, bytes): cdir = bytes(curdir, 'ASCII') diff --git a/Lib/smtpd.py b/Lib/smtpd.py --- a/Lib/smtpd.py +++ b/Lib/smtpd.py @@ -850,8 +850,7 @@ nobody = pwd.getpwnam('nobody')[2] try: os.setuid(nobody) - except OSError as e: - if e.errno != errno.EPERM: raise + except PermissionError: print('Cannot setuid "nobody"; try running with -n option.', file=sys.stderr) sys.exit(1) try: diff --git a/Lib/test/support.py b/Lib/test/support.py --- a/Lib/test/support.py +++ b/Lib/test/support.py @@ -291,25 +291,20 @@ def unlink(filename): try: _unlink(filename) - except OSError as error: - # The filename need not exist. 
- if error.errno not in (errno.ENOENT, errno.ENOTDIR): - raise + except (FileNotFoundError, NotADirectoryError): + pass def rmdir(dirname): try: _rmdir(dirname) - except OSError as error: - # The directory need not exist. - if error.errno != errno.ENOENT: - raise + except FileNotFoundError: + pass def rmtree(path): try: _rmtree(path) - except OSError as error: - if error.errno != errno.ENOENT: - raise + except FileNotFoundError: + pass def make_legacy_pyc(source): """Move a PEP 3147 pyc/pyo file to its legacy pyc/pyo location. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 15:23:30 2013 From: python-checkins at python.org (giampaolo.rodola) Date: Tue, 12 Feb 2013 15:23:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_fix_NameError_exception_in?= =?utf-8?q?_test=5Fprofile?= Message-ID: <3Z55h20vn1zPp5@mail.python.org> http://hg.python.org/cpython/rev/c87e62ec6e61 changeset: 82178:c87e62ec6e61 user: Giampaolo Rodola' date: Tue Feb 12 15:23:21 2013 +0100 summary: fix NameError exception in test_profile files: Lib/test/test_profile.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_profile.py b/Lib/test/test_profile.py --- a/Lib/test/test_profile.py +++ b/Lib/test/test_profile.py @@ -82,8 +82,8 @@ def test_run(self): with silent(): - self.profilermodule.run("testfunc()") - self.profilermodule.run("testfunc()", filename=TESTFN) + self.profilermodule.run("int('1')") + self.profilermodule.run("int('1')", filename=TESTFN) self.assertTrue(os.path.exists(TESTFN)) def test_runctx(self): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 15:33:37 2013 From: python-checkins at python.org (matthias.klose) Date: Tue, 12 Feb 2013 15:33:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_-_Issue_=2317192=3A_Import?= =?utf-8?b?IGxpYmZmaS0zLjAuMTIu?= Message-ID: <3Z55vj3MdQzQBN@mail.python.org> 
http://hg.python.org/cpython/rev/7727be7613f9 changeset: 82179:7727be7613f9 user: doko at ubuntu.com date: Tue Feb 12 15:33:16 2013 +0100 summary: - Issue #17192: Import libffi-3.0.12. files: Misc/NEWS | 2 + Modules/_ctypes/libffi.diff | 118 +- Modules/_ctypes/libffi/ChangeLog | 475 +- Modules/_ctypes/libffi/ChangeLog.libffi | 4 +- Modules/_ctypes/libffi/Makefile.am | 118 +- Modules/_ctypes/libffi/Makefile.in | 820 +- Modules/_ctypes/libffi/README | 137 +- Modules/_ctypes/libffi/aclocal.m4 | 365 +- Modules/_ctypes/libffi/build-ios.sh | 67 + Modules/_ctypes/libffi/config.guess | 82 +- Modules/_ctypes/libffi/config.sub | 103 +- Modules/_ctypes/libffi/configure | 1358 +- Modules/_ctypes/libffi/configure.ac | 160 +- Modules/_ctypes/libffi/doc/libffi.info | Bin Modules/_ctypes/libffi/doc/stamp-vti | 8 +- Modules/_ctypes/libffi/doc/version.texi | 8 +- Modules/_ctypes/libffi/fficonfig.h.in | 3 + Modules/_ctypes/libffi/fficonfig.py.in | 1 + Modules/_ctypes/libffi/include/Makefile.in | 71 +- Modules/_ctypes/libffi/include/ffi_common.h | 2 +- Modules/_ctypes/libffi/libffi.xcodeproj/project.pbxproj | 16 - Modules/_ctypes/libffi/libtool-ldflags | 106 + Modules/_ctypes/libffi/libtool-version | 2 +- Modules/_ctypes/libffi/ltmain.sh | 32 +- Modules/_ctypes/libffi/m4/ax_cc_maxopt.m4 | 7 +- Modules/_ctypes/libffi/m4/ax_cflags_warn_all.m4 | 3 +- Modules/_ctypes/libffi/m4/ax_gcc_archflag.m4 | 44 +- Modules/_ctypes/libffi/m4/libtool.m4 | 45 +- Modules/_ctypes/libffi/man/Makefile.in | 69 +- Modules/_ctypes/libffi/man/ffi_prep_cif.3 | 4 +- Modules/_ctypes/libffi/mdate-sh | 0 Modules/_ctypes/libffi/src/aarch64/ffi.c | 1076 ++ Modules/_ctypes/libffi/src/aarch64/ffitarget.h | 59 + Modules/_ctypes/libffi/src/aarch64/sysv.S | 307 + Modules/_ctypes/libffi/src/arm/gentramp.sh | 0 Modules/_ctypes/libffi/src/bfin/ffi.c | 195 + Modules/_ctypes/libffi/src/bfin/ffitarget.h | 43 + Modules/_ctypes/libffi/src/bfin/sysv.S | 177 + Modules/_ctypes/libffi/src/closures.c | 27 + 
Modules/_ctypes/libffi/src/m68k/ffi.c | 10 + Modules/_ctypes/libffi/src/m68k/sysv.S | 45 +- Modules/_ctypes/libffi/src/microblaze/ffi.c | 321 + Modules/_ctypes/libffi/src/microblaze/ffitarget.h | 53 + Modules/_ctypes/libffi/src/microblaze/sysv.S | 302 + Modules/_ctypes/libffi/src/mips/ffi.c | 11 +- Modules/_ctypes/libffi/src/moxie/eabi.S | 137 +- Modules/_ctypes/libffi/src/moxie/ffi.c | 82 +- Modules/_ctypes/libffi/src/moxie/ffitarget.h | 52 + Modules/_ctypes/libffi/src/powerpc/aix.S | 12 +- Modules/_ctypes/libffi/src/powerpc/ffi.c | 54 +- Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c | 8 +- Modules/_ctypes/libffi/src/powerpc/linux64.S | 21 +- Modules/_ctypes/libffi/src/powerpc/linux64_closure.S | 20 +- Modules/_ctypes/libffi/src/powerpc/sysv.S | 21 +- Modules/_ctypes/libffi/src/prep_cif.c | 21 + Modules/_ctypes/libffi/src/s390/ffi.c | 3 +- Modules/_ctypes/libffi/src/sparc/ffi.c | 16 +- Modules/_ctypes/libffi/src/sparc/v8.S | 35 +- Modules/_ctypes/libffi/src/tile/ffi.c | 355 + Modules/_ctypes/libffi/src/tile/ffitarget.h | 65 + Modules/_ctypes/libffi/src/tile/tile.S | 360 + Modules/_ctypes/libffi/src/x86/ffi.c | 2 +- Modules/_ctypes/libffi/src/x86/ffi64.c | 44 +- Modules/_ctypes/libffi/src/x86/ffitarget.h | 3 +- Modules/_ctypes/libffi/src/x86/sysv.S | 17 +- Modules/_ctypes/libffi/src/x86/unix64.S | 10 +- Modules/_ctypes/libffi/src/xtensa/ffi.c | 298 + Modules/_ctypes/libffi/src/xtensa/ffitarget.h | 53 + Modules/_ctypes/libffi/src/xtensa/sysv.S | 253 + Modules/_ctypes/libffi/testsuite/Makefile.am | 145 +- Modules/_ctypes/libffi/testsuite/Makefile.in | 195 +- Modules/_ctypes/libffi/testsuite/lib/libffi.exp | 19 + Modules/_ctypes/libffi/testsuite/libffi.call/a.out | Bin Modules/_ctypes/libffi/testsuite/libffi.call/call.exp | 19 +- Modules/_ctypes/libffi/testsuite/libffi.call/cls_double_va.c | 8 +- Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble.c | 4 +- Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble_va.c | 8 +- 
Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer.c | 2 +- Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer_stack.c | 14 +- Modules/_ctypes/libffi/testsuite/libffi.call/cls_struct_va1.c | 114 + Modules/_ctypes/libffi/testsuite/libffi.call/cls_uchar_va.c | 44 + Modules/_ctypes/libffi/testsuite/libffi.call/cls_uint_va.c | 45 + Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulong_va.c | 45 + Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulonglong.c | 10 +- Modules/_ctypes/libffi/testsuite/libffi.call/cls_ushort_va.c | 44 + Modules/_ctypes/libffi/testsuite/libffi.call/ffitest.h | 44 +- Modules/_ctypes/libffi/testsuite/libffi.call/float_va.c | 16 +- Modules/_ctypes/libffi/testsuite/libffi.call/huge_struct.c | 9 +- Modules/_ctypes/libffi/testsuite/libffi.call/many2.c | 5 +- Modules/_ctypes/libffi/testsuite/libffi.call/negint.c | 1 - Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct1.c | 2 +- Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct11.c | 121 + Modules/_ctypes/libffi/testsuite/libffi.call/return_dbl.c | 1 + Modules/_ctypes/libffi/testsuite/libffi.call/return_uc.c | 2 +- Modules/_ctypes/libffi/testsuite/libffi.call/stret_large.c | 4 +- Modules/_ctypes/libffi/testsuite/libffi.call/stret_large2.c | 4 +- Modules/_ctypes/libffi/testsuite/libffi.call/uninitialized.c | 61 + Modules/_ctypes/libffi/testsuite/libffi.call/va_1.c | 196 + Modules/_ctypes/libffi/testsuite/libffi.call/va_struct1.c | 121 + Modules/_ctypes/libffi/testsuite/libffi.call/va_struct2.c | 123 + Modules/_ctypes/libffi/testsuite/libffi.call/va_struct3.c | 125 + Modules/_ctypes/libffi/testsuite/libffi.special/ffitestcxx.h | 41 - Modules/_ctypes/libffi/testsuite/libffi.special/special.exp | 12 +- Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest.cc | 1 + Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest_ffi_call.cc | 1 + Modules/_ctypes/libffi/texinfo.tex | 4999 +++++++-- 106 files changed, 12299 insertions(+), 3104 deletions(-) diff --git a/Misc/NEWS 
b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -806,6 +806,8 @@ Extension Modules ----------------- +- Issue #17192: Import libffi-3.0.12. + - Issue #12268: The io module file object write methods no longer abort early when one of its write system calls is interrupted (EINTR). diff --git a/Modules/_ctypes/libffi.diff b/Modules/_ctypes/libffi.diff --- a/Modules/_ctypes/libffi.diff +++ b/Modules/_ctypes/libffi.diff @@ -1,5 +1,6 @@ ---- libffi.orig/configure.ac 2012-04-12 05:10:51.000000000 +0200 -+++ libffi/configure.ac 2012-06-26 15:42:42.477498938 +0200 +diff -urN libffi.orig/configure.ac libffi/configure.ac +--- libffi.orig/configure.ac 2013-02-11 20:24:24.000000000 +0100 ++++ libffi/configure.ac 2013-02-12 15:05:46.209844321 +0100 @@ -1,4 +1,7 @@ dnl Process this with autoconf to create configure +# @@ -8,17 +9,18 @@ AC_PREREQ(2.68) -@@ -114,6 +117,9 @@ - i?86-*-solaris2.1[[0-9]]*) - TARGET=X86_64; TARGETDIR=x86 +@@ -146,6 +149,10 @@ + fi ;; + + i*86-*-nto-qnx*) -+ TARGET=X86; TARGETDIR=x86 -+ ;; - i?86-*-*) - TARGET=X86_64; TARGETDIR=x86 ++ TARGET=X86; TARGETDIR=x86 ++ ;; ++ + x86_64-*-darwin*) + TARGET=X86_DARWIN; TARGETDIR=x86 ;; -@@ -131,12 +137,12 @@ +@@ -200,12 +207,12 @@ ;; mips-sgi-irix5.* | mips-sgi-irix6.* | mips*-*-rtems*) @@ -32,17 +34,17 @@ + TARGET=MIPS_IRIX; TARGETDIR=mips ;; - moxie-*-*) -@@ -212,7 +218,7 @@ + powerpc*-*-linux* | powerpc-*-sysv*) +@@ -265,7 +272,7 @@ AC_MSG_ERROR(["libffi has not been ported to $host."]) fi -AM_CONDITIONAL(MIPS, test x$TARGET = xMIPS) +AM_CONDITIONAL(MIPS,[expr x$TARGET : 'xMIPS' > /dev/null]) + AM_CONDITIONAL(BFIN, test x$TARGET = xBFIN) AM_CONDITIONAL(SPARC, test x$TARGET = xSPARC) AM_CONDITIONAL(X86, test x$TARGET = xX86) - AM_CONDITIONAL(X86_FREEBSD, test x$TARGET = xX86_FREEBSD) -@@ -499,4 +505,8 @@ +@@ -562,4 +569,8 @@ AC_CONFIG_FILES(include/Makefile include/ffi.h Makefile testsuite/Makefile man/Makefile libffi.pc) @@ -51,9 +53,70 @@ +AC_CONFIG_FILES(fficonfig.py) + AC_OUTPUT ---- 
libffi-3.0.11/fficonfig.py.in 1970-01-01 01:00:00.000000000 +0100 -+++ libffi/fficonfig.py.in 2012-03-15 01:04:27.000000000 +0100 -@@ -0,0 +1,35 @@ +diff -urN libffi.orig/configure libffi/configure +--- libffi.orig/configure 2013-02-11 20:24:24.000000000 +0100 ++++ libffi/configure 2013-02-12 15:11:42.353853081 +0100 +@@ -13366,6 +13366,10 @@ + fi + ;; + ++ i*86-*-nto-qnx*) ++ TARGET=X86; TARGETDIR=x86 ++ ;; ++ + x86_64-*-darwin*) + TARGET=X86_DARWIN; TARGETDIR=x86 + ;; +@@ -13420,12 +13424,12 @@ + ;; + + mips-sgi-irix5.* | mips-sgi-irix6.* | mips*-*-rtems*) +- TARGET=MIPS; TARGETDIR=mips ++ TARGET=MIPS_IRIX; TARGETDIR=mips + ;; + mips*-*-linux* | mips*-*-openbsd*) + # Support 128-bit long double for NewABI. + HAVE_LONG_DOUBLE='defined(__mips64)' +- TARGET=MIPS; TARGETDIR=mips ++ TARGET=MIPS_IRIX; TARGETDIR=mips + ;; + + powerpc*-*-linux* | powerpc-*-sysv*) +@@ -13485,7 +13489,7 @@ + as_fn_error $? "\"libffi has not been ported to $host.\"" "$LINENO" 5 + fi + +- if test x$TARGET = xMIPS; then ++ if expr x$TARGET : 'xMIPS' > /dev/null; then + MIPS_TRUE= + MIPS_FALSE='#' + else +@@ -14848,6 +14852,12 @@ + ac_config_files="$ac_config_files include/Makefile include/ffi.h Makefile testsuite/Makefile man/Makefile libffi.pc" + + ++ac_config_links="$ac_config_links include/ffi_common.h:include/ffi_common.h" ++ ++ ++ac_config_files="$ac_config_files fficonfig.py" ++ ++ + cat >confcache <<\_ACEOF + # This file is a shell script that caches the results of configure + # tests run on this system so they can be shared between configure +@@ -16029,6 +16039,8 @@ + "testsuite/Makefile") CONFIG_FILES="$CONFIG_FILES testsuite/Makefile" ;; + "man/Makefile") CONFIG_FILES="$CONFIG_FILES man/Makefile" ;; + "libffi.pc") CONFIG_FILES="$CONFIG_FILES libffi.pc" ;; ++ "include/ffi_common.h") CONFIG_LINKS="$CONFIG_LINKS include/ffi_common.h:include/ffi_common.h" ;; ++ "fficonfig.py") CONFIG_FILES="$CONFIG_FILES fficonfig.py" ;; + + *) as_fn_error $? 
"invalid argument: \`$ac_config_target'" "$LINENO" 5;; + esac +diff -urN libffi.orig/fficonfig.py.in libffi/fficonfig.py.in +--- libffi.orig/fficonfig.py.in 1970-01-01 01:00:00.000000000 +0100 ++++ libffi/fficonfig.py.in 2013-02-12 15:11:24.589852644 +0100 +@@ -0,0 +1,36 @@ +ffi_sources = """ +src/prep_cif.c +src/closures.c @@ -67,6 +130,7 @@ + 'X86_FREEBSD': ['src/x86/ffi.c', 'src/x86/freebsd.S'], + 'X86_WIN32': ['src/x86/ffi.c', 'src/x86/win32.S'], + 'SPARC': ['src/sparc/ffi.c', 'src/sparc/v8.S', 'src/sparc/v9.S'], ++ 'AARCH64': ['src/aarch64/ffi.c', 'src/aarch64/sysv.S'], + 'ALPHA': ['src/alpha/ffi.c', 'src/alpha/osf.S'], + 'IA64': ['src/ia64/ffi.c', 'src/ia64/unix.S'], + 'M32R': ['src/m32r/sysv.S', 'src/m32r/ffi.c'], @@ -89,9 +153,9 @@ +ffi_sources += ffi_platforms['@TARGET@'] + +ffi_cflags = '@CFLAGS@' -diff -urN libffi-3.0.11/src/dlmalloc.c libffi/src/dlmalloc.c ---- libffi-3.0.11/src/dlmalloc.c 2012-04-12 04:46:06.000000000 +0200 -+++ libffi/src/dlmalloc.c 2012-06-26 15:15:58.949547461 +0200 +diff -urN libffi.orig/src/dlmalloc.c libffi/src/dlmalloc.c +--- libffi.orig/src/dlmalloc.c 2013-02-11 20:24:18.000000000 +0100 ++++ libffi/src/dlmalloc.c 2013-02-12 15:15:12.113858241 +0100 @@ -457,6 +457,11 @@ #define LACKS_ERRNO_H #define MALLOC_FAILURE_ACTION @@ -104,17 +168,3 @@ #endif /* WIN32 */ #ifdef __OS2__ -diff -urN libffi-3.0.11/src/sparc/v8.S libffi/src/sparc/v8.S ---- libffi-3.0.11/src/sparc/v8.S 2012-04-12 04:46:06.000000000 +0200 -+++ libffi/src/sparc/v8.S 2011-03-13 05:15:04.000000000 +0100 -@@ -213,6 +213,10 @@ - be,a done1 - ldd [%fp-8], %i0 - -+ cmp %o0, FFI_TYPE_UINT64 -+ be,a done1 -+ ldd [%fp-8], %i0 -+ - ld [%fp-8], %i0 - done1: - jmp %i7+8 diff --git a/Modules/_ctypes/libffi/ChangeLog b/Modules/_ctypes/libffi/ChangeLog --- a/Modules/_ctypes/libffi/ChangeLog +++ b/Modules/_ctypes/libffi/ChangeLog @@ -1,3 +1,393 @@ +2013-02-11 Anthony Green + + * configure.ac: Update release number to 3.0.12. + * configure: Rebuilt. 
+ * README: Update release info. + +2013-02-10 Anthony Green + + * README: Add Moxie. + * src/moxie/ffi.c: Created. + * src/moxie/eabi.S: Created. + * src/moxie/ffitarget.h: Created. + * Makefile.am (nodist_libffi_la_SOURCES): Add Moxie. + * Makefile.in: Rebuilt. + * configure.ac: Add Moxie. + * configure: Rebuilt. + * testsuite/libffi.call/huge_struct.c: Disable format string + warnings for moxie*-*-elf tests. + +2013-02-10 Anthony Green + + * Makefile.am (LTLDFLAGS): Fix reference. + * Makefile.in: Rebuilt. + +2013-02-10 Anthony Green + + * README: Update supported platforms. Update test results link. + +2013-02-09 Anthony Green + + * testsuite/libffi.call/negint.c: Remove forced -O2. + * testsuite/libffi.call/many2.c (foo): Remove GCCism. + * testsuite/libffi.call/ffitest.h: Add default PRIuPTR definition. + + * src/sparc/v8.S (ffi_closure_v8): Import ancient ulonglong + closure return type fix developed by Martin v. Löwis for cpython + fork. + +2013-02-08 Andreas Tobler + + * src/powerpc/ffi.c (ffi_prep_cif_machdep): Fix small struct + support. + * src/powerpc/sysv.S: Ditto. + +2013-02-08 Anthony Green + + * testsuite/libffi.call/cls_longdouble.c: Remove xfail for + arm*-*-*. + +2013-02-08 Anthony Green + + * src/sparc/ffi.c (ffi_prep_closure_loc): Fix cache flushing for GCC. + +2013-02-08 Matthias Klose + + * man/ffi_prep_cif.3: Clean up for debian linter. + +2013-02-08 Peter Bergner + + * src/powerpc/ffi.c (ffi_prep_args_SYSV): Account for FP args pushed + on the stack. + +2013-02-08 Anthony Green + + * Makefile.am (EXTRA_DIST): Add missing files. + * testsuite/Makefile.am (EXTRA_DIST): Ditto. + * Makefile.in: Rebuilt. + +2013-02-08 Anthony Green + + * configure.ac: Move sparc asm config checks to within functions + for compatibility with sun tools. + * configure: Rebuilt. + * src/sparc/ffi.c (ffi_prep_closure_loc): Flush cache on v9 + systems. + * src/sparc/v8.S (ffi_flush_icache): Implement a sparc v9 cache + flusher.
+ +2013-02-08 Nathan Rossi + + * src/microblaze/ffi.c (ffi_closure_call_SYSV): Fix handling of + small big-endian structures. + (ffi_prep_args): Ditto. + +2013-02-07 Anthony Green + + * src/sparc/v8.S (ffi_call_v8): Fix typo from last patch + (effectively hiding ffi_call_v8). + +2013-02-07 Anthony Green + + * configure.ac: Update bug reporting address. + * configure.in: Rebuild. + + * src/sparc/v8.S (ffi_flush_icache): Out-of-line cache flusher for + Sun compiler. + * src/sparc/ffi.c (ffi_call): Remove warning. + Call ffi_flush_icache for non-GCC builds. + (ffi_prep_closure_loc): Use ffi_flush_icache. + + * Makefile.am (EXTRA_DIST): Add libtool-ldflags. + * Makefile.in: Rebuilt. + * libtool-ldflags: New file. + +2013-02-07 Daniel Schepler + + * configure.ac: Correctly identify x32 systems as 64-bit. + * m4/libtool.m4: Remove libtool expr error. + * aclocal.m4, configure: Rebuilt. + +2013-02-07 Anthony Green + + * configure.ac: Fix GCC usage test. + * configure: Rebuilt. + * README: Mention LLVM/GCC x86_64 issue. + * testsuite/Makefile.in: Rebuilt. + +2013-02-07 Anthony Green + + * testsuite/libffi.call/cls_double_va.c (main): Replace // style + comments with /* */ for xlc compiler. + * testsuite/libffi.call/stret_large.c (main): Ditto. + * testsuite/libffi.call/stret_large2.c (main): Ditto. + * testsuite/libffi.call/nested_struct1.c (main): Ditto. + * testsuite/libffi.call/huge_struct.c (main): Ditto. + * testsuite/libffi.call/float_va.c (main): Ditto. + * testsuite/libffi.call/cls_struct_va1.c (main): Ditto. + * testsuite/libffi.call/cls_pointer_stack.c (main): Ditto. + * testsuite/libffi.call/cls_pointer.c (main): Ditto. + * testsuite/libffi.call/cls_longdouble_va.c (main): Ditto. + +2013-02-06 Anthony Green + + * man/ffi_prep_cif.3: Clean up for debian lintian checker. + +2013-02-06 Anthony Green + + * Makefile.am (pkgconfigdir): Add missing pkgconfig install bits. + * Makefile.in: Rebuild. 
+ +2013-02-02 Mark H Weaver + + * src/x86/ffi64.c (ffi_call): Sign-extend integer arguments passed + via general purpose registers. + +2013-01-21 Nathan Rossi + + * README: Add MicroBlaze details. + * Makefile.am: Add MicroBlaze support. + * configure.ac: Likewise. + * src/microblaze/ffi.c: New. + * src/microblaze/ffitarget.h: Likewise. + * src/microblaze/sysv.S: Likewise. + +2013-01-21 Nathan Rossi + * testsuite/libffi.call/return_uc.c: Fixed issue. + +2013-01-21 Chris Zankel + + * README: Add Xtensa support. + * Makefile.am: Likewise. + * configure.ac: Likewise. + * Makefile.in Regenerate. + * configure: Likewise. + * src/prep_cif.c: Handle Xtensa. + * src/xtensa: New directory. + * src/xtensa/ffi.c: New file. + * src/xtensa/ffitarget.h: Ditto. + * src/xtensa/sysv.S: Ditto. + +2013-01-11 Anthony Green + + * src/powerpc/ffi_darwin.c (ffi_prep_args): Replace // style + comments with /* */ for xlc compiler. + * src/powerpc/aix.S (ffi_call_AIX): Ditto. + * testsuite/libffi.call/ffitest.h (allocate_mmap): Delete + deprecated inline function. + * testsuite/libffi.special/ffitestcxx.h: Ditto. + * README: Add update for AIX support. + +2013-01-11 Anthony Green + + * configure.ac: Robustify pc relative reloc check. + * m4/ax_cc_maxopt.m4: Don't -malign-double. This is an ABI + changing option for 32-bit x86. + * aclocal.m4, configure: Rebuilt. + * README: Update supported target list. + +2013-01-10 Anthony Green + + * README (tested): Add Compiler column to table. + +2013-01-10 Anthony Green + + * src/x86/ffi64.c (struct register_args): Make sse array and array + of unions for sunpro compiler compatibility. + +2013-01-10 Anthony Green + + * configure.ac: Test target platform size_t size. Handle both 32 + and 64-bit builds for x86_64-* and i?86-* targets (allowing for + CFLAG option to change default settings). + * configure, aclocal.m4: Rebuilt. 
+ +2013-01-10 Anthony Green + + * testsuite/libffi.special/special.exp: Only run exception + handling tests when using GNU compiler. + + * m4/ax_compiler_vendor.m4: New file. + * configure.ac: Test for compiler vendor and don't use + AX_CFLAGS_WARN_ALL with the sun compiler. + * aclocal.m4, configure: Rebuilt. + +2013-01-10 Anthony Green + + * include/ffi_common.h: Don't use GCCisms to define types when + building with the SUNPRO compiler. + +2013-01-10 Anthony Green + + * configure.ac: Put local.exp in the right place. + * configure: Rebuilt. + + * src/x86/ffi.c: Update comment about regparm function attributes. + * src/x86/sysv.S (ffi_closure_SYSV): The SUNPRO compiler requires + that all function arguments be passed on the stack (no regparm + support). + +2013-01-08 Anthony Green + + * configure.ac: Generate local.exp. This sets CC_FOR_TARGET + when we are using the vendor compiler. + * testsuite/Makefile.am (EXTRA_DEJAGNU_SITE_CONFIG): Point to + ../local.exp. + * configure, testsuite/Makefile.in: Rebuilt. + + * testsuite/libffi.call/call.exp: Run tests with different + options, depending on whether or not we are using gcc or the + vendor compiler. + * testsuite/lib/libffi.exp (libffi-init): Set using_gcc based on + whether or not we are building/testing with gcc. + +2013-01-08 Anthony Green + + * configure.ac: Switch x86 solaris target to X86 by default. + * configure: Rebuilt. + +2013-01-08 Anthony Green + + * configure.ac: Fix test for read-only eh_frame. + * configure: Rebuilt. + +2013-01-08 Anthony Green + + * src/x86/sysv.S, src/x86/unix64.S: Only emit DWARF unwind info + when building with the GNU toolchain. + * testsuite/libffi.call/ffitest.h (CHECK): Fix for Solaris vendor + compiler. + +2013-01-07 Thorsten Glaser + + * testsuite/libffi.call/cls_uchar_va.c, + testsuite/libffi.call/cls_ushort_va.c, + testsuite/libffi.call/va_1.c: Testsuite fixes. + +2013-01-07 Thorsten Glaser + + * src/m68k/ffi.c (CIF_FLAGS_SINT8, CIF_FLAGS_SINT16): Define. 
+ (ffi_prep_cif_machdep): Fix 8-bit and 16-bit signed calls. + * src/m68k/sysv.S (ffi_call_SYSV, ffi_closure_SYSV): Ditto. + +2013-01-04 Anthony Green + + * Makefile.am (AM_CFLAGS): Don't automatically add -fexceptions + and -Wall. This is set in the configure script after testing for + GCC. + * Makefile.in: Rebuilt. + +2013-01-02 rofl0r + + * src/powerpc/ffi.c (ffi_prep_cif_machdep): Fix build error on ppc + when long double == double. + +2013-01-02 Reini Urban + + * Makefile.am (libffi_la_LDFLAGS): Add -no-undefined to LDFLAGS + (required for shared libs on cygwin/mingw). + * Makefile.in: Rebuilt. + +2012-10-31 Alan Modra + + * src/powerpc/linux64_closure.S: Add new ABI support. + * src/powerpc/linux64.S: Likewise. + +2012-10-30 Magnus Granberg + Pavel Labushev + + * configure.ac: New options pax_emutramp + * configure, fficonfig.h.in: Regenerated + * src/closures.c: New function emutramp_enabled_check() and + checks. + +2012-10-30 Frederick Cheung + + * configure.ac: Enable FFI_MAP_EXEC_WRIT for Darwin 12 (mountain + lion) and future version. + * configure: Rebuild. + +2012-10-30 James Greenhalgh + Marcus Shawcroft + + * README: Add details of aarch64 port. + * src/aarch64/ffi.c: New. + * src/aarch64/ffitarget.h: Likewise. + * src/aarch64/sysv.S: Likewise. + * Makefile.am: Support aarch64. + * configure.ac: Support aarch64. + * Makefile.in, configure: Rebuilt. + +2012-10-30 James Greenhalgh + Marcus Shawcroft + + * testsuite/lib/libffi.exp: Add support for aarch64. + * testsuite/libffi.call/cls_struct_va1.c: New. + * testsuite/libffi.call/cls_uchar_va.c: Likewise. + * testsuite/libffi.call/cls_uint_va.c: Likewise. + * testsuite/libffi.call/cls_ulong_va.c: Likewise. + * testsuite/libffi.call/cls_ushort_va.c: Likewise. + * testsuite/libffi.call/nested_struct11.c: Likewise. + * testsuite/libffi.call/uninitialized.c: Likewise. + * testsuite/libffi.call/va_1.c: Likewise. + * testsuite/libffi.call/va_struct1.c: Likewise. 
+ * testsuite/libffi.call/va_struct2.c: Likewise. + * testsuite/libffi.call/va_struct3.c: Likewise. + +2012-10-12 Walter Lee + + * Makefile.am: Add TILE-Gx/TILEPro support. + * configure.ac: Likewise. + * Makefile.in: Regenerate. + * configure: Likewise. + * src/prep_cif.c (ffi_prep_cif_core): Handle TILE-Gx/TILEPro. + * src/tile: New directory. + * src/tile/ffi.c: New file. + * src/tile/ffitarget.h: Ditto. + * src/tile/tile.S: Ditto. + +2012-10-12 Matthias Klose + + * generate-osx-source-and-headers.py: Normalize whitespace. + +2012-09-14 David Edelsohn + + * configure: Regenerated. + +2012-08-26 Andrew Pinski + + PR libffi/53014 + * src/mips/ffi.c (ffi_prep_closure_loc): Allow n32 with soft-float and n64 with + soft-float. + +2012-08-08 Uros Bizjak + + * src/s390/ffi.c (ffi_prep_closure_loc): Don't ASSERT ABI test, + just return FFI_BAD_ABI when things are wrong. + +2012-07-18 H.J. Lu + + PR libffi/53982 + PR libffi/53973 + * src/x86/ffitarget.h: Check __ILP32__ instead of __LP64__ for x32. + (FFI_SIZEOF_JAVA_RAW): Defined to 4 for x32. + +2012-05-16 H.J. Lu + + * configure: Regenerated. + +2012-05-05 Nicolas Lelong + + * libffi.xcodeproj/project.pbxproj: Fixes. + * README: Update for iOS builds. + +2012-04-23 Alexandre Keunecke I. de Mendonca + + * configure.ac: Add Blackfin/sysv support + * Makefile.am: Add Blackfin/sysv support + * src/bfin/ffi.c: Add Blackfin/sysv support + * src/bfin/ffitarget.h: Add Blackfin/sysv support + 2012-04-11 Anthony Green * Makefile.am (EXTRA_DIST): Add new script. @@ -27,15 +417,15 @@ * README: Update instructions on building iOS binary. * build-ios.sh: Delete. -2012-04-06 H.J. Lu - - * m4/libtool.m4 (_LT_ENABLE_LOCK): Support x32. - 2012-04-06 Anthony Green * src/x86/ffi64.c (UINT128): Define differently for Intel and GNU compilers, then use it. +2012-04-06 H.J. Lu + + * m4/libtool.m4 (_LT_ENABLE_LOCK): Support x32. + 2012-04-06 Anthony Green * testsuite/Makefile.am (EXTRA_DIST): Add missing test cases. 
@@ -48,6 +438,14 @@ in CNAME. * src/x86/ffi.c: Wrap Windows specific code in ifdefs. +2012-04-02 Peter Bergner + + * src/powerpc/ffi.c (ffi_prep_args_SYSV): Declare double_tmp. + Silence casting pointer to integer of different size warning. + Delete goto to previously deleted label. + (ffi_call): Silence possibly undefined warning. + (ffi_closure_helper_SYSV): Declare variable type. + 2012-04-02 Peter Rosin * src/x86/win32.S (ffi_call_win32): Sign/zero extend the return @@ -193,6 +591,43 @@ * testsuite/libffi.call/struct9.c: Likewise. * testsuite/libffi.call/testclosure.c: Likewise. +2012-03-21 Peter Rosin + + * testsuite/libffi.call/float_va.c (float_va_fn): Use %f when + printing doubles (%lf is for long doubles). + (main): Likewise. + +2012-03-21 Peter Rosin + + * testsuite/lib/target-libpath.exp [*-*-cygwin*, *-*-mingw*] + (set_ld_library_path_env_vars): Add the library search dir to PATH + (and save PATH for later). + (restore_ld_library_path_env_vars): Restore PATH. + +2012-03-21 Peter Rosin + + * testsuite/lib/target-libpath.exp [*-*-cygwin*, *-*-mingw*] + (set_ld_library_path_env_vars): Add the library search dir to PATH + (and save PATH for later). + (restore_ld_library_path_env_vars): Restore PATH. + +2012-03-20 Peter Rosin + + * testsuite/libffi.call/strlen2_win32.c (main): Remove bug. + * src/x86/win32.S [MSVC] (ffi_closure_SYSV): Make the 'stub' label + visible outside the PROC, so that ffi_closure_THISCALL can see it. + +2012-03-20 Peter Rosin + + * testsuite/libffi.call/strlen2_win32.c (main): Remove bug. + * src/x86/win32.S [MSVC] (ffi_closure_SYSV): Make the 'stub' label + visible outside the PROC, so that ffi_closure_THISCALL can see it. + +2012-03-19 Alan Hourihane + + * src/m68k/ffi.c: Add MINT support. + * src/m68k/sysv.S: Ditto. + 2012-03-06 Chung-Lin Tang * src/arm/ffi.c (ffi_call): Add __ARM_EABI__ guard around call to @@ -201,43 +636,11 @@ ffi_closure_VFP. * src/arm/sysv.S: Add __ARM_EABI__ guard around VFP code. 
-2012-03-21 Peter Rosin - - * testsuite/libffi.call/float_va.c (float_va_fn): Use %f when - printing doubles (%lf is for long doubles). - (main): Likewise. - -2012-03-21 Peter Rosin - - * testsuite/lib/target-libpath.exp [*-*-cygwin*, *-*-mingw*] - (set_ld_library_path_env_vars): Add the library search dir to PATH - (and save PATH for later). - (restore_ld_library_path_env_vars): Restore PATH. - -2012-03-20 Peter Rosin - - * testsuite/libffi.call/strlen2_win32.c (main): Remove bug. - * src/x86/win32.S [MSVC] (ffi_closure_SYSV): Make the 'stub' label - visible outside the PROC, so that ffi_closure_THISCALL can see it. - -2012-03-19 Alan Hourihane - - * src/m68k/ffi.c: Add MINT support. - * src/m68k/sysv.S: Ditto. - 2012-03-19 chennam * src/powerpc/ffi_darwin.c (ffi_prep_closure_loc): Fix AIX closure support. -2012-04-02 Peter Bergner - - * src/powerpc/ffi.c (ffi_prep_args_SYSV): Declare double_tmp. - Silence casting pointer to integer of different size warning. - Delete goto to previously deleted label. - (ffi_call): Silence possibly undefined warning. - (ffi_closure_helper_SYSV): Declare variable type. - 2012-03-13 Kaz Kojima * src/sh/ffi.c (ffi_prep_closure_loc): Don't ASSERT ABI test, diff --git a/Modules/_ctypes/libffi/ChangeLog.libffi b/Modules/_ctypes/libffi/ChangeLog.libffi --- a/Modules/_ctypes/libffi/ChangeLog.libffi +++ b/Modules/_ctypes/libffi/ChangeLog.libffi @@ -574,8 +574,8 @@ * Makefile.am, include/Makefile.am: Move headers to libffi_la_SOURCES for new automake. * Makefile.in, include/Makefile.in: Rebuilt. - - * testsuite/lib/wrapper.exp: Copied from gcc tree to allow for + + * testsuite/lib/wrapper.exp: Copied from gcc tree to allow for execution outside of gcc tree. * testsuite/lib/target-libpath.exp: Ditto. 
diff --git a/Modules/_ctypes/libffi/Makefile.am b/Modules/_ctypes/libffi/Makefile.am --- a/Modules/_ctypes/libffi/Makefile.am +++ b/Modules/_ctypes/libffi/Makefile.am @@ -2,40 +2,49 @@ AUTOMAKE_OPTIONS = foreign subdir-objects +ACLOCAL_AMFLAGS = -I m4 + SUBDIRS = include testsuite man -EXTRA_DIST = LICENSE ChangeLog.v1 ChangeLog.libgcj configure.host \ - src/alpha/ffi.c src/alpha/osf.S src/alpha/ffitarget.h \ - src/arm/ffi.c src/arm/sysv.S src/arm/ffitarget.h \ - src/avr32/ffi.c src/avr32/sysv.S src/avr32/ffitarget.h \ - src/cris/ffi.c src/cris/sysv.S src/cris/ffitarget.h \ - src/ia64/ffi.c src/ia64/ffitarget.h src/ia64/ia64_flags.h \ - src/ia64/unix.S src/mips/ffi.c src/mips/n32.S src/mips/o32.S \ - src/mips/ffitarget.h src/m32r/ffi.c src/m32r/sysv.S \ - src/m32r/ffitarget.h src/m68k/ffi.c src/m68k/sysv.S \ - src/m68k/ffitarget.h src/powerpc/ffi.c src/powerpc/sysv.S \ - src/powerpc/linux64.S src/powerpc/linux64_closure.S \ - src/powerpc/ppc_closure.S src/powerpc/asm.h src/powerpc/aix.S \ - src/powerpc/darwin.S src/powerpc/aix_closure.S \ - src/powerpc/darwin_closure.S src/powerpc/ffi_darwin.c \ - src/powerpc/ffitarget.h src/s390/ffi.c src/s390/sysv.S \ - src/s390/ffitarget.h src/sh/ffi.c src/sh/sysv.S \ - src/sh/ffitarget.h src/sh64/ffi.c src/sh64/sysv.S \ - src/sh64/ffitarget.h src/sparc/v8.S src/sparc/v9.S \ - src/sparc/ffitarget.h src/sparc/ffi.c src/x86/darwin64.S \ - src/x86/ffi.c src/x86/sysv.S src/x86/win32.S src/x86/darwin.S \ - src/x86/win64.S src/x86/freebsd.S src/x86/ffi64.c \ - src/x86/unix64.S src/x86/ffitarget.h src/pa/ffitarget.h \ - src/pa/ffi.c src/pa/linux.S src/pa/hpux32.S src/frv/ffi.c \ - src/frv/eabi.S src/frv/ffitarget.h src/dlmalloc.c \ - src/moxie/ffi.c src/moxie/eabi.S libtool-version \ - ChangeLog.libffi m4/libtool.m4 m4/lt~obsolete.m4 \ - m4/ltoptions.m4 m4/ltsugar.m4 m4/ltversion.m4 \ - m4/ltversion.m4 src/arm/gentramp.sh src/debug.c \ - msvcc.sh generate-ios-source-and-headers.py \ - generate-osx-source-and-headers.py \ - 
libffi.xcodeproj/project.pbxproj \ - src/arm/trampoline.S +EXTRA_DIST = LICENSE ChangeLog.v1 ChangeLog.libgcj configure.host \ + src/aarch64/ffi.c src/aarch64/ffitarget.h \ + src/aarch64/sysv.S build-ios.sh \ + src/alpha/ffi.c src/alpha/osf.S src/alpha/ffitarget.h \ + src/arm/ffi.c src/arm/sysv.S src/arm/ffitarget.h \ + src/avr32/ffi.c src/avr32/sysv.S src/avr32/ffitarget.h \ + src/cris/ffi.c src/cris/sysv.S src/cris/ffitarget.h \ + src/ia64/ffi.c src/ia64/ffitarget.h src/ia64/ia64_flags.h \ + src/ia64/unix.S src/mips/ffi.c src/mips/n32.S src/mips/o32.S \ + src/mips/ffitarget.h src/m32r/ffi.c src/m32r/sysv.S \ + src/m32r/ffitarget.h src/m68k/ffi.c src/m68k/sysv.S \ + src/m68k/ffitarget.h src/microblaze/ffi.c \ + src/microblaze/sysv.S src/microblaze/ffitarget.h \ + src/powerpc/ffi.c src/powerpc/sysv.S \ + src/powerpc/linux64.S src/powerpc/linux64_closure.S \ + src/powerpc/ppc_closure.S src/powerpc/asm.h \ + src/powerpc/aix.S src/powerpc/darwin.S \ + src/powerpc/aix_closure.S src/powerpc/darwin_closure.S \ + src/powerpc/ffi_darwin.c src/powerpc/ffitarget.h \ + src/s390/ffi.c src/s390/sysv.S src/s390/ffitarget.h \ + src/sh/ffi.c src/sh/sysv.S src/sh/ffitarget.h src/sh64/ffi.c \ + src/sh64/sysv.S src/sh64/ffitarget.h src/sparc/v8.S \ + src/sparc/v9.S src/sparc/ffitarget.h src/sparc/ffi.c \ + src/x86/darwin64.S src/x86/ffi.c src/x86/sysv.S \ + src/x86/win32.S src/x86/darwin.S src/x86/win64.S \ + src/x86/freebsd.S src/x86/ffi64.c src/x86/unix64.S \ + src/x86/ffitarget.h src/pa/ffitarget.h src/pa/ffi.c \ + src/pa/linux.S src/pa/hpux32.S src/frv/ffi.c src/bfin/ffi.c \ + src/bfin/ffitarget.h src/bfin/sysv.S src/frv/eabi.S \ + src/frv/ffitarget.h src/dlmalloc.c src/tile/ffi.c \ + src/tile/ffitarget.h src/tile/tile.S libtool-version \ + src/xtensa/ffitarget.h src/xtensa/ffi.c src/xtensa/sysv.S \ + ChangeLog.libffi m4/libtool.m4 m4/lt~obsolete.m4 \ + m4/ltoptions.m4 m4/ltsugar.m4 m4/ltversion.m4 \ + m4/ltversion.m4 src/arm/gentramp.sh src/debug.c msvcc.sh \ + 
generate-ios-source-and-headers.py \ + generate-osx-source-and-headers.py \ + libffi.xcodeproj/project.pbxproj src/arm/trampoline.S \ + libtool-ldflags info_TEXINFOS = doc/libffi.texi @@ -83,11 +92,12 @@ "RANLIB=$(RANLIB)" \ "DESTDIR=$(DESTDIR)" +# Subdir rules rely on $(FLAGS_TO_PASS) +FLAGS_TO_PASS = $(AM_MAKEFLAGS) + MAKEOVERRIDES= -ACLOCAL_AMFLAGS=$(ACLOCAL_AMFLAGS) -I m4 - -lib_LTLIBRARIES = libffi.la +toolexeclib_LTLIBRARIES = libffi.la noinst_LTLIBRARIES = libffi_convenience.la libffi_la_SOURCES = src/prep_cif.c src/types.c \ @@ -105,6 +115,9 @@ if MIPS nodist_libffi_la_SOURCES += src/mips/ffi.c src/mips/o32.S src/mips/n32.S endif +if BFIN +nodist_libffi_la_SOURCES += src/bfin/ffi.c src/bfin/sysv.S +endif if X86 nodist_libffi_la_SOURCES += src/x86/ffi.c src/x86/sysv.S endif @@ -135,6 +148,12 @@ if M68K nodist_libffi_la_SOURCES += src/m68k/ffi.c src/m68k/sysv.S endif +if MOXIE +nodist_libffi_la_SOURCES += src/moxie/ffi.c src/moxie/eabi.S +endif +if MICROBLAZE +nodist_libffi_la_SOURCES += src/microblaze/ffi.c src/microblaze/sysv.S +endif if POWERPC nodist_libffi_la_SOURCES += src/powerpc/ffi.c src/powerpc/sysv.S src/powerpc/ppc_closure.S src/powerpc/linux64.S src/powerpc/linux64_closure.S endif @@ -147,6 +166,9 @@ if POWERPC_FREEBSD nodist_libffi_la_SOURCES += src/powerpc/ffi.c src/powerpc/sysv.S src/powerpc/ppc_closure.S endif +if AARCH64 +nodist_libffi_la_SOURCES += src/aarch64/sysv.S src/aarch64/ffi.c +endif if ARM nodist_libffi_la_SOURCES += src/arm/sysv.S src/arm/ffi.c if FFI_EXEC_TRAMPOLINE_TABLE @@ -162,9 +184,6 @@ if FRV nodist_libffi_la_SOURCES += src/frv/eabi.S src/frv/ffi.c endif -if MOXIE -nodist_libffi_la_SOURCES += src/moxie/eabi.S src/moxie/ffi.c -endif if S390 nodist_libffi_la_SOURCES += src/s390/sysv.S src/s390/ffi.c endif @@ -183,23 +202,20 @@ if PA_HPUX nodist_libffi_la_SOURCES += src/pa/hpux32.S src/pa/ffi.c endif +if TILE +nodist_libffi_la_SOURCES += src/tile/tile.S src/tile/ffi.c +endif +if XTENSA +nodist_libffi_la_SOURCES += 
src/xtensa/sysv.S src/xtensa/ffi.c +endif libffi_convenience_la_SOURCES = $(libffi_la_SOURCES) nodist_libffi_convenience_la_SOURCES = $(nodist_libffi_la_SOURCES) -AM_CFLAGS = -g -if FFI_DEBUG -# Build debug. Define FFI_DEBUG on the commandline so that, when building with -# MSVC, it can link against the debug CRT. -AM_CFLAGS += -DFFI_DEBUG -endif +LTLDFLAGS = $(shell $(SHELL) $(top_srcdir)/libtool-ldflags $(LDFLAGS)) -libffi_la_LDFLAGS = -version-info `grep -v '^\#' $(srcdir)/libtool-version` $(LTLDFLAGS) $(AM_LTLDFLAGS) +libffi_la_LDFLAGS = -no-undefined -version-info `grep -v '^\#' $(srcdir)/libtool-version` $(LTLDFLAGS) $(AM_LTLDFLAGS) -AM_CPPFLAGS = -I. -I$(top_srcdir)/include -Iinclude -I$(top_srcdir)/src -DFFI_BUILDING -AM_CCASFLAGS = $(AM_CPPFLAGS) -g +AM_CPPFLAGS = -I. -I$(top_srcdir)/include -Iinclude -I$(top_srcdir)/src +AM_CCASFLAGS = $(AM_CPPFLAGS) -# No install-html or install-pdf support in automake yet -.PHONY: install-html install-pdf -install-html: -install-pdf: diff --git a/Modules/_ctypes/libffi/Makefile.in b/Modules/_ctypes/libffi/Makefile.in --- a/Modules/_ctypes/libffi/Makefile.in +++ b/Modules/_ctypes/libffi/Makefile.in @@ -1,9 +1,8 @@ -# Makefile.in generated by automake 1.11.3 from Makefile.am. +# Makefile.in generated by automake 1.12.2 from Makefile.am. # @configure_input@ -# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, -# 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software -# Foundation, Inc. +# Copyright (C) 1994-2012 Free Software Foundation, Inc. + # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. 
@@ -17,6 +16,23 @@ VPATH = @srcdir@ +am__make_dryrun = \ + { \ + am__dry=no; \ + case $$MAKEFLAGS in \ + *\\[\ \ ]*) \ + echo 'am--echo: ; @echo "AM" OK' | $(MAKE) -f - 2>/dev/null \ + | grep '^AM OK$$' >/dev/null || am__dry=yes;; \ + *) \ + for am__flg in $$MAKEFLAGS; do \ + case $$am__flg in \ + *=*|--*) ;; \ + *n*) am__dry=yes; break;; \ + esac; \ + done;; \ + esac; \ + test $$am__dry = yes; \ + } pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ @@ -38,45 +54,58 @@ target_triplet = @target@ @FFI_DEBUG_TRUE at am__append_1 = src/debug.c @MIPS_TRUE at am__append_2 = src/mips/ffi.c src/mips/o32.S src/mips/n32.S - at X86_TRUE@am__append_3 = src/x86/ffi.c src/x86/sysv.S - at X86_FREEBSD_TRUE@am__append_4 = src/x86/ffi.c src/x86/freebsd.S - at X86_WIN32_TRUE@am__append_5 = src/x86/ffi.c src/x86/win32.S - at X86_WIN64_TRUE@am__append_6 = src/x86/ffi.c src/x86/win64.S - at X86_DARWIN_TRUE@am__append_7 = src/x86/ffi.c src/x86/darwin.S src/x86/ffi64.c src/x86/darwin64.S - at SPARC_TRUE@am__append_8 = src/sparc/ffi.c src/sparc/v8.S src/sparc/v9.S - at ALPHA_TRUE@am__append_9 = src/alpha/ffi.c src/alpha/osf.S - at IA64_TRUE@am__append_10 = src/ia64/ffi.c src/ia64/unix.S - at M32R_TRUE@am__append_11 = src/m32r/sysv.S src/m32r/ffi.c - at M68K_TRUE@am__append_12 = src/m68k/ffi.c src/m68k/sysv.S - at POWERPC_TRUE@am__append_13 = src/powerpc/ffi.c src/powerpc/sysv.S src/powerpc/ppc_closure.S src/powerpc/linux64.S src/powerpc/linux64_closure.S - at POWERPC_AIX_TRUE@am__append_14 = src/powerpc/ffi_darwin.c src/powerpc/aix.S src/powerpc/aix_closure.S - at POWERPC_DARWIN_TRUE@am__append_15 = src/powerpc/ffi_darwin.c src/powerpc/darwin.S src/powerpc/darwin_closure.S - at POWERPC_FREEBSD_TRUE@am__append_16 = src/powerpc/ffi.c src/powerpc/sysv.S src/powerpc/ppc_closure.S - at ARM_TRUE@am__append_17 = src/arm/sysv.S src/arm/ffi.c - at ARM_TRUE@@FFI_EXEC_TRAMPOLINE_TABLE_TRUE at am__append_18 = src/arm/trampoline.S - at 
AVR32_TRUE@am__append_19 = src/avr32/sysv.S src/avr32/ffi.c - at LIBFFI_CRIS_TRUE@am__append_20 = src/cris/sysv.S src/cris/ffi.c - at FRV_TRUE@am__append_21 = src/frv/eabi.S src/frv/ffi.c - at MOXIE_TRUE@am__append_22 = src/moxie/eabi.S src/moxie/ffi.c - at S390_TRUE@am__append_23 = src/s390/sysv.S src/s390/ffi.c - at X86_64_TRUE@am__append_24 = src/x86/ffi64.c src/x86/unix64.S src/x86/ffi.c src/x86/sysv.S - at SH_TRUE@am__append_25 = src/sh/sysv.S src/sh/ffi.c - at SH64_TRUE@am__append_26 = src/sh64/sysv.S src/sh64/ffi.c - at PA_LINUX_TRUE@am__append_27 = src/pa/linux.S src/pa/ffi.c - at PA_HPUX_TRUE@am__append_28 = src/pa/hpux32.S src/pa/ffi.c -# Build debug. Define FFI_DEBUG on the commandline so that, when building with -# MSVC, it can link against the debug CRT. - at FFI_DEBUG_TRUE@am__append_29 = -DFFI_DEBUG + at BFIN_TRUE@am__append_3 = src/bfin/ffi.c src/bfin/sysv.S + at X86_TRUE@am__append_4 = src/x86/ffi.c src/x86/sysv.S + at X86_FREEBSD_TRUE@am__append_5 = src/x86/ffi.c src/x86/freebsd.S + at X86_WIN32_TRUE@am__append_6 = src/x86/ffi.c src/x86/win32.S + at X86_WIN64_TRUE@am__append_7 = src/x86/ffi.c src/x86/win64.S + at X86_DARWIN_TRUE@am__append_8 = src/x86/ffi.c src/x86/darwin.S src/x86/ffi64.c src/x86/darwin64.S + at SPARC_TRUE@am__append_9 = src/sparc/ffi.c src/sparc/v8.S src/sparc/v9.S + at ALPHA_TRUE@am__append_10 = src/alpha/ffi.c src/alpha/osf.S + at IA64_TRUE@am__append_11 = src/ia64/ffi.c src/ia64/unix.S + at M32R_TRUE@am__append_12 = src/m32r/sysv.S src/m32r/ffi.c + at M68K_TRUE@am__append_13 = src/m68k/ffi.c src/m68k/sysv.S + at MOXIE_TRUE@am__append_14 = src/moxie/ffi.c src/moxie/eabi.S + at MICROBLAZE_TRUE@am__append_15 = src/microblaze/ffi.c src/microblaze/sysv.S + at POWERPC_TRUE@am__append_16 = src/powerpc/ffi.c src/powerpc/sysv.S src/powerpc/ppc_closure.S src/powerpc/linux64.S src/powerpc/linux64_closure.S + at POWERPC_AIX_TRUE@am__append_17 = src/powerpc/ffi_darwin.c src/powerpc/aix.S src/powerpc/aix_closure.S + at 
POWERPC_DARWIN_TRUE@am__append_18 = src/powerpc/ffi_darwin.c src/powerpc/darwin.S src/powerpc/darwin_closure.S + at POWERPC_FREEBSD_TRUE@am__append_19 = src/powerpc/ffi.c src/powerpc/sysv.S src/powerpc/ppc_closure.S + at AARCH64_TRUE@am__append_20 = src/aarch64/sysv.S src/aarch64/ffi.c + at ARM_TRUE@am__append_21 = src/arm/sysv.S src/arm/ffi.c + at ARM_TRUE@@FFI_EXEC_TRAMPOLINE_TABLE_TRUE at am__append_22 = src/arm/trampoline.S + at AVR32_TRUE@am__append_23 = src/avr32/sysv.S src/avr32/ffi.c + at LIBFFI_CRIS_TRUE@am__append_24 = src/cris/sysv.S src/cris/ffi.c + at FRV_TRUE@am__append_25 = src/frv/eabi.S src/frv/ffi.c + at S390_TRUE@am__append_26 = src/s390/sysv.S src/s390/ffi.c + at X86_64_TRUE@am__append_27 = src/x86/ffi64.c src/x86/unix64.S src/x86/ffi.c src/x86/sysv.S + at SH_TRUE@am__append_28 = src/sh/sysv.S src/sh/ffi.c + at SH64_TRUE@am__append_29 = src/sh64/sysv.S src/sh64/ffi.c + at PA_LINUX_TRUE@am__append_30 = src/pa/linux.S src/pa/ffi.c + at PA_HPUX_TRUE@am__append_31 = src/pa/hpux32.S src/pa/ffi.c + at TILE_TRUE@am__append_32 = src/tile/tile.S src/tile/ffi.c + at XTENSA_TRUE@am__append_33 = src/xtensa/sysv.S src/xtensa/ffi.c subdir = . 
DIST_COMMON = README $(am__configure_deps) $(srcdir)/Makefile.am \ $(srcdir)/Makefile.in $(srcdir)/doc/stamp-vti \ $(srcdir)/doc/version.texi $(srcdir)/fficonfig.h.in \ - $(srcdir)/fficonfig.py.in $(srcdir)/libffi.pc.in \ - $(top_srcdir)/configure ChangeLog compile config.guess \ - config.sub depcomp install-sh ltmain.sh mdate-sh missing \ - texinfo.tex + $(srcdir)/libffi.pc.in $(top_srcdir)/configure ChangeLog \ + compile config.guess config.sub depcomp install-sh ltmain.sh \ + mdate-sh missing texinfo.tex ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 -am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \ +am__aclocal_m4_deps = $(top_srcdir)/m4/asmcfi.m4 \ + $(top_srcdir)/m4/ax_append_flag.m4 \ + $(top_srcdir)/m4/ax_cc_maxopt.m4 \ + $(top_srcdir)/m4/ax_cflags_warn_all.m4 \ + $(top_srcdir)/m4/ax_check_compile_flag.m4 \ + $(top_srcdir)/m4/ax_compiler_vendor.m4 \ + $(top_srcdir)/m4/ax_configure_args.m4 \ + $(top_srcdir)/m4/ax_enable_builddir.m4 \ + $(top_srcdir)/m4/ax_gcc_archflag.m4 \ + $(top_srcdir)/m4/ax_gcc_x86_cpuid.m4 \ + $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ + $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ + $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/acinclude.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) @@ -84,7 +113,7 @@ configure.lineno config.status.lineno mkinstalldirs = $(install_sh) -d CONFIG_HEADER = fficonfig.h -CONFIG_CLEAN_FILES = libffi.pc fficonfig.py +CONFIG_CLEAN_FILES = libffi.pc CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ @@ -113,9 +142,9 @@ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } -am__installdirs = "$(DESTDIR)$(libdir)" "$(DESTDIR)$(infodir)" \ +am__installdirs = "$(DESTDIR)$(toolexeclibdir)" "$(DESTDIR)$(infodir)" \ "$(DESTDIR)$(pkgconfigdir)" -LTLIBRARIES = $(lib_LTLIBRARIES) $(noinst_LTLIBRARIES) 
+LTLIBRARIES = $(noinst_LTLIBRARIES) $(toolexeclib_LTLIBRARIES) libffi_la_LIBADD = am__dirstamp = $(am__leading_dot)dirstamp am_libffi_la_OBJECTS = src/prep_cif.lo src/types.lo src/raw_api.lo \ @@ -123,44 +152,50 @@ @FFI_DEBUG_TRUE at am__objects_1 = src/debug.lo @MIPS_TRUE at am__objects_2 = src/mips/ffi.lo src/mips/o32.lo \ @MIPS_TRUE@ src/mips/n32.lo - at X86_TRUE@am__objects_3 = src/x86/ffi.lo src/x86/sysv.lo - at X86_FREEBSD_TRUE@am__objects_4 = src/x86/ffi.lo src/x86/freebsd.lo - at X86_WIN32_TRUE@am__objects_5 = src/x86/ffi.lo src/x86/win32.lo - at X86_WIN64_TRUE@am__objects_6 = src/x86/ffi.lo src/x86/win64.lo - at X86_DARWIN_TRUE@am__objects_7 = src/x86/ffi.lo src/x86/darwin.lo \ + at BFIN_TRUE@am__objects_3 = src/bfin/ffi.lo src/bfin/sysv.lo + at X86_TRUE@am__objects_4 = src/x86/ffi.lo src/x86/sysv.lo + at X86_FREEBSD_TRUE@am__objects_5 = src/x86/ffi.lo src/x86/freebsd.lo + at X86_WIN32_TRUE@am__objects_6 = src/x86/ffi.lo src/x86/win32.lo + at X86_WIN64_TRUE@am__objects_7 = src/x86/ffi.lo src/x86/win64.lo + at X86_DARWIN_TRUE@am__objects_8 = src/x86/ffi.lo src/x86/darwin.lo \ @X86_DARWIN_TRUE@ src/x86/ffi64.lo src/x86/darwin64.lo - at SPARC_TRUE@am__objects_8 = src/sparc/ffi.lo src/sparc/v8.lo \ + at SPARC_TRUE@am__objects_9 = src/sparc/ffi.lo src/sparc/v8.lo \ @SPARC_TRUE@ src/sparc/v9.lo - at ALPHA_TRUE@am__objects_9 = src/alpha/ffi.lo src/alpha/osf.lo - at IA64_TRUE@am__objects_10 = src/ia64/ffi.lo src/ia64/unix.lo - at M32R_TRUE@am__objects_11 = src/m32r/sysv.lo src/m32r/ffi.lo - at M68K_TRUE@am__objects_12 = src/m68k/ffi.lo src/m68k/sysv.lo - at POWERPC_TRUE@am__objects_13 = src/powerpc/ffi.lo src/powerpc/sysv.lo \ + at ALPHA_TRUE@am__objects_10 = src/alpha/ffi.lo src/alpha/osf.lo + at IA64_TRUE@am__objects_11 = src/ia64/ffi.lo src/ia64/unix.lo + at M32R_TRUE@am__objects_12 = src/m32r/sysv.lo src/m32r/ffi.lo + at M68K_TRUE@am__objects_13 = src/m68k/ffi.lo src/m68k/sysv.lo + at MOXIE_TRUE@am__objects_14 = src/moxie/ffi.lo src/moxie/eabi.lo + at 
MICROBLAZE_TRUE@am__objects_15 = src/microblaze/ffi.lo \ + at MICROBLAZE_TRUE@ src/microblaze/sysv.lo + at POWERPC_TRUE@am__objects_16 = src/powerpc/ffi.lo src/powerpc/sysv.lo \ @POWERPC_TRUE@ src/powerpc/ppc_closure.lo \ @POWERPC_TRUE@ src/powerpc/linux64.lo \ @POWERPC_TRUE@ src/powerpc/linux64_closure.lo - at POWERPC_AIX_TRUE@am__objects_14 = src/powerpc/ffi_darwin.lo \ + at POWERPC_AIX_TRUE@am__objects_17 = src/powerpc/ffi_darwin.lo \ @POWERPC_AIX_TRUE@ src/powerpc/aix.lo \ @POWERPC_AIX_TRUE@ src/powerpc/aix_closure.lo - at POWERPC_DARWIN_TRUE@am__objects_15 = src/powerpc/ffi_darwin.lo \ + at POWERPC_DARWIN_TRUE@am__objects_18 = src/powerpc/ffi_darwin.lo \ @POWERPC_DARWIN_TRUE@ src/powerpc/darwin.lo \ @POWERPC_DARWIN_TRUE@ src/powerpc/darwin_closure.lo - at POWERPC_FREEBSD_TRUE@am__objects_16 = src/powerpc/ffi.lo \ + at POWERPC_FREEBSD_TRUE@am__objects_19 = src/powerpc/ffi.lo \ @POWERPC_FREEBSD_TRUE@ src/powerpc/sysv.lo \ @POWERPC_FREEBSD_TRUE@ src/powerpc/ppc_closure.lo - at ARM_TRUE@am__objects_17 = src/arm/sysv.lo src/arm/ffi.lo - at ARM_TRUE@@FFI_EXEC_TRAMPOLINE_TABLE_TRUE at am__objects_18 = src/arm/trampoline.lo - at AVR32_TRUE@am__objects_19 = src/avr32/sysv.lo src/avr32/ffi.lo - at LIBFFI_CRIS_TRUE@am__objects_20 = src/cris/sysv.lo src/cris/ffi.lo - at FRV_TRUE@am__objects_21 = src/frv/eabi.lo src/frv/ffi.lo - at MOXIE_TRUE@am__objects_22 = src/moxie/eabi.lo src/moxie/ffi.lo - at S390_TRUE@am__objects_23 = src/s390/sysv.lo src/s390/ffi.lo - at X86_64_TRUE@am__objects_24 = src/x86/ffi64.lo src/x86/unix64.lo \ + at AARCH64_TRUE@am__objects_20 = src/aarch64/sysv.lo src/aarch64/ffi.lo + at ARM_TRUE@am__objects_21 = src/arm/sysv.lo src/arm/ffi.lo + at ARM_TRUE@@FFI_EXEC_TRAMPOLINE_TABLE_TRUE at am__objects_22 = src/arm/trampoline.lo + at AVR32_TRUE@am__objects_23 = src/avr32/sysv.lo src/avr32/ffi.lo + at LIBFFI_CRIS_TRUE@am__objects_24 = src/cris/sysv.lo src/cris/ffi.lo + at FRV_TRUE@am__objects_25 = src/frv/eabi.lo src/frv/ffi.lo + at 
S390_TRUE@am__objects_26 = src/s390/sysv.lo src/s390/ffi.lo + at X86_64_TRUE@am__objects_27 = src/x86/ffi64.lo src/x86/unix64.lo \ @X86_64_TRUE@ src/x86/ffi.lo src/x86/sysv.lo - at SH_TRUE@am__objects_25 = src/sh/sysv.lo src/sh/ffi.lo - at SH64_TRUE@am__objects_26 = src/sh64/sysv.lo src/sh64/ffi.lo - at PA_LINUX_TRUE@am__objects_27 = src/pa/linux.lo src/pa/ffi.lo - at PA_HPUX_TRUE@am__objects_28 = src/pa/hpux32.lo src/pa/ffi.lo + at SH_TRUE@am__objects_28 = src/sh/sysv.lo src/sh/ffi.lo + at SH64_TRUE@am__objects_29 = src/sh64/sysv.lo src/sh64/ffi.lo + at PA_LINUX_TRUE@am__objects_30 = src/pa/linux.lo src/pa/ffi.lo + at PA_HPUX_TRUE@am__objects_31 = src/pa/hpux32.lo src/pa/ffi.lo + at TILE_TRUE@am__objects_32 = src/tile/tile.lo src/tile/ffi.lo + at XTENSA_TRUE@am__objects_33 = src/xtensa/sysv.lo src/xtensa/ffi.lo nodist_libffi_la_OBJECTS = $(am__objects_1) $(am__objects_2) \ $(am__objects_3) $(am__objects_4) $(am__objects_5) \ $(am__objects_6) $(am__objects_7) $(am__objects_8) \ @@ -170,17 +205,19 @@ $(am__objects_18) $(am__objects_19) $(am__objects_20) \ $(am__objects_21) $(am__objects_22) $(am__objects_23) \ $(am__objects_24) $(am__objects_25) $(am__objects_26) \ - $(am__objects_27) $(am__objects_28) + $(am__objects_27) $(am__objects_28) $(am__objects_29) \ + $(am__objects_30) $(am__objects_31) $(am__objects_32) \ + $(am__objects_33) libffi_la_OBJECTS = $(am_libffi_la_OBJECTS) \ $(nodist_libffi_la_OBJECTS) libffi_la_LINK = $(LIBTOOL) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(libffi_la_LDFLAGS) $(LDFLAGS) -o $@ libffi_convenience_la_LIBADD = -am__objects_29 = src/prep_cif.lo src/types.lo src/raw_api.lo \ +am__objects_34 = src/prep_cif.lo src/types.lo src/raw_api.lo \ src/java_raw_api.lo src/closures.lo -am_libffi_convenience_la_OBJECTS = $(am__objects_29) -am__objects_30 = $(am__objects_1) $(am__objects_2) $(am__objects_3) \ +am_libffi_convenience_la_OBJECTS = $(am__objects_34) +am__objects_35 = $(am__objects_1) 
$(am__objects_2) $(am__objects_3) \ $(am__objects_4) $(am__objects_5) $(am__objects_6) \ $(am__objects_7) $(am__objects_8) $(am__objects_9) \ $(am__objects_10) $(am__objects_11) $(am__objects_12) \ @@ -189,8 +226,9 @@ $(am__objects_19) $(am__objects_20) $(am__objects_21) \ $(am__objects_22) $(am__objects_23) $(am__objects_24) \ $(am__objects_25) $(am__objects_26) $(am__objects_27) \ - $(am__objects_28) -nodist_libffi_convenience_la_OBJECTS = $(am__objects_30) + $(am__objects_28) $(am__objects_29) $(am__objects_30) \ + $(am__objects_31) $(am__objects_32) $(am__objects_33) +nodist_libffi_convenience_la_OBJECTS = $(am__objects_35) libffi_convenience_la_OBJECTS = $(am_libffi_convenience_la_OBJECTS) \ $(nodist_libffi_convenience_la_OBJECTS) DEFAULT_INCLUDES = -I. at am__isrc@ @@ -234,14 +272,20 @@ install-pdf-recursive install-ps-recursive install-recursive \ installcheck-recursive installdirs-recursive pdf-recursive \ ps-recursive uninstall-recursive +am__can_run_installinfo = \ + case $$AM_UPDATE_INFO_DIR in \ + n|no|NO) false;; \ + *) (install-info --version) >/dev/null 2>&1;; \ + esac DATA = $(pkgconfig_DATA) RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive AM_RECURSIVE_TARGETS = $(RECURSIVE_TARGETS:-recursive=) \ $(RECURSIVE_CLEAN_TARGETS:-recursive=) tags TAGS ctags CTAGS \ - distdir dist dist-all distcheck + cscope distdir dist dist-all distcheck ETAGS = etags CTAGS = ctags +CSCOPE = cscope DIST_SUBDIRS = $(SUBDIRS) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) @@ -252,6 +296,7 @@ && rm -rf "$(distdir)" \ || { sleep 5 && rm -rf "$(distdir)"; }; \ else :; fi +am__post_remove_distdir = $(am__remove_distdir) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ @@ -279,6 +324,7 @@ reldir="$$dir2" DIST_ARCHIVES = $(distdir).tar.gz GZIP_ENV = --best +DIST_TARGETS = dist-gzip distuninstallcheck_listfiles = find . 
-type f -print am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \ | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$' @@ -347,6 +393,7 @@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ +PRTDIAG = @PRTDIAG@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ @@ -367,6 +414,7 @@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ +ax_enable_builddir_sed = @ax_enable_builddir_sed@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ @@ -402,6 +450,7 @@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ +sys_symbol_underscore = @sys_symbol_underscore@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ @@ -414,39 +463,47 @@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign subdir-objects +ACLOCAL_AMFLAGS = -I m4 SUBDIRS = include testsuite man -EXTRA_DIST = LICENSE ChangeLog.v1 ChangeLog.libgcj configure.host \ - src/alpha/ffi.c src/alpha/osf.S src/alpha/ffitarget.h \ - src/arm/ffi.c src/arm/sysv.S src/arm/ffitarget.h \ - src/avr32/ffi.c src/avr32/sysv.S src/avr32/ffitarget.h \ - src/cris/ffi.c src/cris/sysv.S src/cris/ffitarget.h \ - src/ia64/ffi.c src/ia64/ffitarget.h src/ia64/ia64_flags.h \ - src/ia64/unix.S src/mips/ffi.c src/mips/n32.S src/mips/o32.S \ - src/mips/ffitarget.h src/m32r/ffi.c src/m32r/sysv.S \ - src/m32r/ffitarget.h src/m68k/ffi.c src/m68k/sysv.S \ - src/m68k/ffitarget.h src/powerpc/ffi.c src/powerpc/sysv.S \ - src/powerpc/linux64.S src/powerpc/linux64_closure.S \ - src/powerpc/ppc_closure.S src/powerpc/asm.h src/powerpc/aix.S \ - src/powerpc/darwin.S src/powerpc/aix_closure.S \ - src/powerpc/darwin_closure.S src/powerpc/ffi_darwin.c \ - src/powerpc/ffitarget.h src/s390/ffi.c src/s390/sysv.S \ - src/s390/ffitarget.h src/sh/ffi.c src/sh/sysv.S \ - src/sh/ffitarget.h src/sh64/ffi.c src/sh64/sysv.S \ - src/sh64/ffitarget.h src/sparc/v8.S src/sparc/v9.S \ - src/sparc/ffitarget.h 
src/sparc/ffi.c src/x86/darwin64.S \ - src/x86/ffi.c src/x86/sysv.S src/x86/win32.S src/x86/darwin.S \ - src/x86/win64.S src/x86/freebsd.S src/x86/ffi64.c \ - src/x86/unix64.S src/x86/ffitarget.h src/pa/ffitarget.h \ - src/pa/ffi.c src/pa/linux.S src/pa/hpux32.S src/frv/ffi.c \ - src/frv/eabi.S src/frv/ffitarget.h src/dlmalloc.c \ - src/moxie/ffi.c src/moxie/eabi.S libtool-version \ - ChangeLog.libffi m4/libtool.m4 m4/lt~obsolete.m4 \ - m4/ltoptions.m4 m4/ltsugar.m4 m4/ltversion.m4 \ - m4/ltversion.m4 src/arm/gentramp.sh src/debug.c \ - msvcc.sh generate-ios-source-and-headers.py \ - generate-osx-source-and-headers.py \ - libffi.xcodeproj/project.pbxproj \ - src/arm/trampoline.S +EXTRA_DIST = LICENSE ChangeLog.v1 ChangeLog.libgcj configure.host \ + src/aarch64/ffi.c src/aarch64/ffitarget.h \ + src/aarch64/sysv.S build-ios.sh \ + src/alpha/ffi.c src/alpha/osf.S src/alpha/ffitarget.h \ + src/arm/ffi.c src/arm/sysv.S src/arm/ffitarget.h \ + src/avr32/ffi.c src/avr32/sysv.S src/avr32/ffitarget.h \ + src/cris/ffi.c src/cris/sysv.S src/cris/ffitarget.h \ + src/ia64/ffi.c src/ia64/ffitarget.h src/ia64/ia64_flags.h \ + src/ia64/unix.S src/mips/ffi.c src/mips/n32.S src/mips/o32.S \ + src/mips/ffitarget.h src/m32r/ffi.c src/m32r/sysv.S \ + src/m32r/ffitarget.h src/m68k/ffi.c src/m68k/sysv.S \ + src/m68k/ffitarget.h src/microblaze/ffi.c \ + src/microblaze/sysv.S src/microblaze/ffitarget.h \ + src/powerpc/ffi.c src/powerpc/sysv.S \ + src/powerpc/linux64.S src/powerpc/linux64_closure.S \ + src/powerpc/ppc_closure.S src/powerpc/asm.h \ + src/powerpc/aix.S src/powerpc/darwin.S \ + src/powerpc/aix_closure.S src/powerpc/darwin_closure.S \ + src/powerpc/ffi_darwin.c src/powerpc/ffitarget.h \ + src/s390/ffi.c src/s390/sysv.S src/s390/ffitarget.h \ + src/sh/ffi.c src/sh/sysv.S src/sh/ffitarget.h src/sh64/ffi.c \ + src/sh64/sysv.S src/sh64/ffitarget.h src/sparc/v8.S \ + src/sparc/v9.S src/sparc/ffitarget.h src/sparc/ffi.c \ + src/x86/darwin64.S src/x86/ffi.c src/x86/sysv.S \ + 
src/x86/win32.S src/x86/darwin.S src/x86/win64.S \ + src/x86/freebsd.S src/x86/ffi64.c src/x86/unix64.S \ + src/x86/ffitarget.h src/pa/ffitarget.h src/pa/ffi.c \ + src/pa/linux.S src/pa/hpux32.S src/frv/ffi.c src/bfin/ffi.c \ + src/bfin/ffitarget.h src/bfin/sysv.S src/frv/eabi.S \ + src/frv/ffitarget.h src/dlmalloc.c src/tile/ffi.c \ + src/tile/ffitarget.h src/tile/tile.S libtool-version \ + src/xtensa/ffitarget.h src/xtensa/ffi.c src/xtensa/sysv.S \ + ChangeLog.libffi m4/libtool.m4 m4/lt~obsolete.m4 \ + m4/ltoptions.m4 m4/ltsugar.m4 m4/ltversion.m4 \ + m4/ltversion.m4 src/arm/gentramp.sh src/debug.c msvcc.sh \ + generate-ios-source-and-headers.py \ + generate-osx-source-and-headers.py \ + libffi.xcodeproj/project.pbxproj src/arm/trampoline.S \ + libtool-ldflags info_TEXINFOS = doc/libffi.texi @@ -488,9 +545,11 @@ "RANLIB=$(RANLIB)" \ "DESTDIR=$(DESTDIR)" + +# Subdir rules rely on $(FLAGS_TO_PASS) +FLAGS_TO_PASS = $(AM_MAKEFLAGS) MAKEOVERRIDES = -ACLOCAL_AMFLAGS = $(ACLOCAL_AMFLAGS) -I m4 -lib_LTLIBRARIES = libffi.la +toolexeclib_LTLIBRARIES = libffi.la noinst_LTLIBRARIES = libffi_convenience.la libffi_la_SOURCES = src/prep_cif.c src/types.c \ src/raw_api.c src/java_raw_api.c src/closures.c @@ -506,13 +565,15 @@ $(am__append_18) $(am__append_19) $(am__append_20) \ $(am__append_21) $(am__append_22) $(am__append_23) \ $(am__append_24) $(am__append_25) $(am__append_26) \ - $(am__append_27) $(am__append_28) + $(am__append_27) $(am__append_28) $(am__append_29) \ + $(am__append_30) $(am__append_31) $(am__append_32) \ + $(am__append_33) libffi_convenience_la_SOURCES = $(libffi_la_SOURCES) nodist_libffi_convenience_la_SOURCES = $(nodist_libffi_la_SOURCES) -AM_CFLAGS = -g $(am__append_29) -libffi_la_LDFLAGS = -version-info `grep -v '^\#' $(srcdir)/libtool-version` $(LTLDFLAGS) $(AM_LTLDFLAGS) -AM_CPPFLAGS = -I. 
-I$(top_srcdir)/include -Iinclude -I$(top_srcdir)/src -DFFI_BUILDING -AM_CCASFLAGS = $(AM_CPPFLAGS) -g +LTLDFLAGS = $(shell $(SHELL) $(top_srcdir)/libtool-ldflags $(LDFLAGS)) +libffi_la_LDFLAGS = -no-undefined -version-info `grep -v '^\#' $(srcdir)/libtool-version` $(LTLDFLAGS) $(AM_LTLDFLAGS) +AM_CPPFLAGS = -I. -I$(top_srcdir)/include -Iinclude -I$(top_srcdir)/src +AM_CCASFLAGS = $(AM_CPPFLAGS) all: fficonfig.h $(MAKE) $(AM_MAKEFLAGS) all-recursive @@ -569,48 +630,51 @@ -rm -f fficonfig.h stamp-h1 libffi.pc: $(top_builddir)/config.status $(srcdir)/libffi.pc.in cd $(top_builddir) && $(SHELL) ./config.status $@ -fficonfig.py: $(top_builddir)/config.status $(srcdir)/fficonfig.py.in - cd $(top_builddir) && $(SHELL) ./config.status $@ -install-libLTLIBRARIES: $(lib_LTLIBRARIES) + +clean-noinstLTLIBRARIES: + -test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES) + @list='$(noinst_LTLIBRARIES)'; \ + locs=`for p in $$list; do echo $$p; done | \ + sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ + sort -u`; \ + test -z "$$locs" || { \ + echo rm -f $${locs}; \ + rm -f $${locs}; \ + } +install-toolexeclibLTLIBRARIES: $(toolexeclib_LTLIBRARIES) @$(NORMAL_INSTALL) - test -z "$(libdir)" || $(MKDIR_P) "$(DESTDIR)$(libdir)" - @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ + @list='$(toolexeclib_LTLIBRARIES)'; test -n "$(toolexeclibdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ - echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(libdir)'"; \ - $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(libdir)"; \ + echo " $(MKDIR_P) '$(DESTDIR)$(toolexeclibdir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(toolexeclibdir)" || exit 1; \ + echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 
'$(DESTDIR)$(toolexeclibdir)'"; \ + $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(toolexeclibdir)"; \ } -uninstall-libLTLIBRARIES: +uninstall-toolexeclibLTLIBRARIES: @$(NORMAL_UNINSTALL) - @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ + @list='$(toolexeclib_LTLIBRARIES)'; test -n "$(toolexeclibdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ - echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(libdir)/$$f'"; \ - $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(libdir)/$$f"; \ + echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(toolexeclibdir)/$$f'"; \ + $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(toolexeclibdir)/$$f"; \ done -clean-libLTLIBRARIES: - -test -z "$(lib_LTLIBRARIES)" || rm -f $(lib_LTLIBRARIES) - @list='$(lib_LTLIBRARIES)'; for p in $$list; do \ - dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ - test "$$dir" != "$$p" || dir=.; \ - echo "rm -f \"$${dir}/so_locations\""; \ - rm -f "$${dir}/so_locations"; \ - done - -clean-noinstLTLIBRARIES: - -test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES) - @list='$(noinst_LTLIBRARIES)'; for p in $$list; do \ - dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ - test "$$dir" != "$$p" || dir=.; \ - echo "rm -f \"$${dir}/so_locations\""; \ - rm -f "$${dir}/so_locations"; \ - done +clean-toolexeclibLTLIBRARIES: + -test -z "$(toolexeclib_LTLIBRARIES)" || rm -f $(toolexeclib_LTLIBRARIES) + @list='$(toolexeclib_LTLIBRARIES)'; \ + locs=`for p in $$list; do echo $$p; done | \ + sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ + sort -u`; \ + test -z "$$locs" || { \ + echo rm -f $${locs}; \ + rm -f $${locs}; \ + } src/$(am__dirstamp): @$(MKDIR_P) src @: > src/$(am__dirstamp) @@ -635,6 +699,16 @@ src/mips/$(DEPDIR)/$(am__dirstamp) src/mips/n32.lo: src/mips/$(am__dirstamp) \ 
src/mips/$(DEPDIR)/$(am__dirstamp) +src/bfin/$(am__dirstamp): + @$(MKDIR_P) src/bfin + @: > src/bfin/$(am__dirstamp) +src/bfin/$(DEPDIR)/$(am__dirstamp): + @$(MKDIR_P) src/bfin/$(DEPDIR) + @: > src/bfin/$(DEPDIR)/$(am__dirstamp) +src/bfin/ffi.lo: src/bfin/$(am__dirstamp) \ + src/bfin/$(DEPDIR)/$(am__dirstamp) +src/bfin/sysv.lo: src/bfin/$(am__dirstamp) \ + src/bfin/$(DEPDIR)/$(am__dirstamp) src/x86/$(am__dirstamp): @$(MKDIR_P) src/x86 @: > src/x86/$(am__dirstamp) @@ -709,6 +783,26 @@ src/m68k/$(DEPDIR)/$(am__dirstamp) src/m68k/sysv.lo: src/m68k/$(am__dirstamp) \ src/m68k/$(DEPDIR)/$(am__dirstamp) +src/moxie/$(am__dirstamp): + @$(MKDIR_P) src/moxie + @: > src/moxie/$(am__dirstamp) +src/moxie/$(DEPDIR)/$(am__dirstamp): + @$(MKDIR_P) src/moxie/$(DEPDIR) + @: > src/moxie/$(DEPDIR)/$(am__dirstamp) +src/moxie/ffi.lo: src/moxie/$(am__dirstamp) \ + src/moxie/$(DEPDIR)/$(am__dirstamp) +src/moxie/eabi.lo: src/moxie/$(am__dirstamp) \ + src/moxie/$(DEPDIR)/$(am__dirstamp) +src/microblaze/$(am__dirstamp): + @$(MKDIR_P) src/microblaze + @: > src/microblaze/$(am__dirstamp) +src/microblaze/$(DEPDIR)/$(am__dirstamp): + @$(MKDIR_P) src/microblaze/$(DEPDIR) + @: > src/microblaze/$(DEPDIR)/$(am__dirstamp) +src/microblaze/ffi.lo: src/microblaze/$(am__dirstamp) \ + src/microblaze/$(DEPDIR)/$(am__dirstamp) +src/microblaze/sysv.lo: src/microblaze/$(am__dirstamp) \ + src/microblaze/$(DEPDIR)/$(am__dirstamp) src/powerpc/$(am__dirstamp): @$(MKDIR_P) src/powerpc @: > src/powerpc/$(am__dirstamp) @@ -735,6 +829,16 @@ src/powerpc/$(DEPDIR)/$(am__dirstamp) src/powerpc/darwin_closure.lo: src/powerpc/$(am__dirstamp) \ src/powerpc/$(DEPDIR)/$(am__dirstamp) +src/aarch64/$(am__dirstamp): + @$(MKDIR_P) src/aarch64 + @: > src/aarch64/$(am__dirstamp) +src/aarch64/$(DEPDIR)/$(am__dirstamp): + @$(MKDIR_P) src/aarch64/$(DEPDIR) + @: > src/aarch64/$(DEPDIR)/$(am__dirstamp) +src/aarch64/sysv.lo: src/aarch64/$(am__dirstamp) \ + src/aarch64/$(DEPDIR)/$(am__dirstamp) +src/aarch64/ffi.lo: 
src/aarch64/$(am__dirstamp) \ + src/aarch64/$(DEPDIR)/$(am__dirstamp) src/arm/$(am__dirstamp): @$(MKDIR_P) src/arm @: > src/arm/$(am__dirstamp) @@ -777,16 +881,6 @@ src/frv/$(DEPDIR)/$(am__dirstamp) src/frv/ffi.lo: src/frv/$(am__dirstamp) \ src/frv/$(DEPDIR)/$(am__dirstamp) -src/moxie/$(am__dirstamp): - @$(MKDIR_P) src/moxie - @: > src/moxie/$(am__dirstamp) -src/moxie/$(DEPDIR)/$(am__dirstamp): - @$(MKDIR_P) src/moxie/$(DEPDIR) - @: > src/moxie/$(DEPDIR)/$(am__dirstamp) -src/moxie/eabi.lo: src/moxie/$(am__dirstamp) \ - src/moxie/$(DEPDIR)/$(am__dirstamp) -src/moxie/ffi.lo: src/moxie/$(am__dirstamp) \ - src/moxie/$(DEPDIR)/$(am__dirstamp) src/s390/$(am__dirstamp): @$(MKDIR_P) src/s390 @: > src/s390/$(am__dirstamp) @@ -829,131 +923,79 @@ src/pa/ffi.lo: src/pa/$(am__dirstamp) src/pa/$(DEPDIR)/$(am__dirstamp) src/pa/hpux32.lo: src/pa/$(am__dirstamp) \ src/pa/$(DEPDIR)/$(am__dirstamp) +src/tile/$(am__dirstamp): + @$(MKDIR_P) src/tile + @: > src/tile/$(am__dirstamp) +src/tile/$(DEPDIR)/$(am__dirstamp): + @$(MKDIR_P) src/tile/$(DEPDIR) + @: > src/tile/$(DEPDIR)/$(am__dirstamp) +src/tile/tile.lo: src/tile/$(am__dirstamp) \ + src/tile/$(DEPDIR)/$(am__dirstamp) +src/tile/ffi.lo: src/tile/$(am__dirstamp) \ + src/tile/$(DEPDIR)/$(am__dirstamp) +src/xtensa/$(am__dirstamp): + @$(MKDIR_P) src/xtensa + @: > src/xtensa/$(am__dirstamp) +src/xtensa/$(DEPDIR)/$(am__dirstamp): + @$(MKDIR_P) src/xtensa/$(DEPDIR) + @: > src/xtensa/$(DEPDIR)/$(am__dirstamp) +src/xtensa/sysv.lo: src/xtensa/$(am__dirstamp) \ + src/xtensa/$(DEPDIR)/$(am__dirstamp) +src/xtensa/ffi.lo: src/xtensa/$(am__dirstamp) \ + src/xtensa/$(DEPDIR)/$(am__dirstamp) libffi.la: $(libffi_la_OBJECTS) $(libffi_la_DEPENDENCIES) $(EXTRA_libffi_la_DEPENDENCIES) - $(libffi_la_LINK) -rpath $(libdir) $(libffi_la_OBJECTS) $(libffi_la_LIBADD) $(LIBS) + $(libffi_la_LINK) -rpath $(toolexeclibdir) $(libffi_la_OBJECTS) $(libffi_la_LIBADD) $(LIBS) libffi_convenience.la: $(libffi_convenience_la_OBJECTS) $(libffi_convenience_la_DEPENDENCIES) 
$(EXTRA_libffi_convenience_la_DEPENDENCIES) $(LINK) $(libffi_convenience_la_OBJECTS) $(libffi_convenience_la_LIBADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) - -rm -f src/alpha/ffi.$(OBJEXT) - -rm -f src/alpha/ffi.lo - -rm -f src/alpha/osf.$(OBJEXT) - -rm -f src/alpha/osf.lo - -rm -f src/arm/ffi.$(OBJEXT) - -rm -f src/arm/ffi.lo - -rm -f src/arm/sysv.$(OBJEXT) - -rm -f src/arm/sysv.lo - -rm -f src/arm/trampoline.$(OBJEXT) - -rm -f src/arm/trampoline.lo - -rm -f src/avr32/ffi.$(OBJEXT) - -rm -f src/avr32/ffi.lo - -rm -f src/avr32/sysv.$(OBJEXT) - -rm -f src/avr32/sysv.lo - -rm -f src/closures.$(OBJEXT) - -rm -f src/closures.lo - -rm -f src/cris/ffi.$(OBJEXT) - -rm -f src/cris/ffi.lo - -rm -f src/cris/sysv.$(OBJEXT) - -rm -f src/cris/sysv.lo - -rm -f src/debug.$(OBJEXT) - -rm -f src/debug.lo - -rm -f src/frv/eabi.$(OBJEXT) - -rm -f src/frv/eabi.lo - -rm -f src/frv/ffi.$(OBJEXT) - -rm -f src/frv/ffi.lo - -rm -f src/ia64/ffi.$(OBJEXT) - -rm -f src/ia64/ffi.lo - -rm -f src/ia64/unix.$(OBJEXT) - -rm -f src/ia64/unix.lo - -rm -f src/java_raw_api.$(OBJEXT) - -rm -f src/java_raw_api.lo - -rm -f src/m32r/ffi.$(OBJEXT) - -rm -f src/m32r/ffi.lo - -rm -f src/m32r/sysv.$(OBJEXT) - -rm -f src/m32r/sysv.lo - -rm -f src/m68k/ffi.$(OBJEXT) - -rm -f src/m68k/ffi.lo - -rm -f src/m68k/sysv.$(OBJEXT) - -rm -f src/m68k/sysv.lo - -rm -f src/mips/ffi.$(OBJEXT) - -rm -f src/mips/ffi.lo - -rm -f src/mips/n32.$(OBJEXT) - -rm -f src/mips/n32.lo - -rm -f src/mips/o32.$(OBJEXT) - -rm -f src/mips/o32.lo - -rm -f src/moxie/eabi.$(OBJEXT) - -rm -f src/moxie/eabi.lo - -rm -f src/moxie/ffi.$(OBJEXT) - -rm -f src/moxie/ffi.lo - -rm -f src/pa/ffi.$(OBJEXT) - -rm -f src/pa/ffi.lo - -rm -f src/pa/hpux32.$(OBJEXT) - -rm -f src/pa/hpux32.lo - -rm -f src/pa/linux.$(OBJEXT) - -rm -f src/pa/linux.lo - -rm -f src/powerpc/aix.$(OBJEXT) - -rm -f src/powerpc/aix.lo - -rm -f src/powerpc/aix_closure.$(OBJEXT) - -rm -f src/powerpc/aix_closure.lo - -rm -f src/powerpc/darwin.$(OBJEXT) - -rm -f 
src/powerpc/darwin.lo - -rm -f src/powerpc/darwin_closure.$(OBJEXT) - -rm -f src/powerpc/darwin_closure.lo - -rm -f src/powerpc/ffi.$(OBJEXT) - -rm -f src/powerpc/ffi.lo - -rm -f src/powerpc/ffi_darwin.$(OBJEXT) - -rm -f src/powerpc/ffi_darwin.lo - -rm -f src/powerpc/linux64.$(OBJEXT) - -rm -f src/powerpc/linux64.lo - -rm -f src/powerpc/linux64_closure.$(OBJEXT) - -rm -f src/powerpc/linux64_closure.lo - -rm -f src/powerpc/ppc_closure.$(OBJEXT) - -rm -f src/powerpc/ppc_closure.lo - -rm -f src/powerpc/sysv.$(OBJEXT) - -rm -f src/powerpc/sysv.lo - -rm -f src/prep_cif.$(OBJEXT) - -rm -f src/prep_cif.lo - -rm -f src/raw_api.$(OBJEXT) - -rm -f src/raw_api.lo - -rm -f src/s390/ffi.$(OBJEXT) - -rm -f src/s390/ffi.lo - -rm -f src/s390/sysv.$(OBJEXT) - -rm -f src/s390/sysv.lo - -rm -f src/sh/ffi.$(OBJEXT) - -rm -f src/sh/ffi.lo - -rm -f src/sh/sysv.$(OBJEXT) - -rm -f src/sh/sysv.lo - -rm -f src/sh64/ffi.$(OBJEXT) - -rm -f src/sh64/ffi.lo - -rm -f src/sh64/sysv.$(OBJEXT) - -rm -f src/sh64/sysv.lo - -rm -f src/sparc/ffi.$(OBJEXT) - -rm -f src/sparc/ffi.lo - -rm -f src/sparc/v8.$(OBJEXT) - -rm -f src/sparc/v8.lo - -rm -f src/sparc/v9.$(OBJEXT) - -rm -f src/sparc/v9.lo - -rm -f src/types.$(OBJEXT) - -rm -f src/types.lo - -rm -f src/x86/darwin.$(OBJEXT) - -rm -f src/x86/darwin.lo - -rm -f src/x86/darwin64.$(OBJEXT) - -rm -f src/x86/darwin64.lo - -rm -f src/x86/ffi.$(OBJEXT) - -rm -f src/x86/ffi.lo - -rm -f src/x86/ffi64.$(OBJEXT) - -rm -f src/x86/ffi64.lo - -rm -f src/x86/freebsd.$(OBJEXT) - -rm -f src/x86/freebsd.lo - -rm -f src/x86/sysv.$(OBJEXT) - -rm -f src/x86/sysv.lo - -rm -f src/x86/unix64.$(OBJEXT) - -rm -f src/x86/unix64.lo - -rm -f src/x86/win32.$(OBJEXT) - -rm -f src/x86/win32.lo - -rm -f src/x86/win64.$(OBJEXT) - -rm -f src/x86/win64.lo + -rm -f src/*.$(OBJEXT) + -rm -f src/*.lo + -rm -f src/aarch64/*.$(OBJEXT) + -rm -f src/aarch64/*.lo + -rm -f src/alpha/*.$(OBJEXT) + -rm -f src/alpha/*.lo + -rm -f src/arm/*.$(OBJEXT) + -rm -f src/arm/*.lo + -rm -f 
src/avr32/*.$(OBJEXT) + -rm -f src/avr32/*.lo + -rm -f src/bfin/*.$(OBJEXT) + -rm -f src/bfin/*.lo + -rm -f src/cris/*.$(OBJEXT) + -rm -f src/cris/*.lo + -rm -f src/frv/*.$(OBJEXT) + -rm -f src/frv/*.lo + -rm -f src/ia64/*.$(OBJEXT) + -rm -f src/ia64/*.lo + -rm -f src/m32r/*.$(OBJEXT) + -rm -f src/m32r/*.lo + -rm -f src/m68k/*.$(OBJEXT) + -rm -f src/m68k/*.lo + -rm -f src/microblaze/*.$(OBJEXT) + -rm -f src/microblaze/*.lo + -rm -f src/mips/*.$(OBJEXT) + -rm -f src/mips/*.lo + -rm -f src/moxie/*.$(OBJEXT) + -rm -f src/moxie/*.lo + -rm -f src/pa/*.$(OBJEXT) + -rm -f src/pa/*.lo + -rm -f src/powerpc/*.$(OBJEXT) + -rm -f src/powerpc/*.lo + -rm -f src/s390/*.$(OBJEXT) + -rm -f src/s390/*.lo + -rm -f src/sh/*.$(OBJEXT) + -rm -f src/sh/*.lo + -rm -f src/sh64/*.$(OBJEXT) + -rm -f src/sh64/*.lo + -rm -f src/sparc/*.$(OBJEXT) + -rm -f src/sparc/*.lo + -rm -f src/tile/*.$(OBJEXT) + -rm -f src/tile/*.lo + -rm -f src/x86/*.$(OBJEXT) + -rm -f src/x86/*.lo + -rm -f src/xtensa/*.$(OBJEXT) + -rm -f src/xtensa/*.lo distclean-compile: -rm -f *.tab.c @@ -964,6 +1006,8 @@ @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/prep_cif.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/raw_api.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/types.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/aarch64/$(DEPDIR)/ffi.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/aarch64/$(DEPDIR)/sysv.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/alpha/$(DEPDIR)/ffi.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/alpha/$(DEPDIR)/osf.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/arm/$(DEPDIR)/ffi.Plo@am__quote@ @@ -971,6 +1015,8 @@ @AMDEP_TRUE@@am__include@ @am__quote@src/arm/$(DEPDIR)/trampoline.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/avr32/$(DEPDIR)/ffi.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/avr32/$(DEPDIR)/sysv.Plo@am__quote@
+@AMDEP_TRUE@@am__include@ @am__quote@src/bfin/$(DEPDIR)/ffi.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/bfin/$(DEPDIR)/sysv.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/cris/$(DEPDIR)/ffi.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/cris/$(DEPDIR)/sysv.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/frv/$(DEPDIR)/eabi.Plo@am__quote@ @@ -981,6 +1027,8 @@ @AMDEP_TRUE@@am__include@ @am__quote@src/m32r/$(DEPDIR)/sysv.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/m68k/$(DEPDIR)/ffi.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/m68k/$(DEPDIR)/sysv.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/microblaze/$(DEPDIR)/ffi.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/microblaze/$(DEPDIR)/sysv.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/mips/$(DEPDIR)/ffi.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/mips/$(DEPDIR)/n32.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/mips/$(DEPDIR)/o32.Plo@am__quote@ @@ -1008,6 +1056,8 @@ @AMDEP_TRUE@@am__include@ @am__quote@src/sparc/$(DEPDIR)/ffi.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/sparc/$(DEPDIR)/v8.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/sparc/$(DEPDIR)/v9.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/tile/$(DEPDIR)/ffi.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/tile/$(DEPDIR)/tile.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/x86/$(DEPDIR)/darwin.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/x86/$(DEPDIR)/darwin64.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/x86/$(DEPDIR)/ffi.Plo@am__quote@ @@ -1017,6 +1067,8 @@ @AMDEP_TRUE@@am__include@ @am__quote@src/x86/$(DEPDIR)/unix64.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/x86/$(DEPDIR)/win32.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@src/x86/$(DEPDIR)/win64.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/xtensa/$(DEPDIR)/ffi.Plo@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@src/xtensa/$(DEPDIR)/sysv.Plo@am__quote@ .S.o: @am__fastdepCCAS_TRUE@ depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\ @@ -1072,14 +1124,17 @@ clean-libtool: -rm -rf .libs _libs -rm -rf src/.libs src/_libs + -rm -rf src/aarch64/.libs src/aarch64/_libs -rm -rf src/alpha/.libs src/alpha/_libs -rm -rf src/arm/.libs src/arm/_libs -rm -rf src/avr32/.libs src/avr32/_libs + -rm -rf src/bfin/.libs src/bfin/_libs -rm -rf src/cris/.libs src/cris/_libs -rm -rf src/frv/.libs src/frv/_libs -rm -rf src/ia64/.libs src/ia64/_libs -rm -rf src/m32r/.libs src/m32r/_libs -rm -rf src/m68k/.libs src/m68k/_libs + -rm -rf src/microblaze/.libs src/microblaze/_libs -rm -rf src/mips/.libs src/mips/_libs -rm -rf src/moxie/.libs src/moxie/_libs -rm -rf src/pa/.libs src/pa/_libs @@ -1088,7 +1143,9 @@ -rm -rf src/sh/.libs src/sh/_libs -rm -rf src/sh64/.libs src/sh64/_libs -rm -rf src/sparc/.libs src/sparc/_libs + -rm -rf src/tile/.libs src/tile/_libs -rm -rf src/x86/.libs src/x86/_libs + -rm -rf src/xtensa/.libs src/xtensa/_libs distclean-libtool: -rm -f libtool config.lt @@ -1121,12 +1178,12 @@ doc/libffi.dvi: doc/libffi.texi $(srcdir)/doc/version.texi doc/$(am__dirstamp) TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I doc -I $(srcdir)/doc' \ - $(TEXI2DVI) -o $@ `test -f 'doc/libffi.texi' || echo '$(srcdir)/'`doc/libffi.texi + $(TEXI2DVI) --clean -o $@ `test -f 'doc/libffi.texi' || echo '$(srcdir)/'`doc/libffi.texi doc/libffi.pdf: doc/libffi.texi $(srcdir)/doc/version.texi doc/$(am__dirstamp) TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I doc -I $(srcdir)/doc' \ - $(TEXI2PDF) -o $@ `test -f 'doc/libffi.texi' ||
echo '$(srcdir)/'`doc/libffi.texi + $(TEXI2PDF) --clean -o $@ `test -f 'doc/libffi.texi' || echo '$(srcdir)/'`doc/libffi.texi doc/libffi.html: doc/libffi.texi $(srcdir)/doc/version.texi doc/$(am__dirstamp) rm -rf $(@:.html=.htp) @@ -1163,7 +1220,7 @@ @MAINTAINER_MODE_TRUE@ -rm -f $(srcdir)/doc/stamp-vti $(srcdir)/doc/version.texi .dvi.ps: TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ - $(DVIPS) -o $@ $< + $(DVIPS) -o $@ $< uninstall-dvi-am: @$(NORMAL_UNINSTALL) @@ -1185,9 +1242,7 @@ uninstall-info-am: @$(PRE_UNINSTALL) - @if test -d '$(DESTDIR)$(infodir)' && \ - (install-info --version && \ - install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \ + @if test -d '$(DESTDIR)$(infodir)' && $(am__can_run_installinfo); then \ list='$(INFO_DEPS)'; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ @@ -1259,8 +1314,11 @@ done install-pkgconfigDATA: $(pkgconfig_DATA) @$(NORMAL_INSTALL) - test -z "$(pkgconfigdir)" || $(MKDIR_P) "$(DESTDIR)$(pkgconfigdir)" @list='$(pkgconfig_DATA)'; test -n "$(pkgconfigdir)" || list=; \ + if test -n "$$list"; then \ + echo " $(MKDIR_P) '$(DESTDIR)$(pkgconfigdir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(pkgconfigdir)" || exit 1; \ + fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ @@ -1277,12 +1335,12 @@ dir='$(DESTDIR)$(pkgconfigdir)'; $(am__uninstall_files_from_dir) # This directory's subdirectories are mostly independent; you can cd -# into them and run `make' without going through this Makefile. -# To change the values of `make' variables: instead of editing Makefiles, -# (1) if the variable is set in `config.status', edit `config.status' -# (which will cause the Makefiles to be regenerated when you run `make'); -# (2) otherwise, pass the desired values on the `make' command line. -$(RECURSIVE_TARGETS): +# into them and run 'make' without going through this Makefile. 
+# To change the values of 'make' variables: instead of editing Makefiles, +# (1) if the variable is set in 'config.status', edit 'config.status' +# (which will cause the Makefiles to be regenerated when you run 'make'); +# (2) otherwise, pass the desired values on the 'make' command line. +$(RECURSIVE_TARGETS) $(RECURSIVE_CLEAN_TARGETS): @fail= failcom='exit 1'; \ for f in x $$MAKEFLAGS; do \ case $$f in \ @@ -1292,7 +1350,11 @@ done; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ - list='$(SUBDIRS)'; for subdir in $$list; do \ + case "$@" in \ + distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ + *) list='$(SUBDIRS)' ;; \ + esac; \ + for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ @@ -1306,37 +1368,6 @@ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" - -$(RECURSIVE_CLEAN_TARGETS): - @fail= failcom='exit 1'; \ - for f in x $$MAKEFLAGS; do \ - case $$f in \ - *=* | --[!k]*);; \ - *k*) failcom='fail=yes';; \ - esac; \ - done; \ - dot_seen=no; \ - case "$@" in \ - distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ - *) list='$(SUBDIRS)' ;; \ - esac; \ - rev=''; for subdir in $$list; do \ - if test "$$subdir" = "."; then :; else \ - rev="$$subdir $$rev"; \ - fi; \ - done; \ - rev="$$rev ."; \ - target=`echo $@ | sed s/-recursive//`; \ - for subdir in $$rev; do \ - echo "Making $$target in $$subdir"; \ - if test "$$subdir" = "."; then \ - local_target="$$target-am"; \ - else \ - local_target="$$target"; \ - fi; \ - ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ - || eval $$failcom; \ - done && test -z "$$fail" tags-recursive: list='$(SUBDIRS)'; for subdir in $$list; do \ test "$$subdir" = . || ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \ @@ -1345,6 +1376,10 @@ list='$(SUBDIRS)'; for subdir in $$list; do \ test "$$subdir" = . 
|| ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \ done +cscopelist-recursive: + list='$(SUBDIRS)'; for subdir in $$list; do \ + test "$$subdir" = . || ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) cscopelist); \ + done ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES) list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ @@ -1408,8 +1443,32 @@ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" +cscope: cscope.files + test ! -s cscope.files \ + || $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS) + +clean-cscope: + -rm -f cscope.files + +cscope.files: clean-cscope cscopelist-recursive cscopelist + +cscopelist: cscopelist-recursive $(HEADERS) $(SOURCES) $(LISP) + list='$(SOURCES) $(HEADERS) $(LISP)'; \ + case "$(srcdir)" in \ + [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ + *) sdir=$(subdir)/$(srcdir) ;; \ + esac; \ + for i in $$list; do \ + if test -f "$$i"; then \ + echo "$(subdir)/$$i"; \ + else \ + echo "$$sdir/$$i"; \ + fi; \ + done >> $(top_builddir)/cscope.files + distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags + -rm -f cscope.out cscope.in.out cscope.po.out cscope.files distdir: $(DISTFILES) $(am__remove_distdir) @@ -1445,13 +1504,10 @@ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ - test -d "$(distdir)/$$subdir" \ - || $(MKDIR_P) "$(distdir)/$$subdir" \ - || exit 1; \ - fi; \ - done - @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ - if test "$$subdir" = .; then :; else \ + $(am__make_dryrun) \ + || test -d "$(distdir)/$$subdir" \ + || $(MKDIR_P) "$(distdir)/$$subdir" \ + || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ @@ -1483,40 +1539,36 @@ || chmod -R a+r "$(distdir)" dist-gzip: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz - $(am__remove_distdir) + $(am__post_remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} 
bzip2 -c >$(distdir).tar.bz2 - $(am__remove_distdir) + $(am__post_remove_distdir) dist-lzip: distdir tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz - $(am__remove_distdir) - -dist-lzma: distdir - tardir=$(distdir) && $(am__tar) | lzma -9 -c >$(distdir).tar.lzma - $(am__remove_distdir) + $(am__post_remove_distdir) dist-xz: distdir tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz - $(am__remove_distdir) + $(am__post_remove_distdir) dist-tarZ: distdir tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z - $(am__remove_distdir) + $(am__post_remove_distdir) dist-shar: distdir shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz - $(am__remove_distdir) + $(am__post_remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) - $(am__remove_distdir) + $(am__post_remove_distdir) -dist dist-all: distdir - tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz - $(am__remove_distdir) +dist dist-all: + $(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:' + $(am__post_remove_distdir) # This target untars the dist file and tries a VPATH configuration. 
Then # it guarantees that the distribution is self-contained by making another @@ -1527,8 +1579,6 @@ GZIP=$(GZIP_ENV) gzip -dc $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\ - *.tar.lzma*) \ - lzma -dc $(distdir).tar.lzma | $(am__untar) ;;\ *.tar.lz*) \ lzip -dc $(distdir).tar.lz | $(am__untar) ;;\ *.tar.xz*) \ @@ -1540,7 +1590,7 @@ *.zip*) \ unzip $(distdir).zip ;;\ esac - chmod -R a-w $(distdir); chmod a+w $(distdir) + chmod -R a-w $(distdir); chmod u+w $(distdir) mkdir $(distdir)/_build mkdir $(distdir)/_inst chmod a-w $(distdir) @@ -1574,7 +1624,7 @@ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \ && cd "$$am__cwd" \ || exit 1 - $(am__remove_distdir) + $(am__post_remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x' @@ -1609,7 +1659,7 @@ all-am: Makefile $(INFO_DEPS) $(LTLIBRARIES) $(DATA) fficonfig.h installdirs: installdirs-recursive installdirs-am: - for dir in "$(DESTDIR)$(libdir)" "$(DESTDIR)$(infodir)" "$(DESTDIR)$(pkgconfigdir)"; do \ + for dir in "$(DESTDIR)$(toolexeclibdir)" "$(DESTDIR)$(infodir)" "$(DESTDIR)$(pkgconfigdir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-recursive @@ -1641,12 +1691,16 @@ -rm -f doc/$(am__dirstamp) -rm -f src/$(DEPDIR)/$(am__dirstamp) -rm -f src/$(am__dirstamp) + -rm -f src/aarch64/$(DEPDIR)/$(am__dirstamp) + -rm -f src/aarch64/$(am__dirstamp) -rm -f src/alpha/$(DEPDIR)/$(am__dirstamp) -rm -f src/alpha/$(am__dirstamp) -rm -f src/arm/$(DEPDIR)/$(am__dirstamp) -rm -f src/arm/$(am__dirstamp) -rm -f src/avr32/$(DEPDIR)/$(am__dirstamp) -rm -f src/avr32/$(am__dirstamp) + -rm -f src/bfin/$(DEPDIR)/$(am__dirstamp) + -rm -f src/bfin/$(am__dirstamp) -rm -f src/cris/$(DEPDIR)/$(am__dirstamp) -rm -f src/cris/$(am__dirstamp) -rm -f src/frv/$(DEPDIR)/$(am__dirstamp) @@ -1657,6 +1711,8 @@ -rm -f src/m32r/$(am__dirstamp) -rm -f 
src/m68k/$(DEPDIR)/$(am__dirstamp) -rm -f src/m68k/$(am__dirstamp) + -rm -f src/microblaze/$(DEPDIR)/$(am__dirstamp) + -rm -f src/microblaze/$(am__dirstamp) -rm -f src/mips/$(DEPDIR)/$(am__dirstamp) -rm -f src/mips/$(am__dirstamp) -rm -f src/moxie/$(DEPDIR)/$(am__dirstamp) @@ -1673,20 +1729,25 @@ -rm -f src/sh64/$(am__dirstamp) -rm -f src/sparc/$(DEPDIR)/$(am__dirstamp) -rm -f src/sparc/$(am__dirstamp) + -rm -f src/tile/$(DEPDIR)/$(am__dirstamp) + -rm -f src/tile/$(am__dirstamp) -rm -f src/x86/$(DEPDIR)/$(am__dirstamp) -rm -f src/x86/$(am__dirstamp) + -rm -f src/xtensa/$(DEPDIR)/$(am__dirstamp) + -rm -f src/xtensa/$(am__dirstamp) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive -clean-am: clean-aminfo clean-generic clean-libLTLIBRARIES \ - clean-libtool clean-noinstLTLIBRARIES mostlyclean-am +clean-am: clean-aminfo clean-generic clean-libtool \ + clean-noinstLTLIBRARIES clean-toolexeclibLTLIBRARIES \ + mostlyclean-am distclean: distclean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) - -rm -rf src/$(DEPDIR) src/alpha/$(DEPDIR) src/arm/$(DEPDIR) src/avr32/$(DEPDIR) src/cris/$(DEPDIR) src/frv/$(DEPDIR) src/ia64/$(DEPDIR) src/m32r/$(DEPDIR) src/m68k/$(DEPDIR) src/mips/$(DEPDIR) src/moxie/$(DEPDIR) src/pa/$(DEPDIR) src/powerpc/$(DEPDIR) src/s390/$(DEPDIR) src/sh/$(DEPDIR) src/sh64/$(DEPDIR) src/sparc/$(DEPDIR) src/x86/$(DEPDIR) + -rm -rf src/$(DEPDIR) src/aarch64/$(DEPDIR) src/alpha/$(DEPDIR) src/arm/$(DEPDIR) src/avr32/$(DEPDIR) src/bfin/$(DEPDIR) src/cris/$(DEPDIR) src/frv/$(DEPDIR) src/ia64/$(DEPDIR) src/m32r/$(DEPDIR) src/m68k/$(DEPDIR) src/microblaze/$(DEPDIR) src/mips/$(DEPDIR) src/moxie/$(DEPDIR) src/pa/$(DEPDIR) src/powerpc/$(DEPDIR) src/s390/$(DEPDIR) src/sh/$(DEPDIR) src/sh64/$(DEPDIR) src/sparc/$(DEPDIR) src/tile/$(DEPDIR) src/x86/$(DEPDIR) src/xtensa/$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ 
distclean-hdr distclean-libtool distclean-tags @@ -1709,8 +1770,11 @@ install-dvi-am: $(DVIS) @$(NORMAL_INSTALL) - test -z "$(dvidir)" || $(MKDIR_P) "$(DESTDIR)$(dvidir)" @list='$(DVIS)'; test -n "$(dvidir)" || list=; \ + if test -n "$$list"; then \ + echo " $(MKDIR_P) '$(DESTDIR)$(dvidir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(dvidir)" || exit 1; \ + fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ @@ -1719,12 +1783,17 @@ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(dvidir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(dvidir)" || exit $$?; \ done -install-exec-am: install-libLTLIBRARIES +install-exec-am: install-toolexeclibLTLIBRARIES + +install-html: install-html-recursive install-html-am: $(HTMLS) @$(NORMAL_INSTALL) - test -z "$(htmldir)" || $(MKDIR_P) "$(DESTDIR)$(htmldir)" @list='$(HTMLS)'; list2=; test -n "$(htmldir)" || list=; \ + if test -n "$$list"; then \ + echo " $(MKDIR_P) '$(DESTDIR)$(htmldir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(htmldir)" || exit 1; \ + fi; \ for p in $$list; do \ if test -f "$$p" || test -d "$$p"; then d=; else d="$(srcdir)/"; fi; \ $(am__strip_dir) \ @@ -1747,9 +1816,12 @@ install-info-am: $(INFO_DEPS) @$(NORMAL_INSTALL) - test -z "$(infodir)" || $(MKDIR_P) "$(DESTDIR)$(infodir)" @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ list='$(INFO_DEPS)'; test -n "$(infodir)" || list=; \ + if test -n "$$list"; then \ + echo " $(MKDIR_P) '$(DESTDIR)$(infodir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(infodir)" || exit 1; \ + fi; \ for file in $$list; do \ case $$file in \ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \ @@ -1767,13 +1839,7 @@ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(infodir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(infodir)" || exit $$?; done @$(POST_INSTALL) - @am__run_installinfo=yes; \ - case $$AM_UPDATE_INFO_DIR in \ - n|no|NO) am__run_installinfo=no;; \ - *) (install-info --version) >/dev/null 2>&1 \ - || am__run_installinfo=no;; \ - esac; \ - if test $$am__run_installinfo = 
yes; then \ + @if $(am__can_run_installinfo); then \ list='$(INFO_DEPS)'; test -n "$(infodir)" || list=; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ @@ -1783,10 +1849,15 @@ else : ; fi install-man: +install-pdf: install-pdf-recursive + install-pdf-am: $(PDFS) @$(NORMAL_INSTALL) - test -z "$(pdfdir)" || $(MKDIR_P) "$(DESTDIR)$(pdfdir)" @list='$(PDFS)'; test -n "$(pdfdir)" || list=; \ + if test -n "$$list"; then \ + echo " $(MKDIR_P) '$(DESTDIR)$(pdfdir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(pdfdir)" || exit 1; \ + fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ @@ -1798,8 +1869,11 @@ install-ps-am: $(PSS) @$(NORMAL_INSTALL) - test -z "$(psdir)" || $(MKDIR_P) "$(DESTDIR)$(psdir)" @list='$(PSS)'; test -n "$(psdir)" || list=; \ + if test -n "$$list"; then \ + echo " $(MKDIR_P) '$(DESTDIR)$(psdir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(psdir)" || exit 1; \ + fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ @@ -1812,7 +1886,7 @@ maintainer-clean: maintainer-clean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache - -rm -rf src/$(DEPDIR) src/alpha/$(DEPDIR) src/arm/$(DEPDIR) src/avr32/$(DEPDIR) src/cris/$(DEPDIR) src/frv/$(DEPDIR) src/ia64/$(DEPDIR) src/m32r/$(DEPDIR) src/m68k/$(DEPDIR) src/mips/$(DEPDIR) src/moxie/$(DEPDIR) src/pa/$(DEPDIR) src/powerpc/$(DEPDIR) src/s390/$(DEPDIR) src/sh/$(DEPDIR) src/sh64/$(DEPDIR) src/sparc/$(DEPDIR) src/x86/$(DEPDIR) + -rm -rf src/$(DEPDIR) src/aarch64/$(DEPDIR) src/alpha/$(DEPDIR) src/arm/$(DEPDIR) src/avr32/$(DEPDIR) src/bfin/$(DEPDIR) src/cris/$(DEPDIR) src/frv/$(DEPDIR) src/ia64/$(DEPDIR) src/m32r/$(DEPDIR) src/m68k/$(DEPDIR) src/microblaze/$(DEPDIR) src/mips/$(DEPDIR) src/moxie/$(DEPDIR) src/pa/$(DEPDIR) src/powerpc/$(DEPDIR) src/s390/$(DEPDIR) src/sh/$(DEPDIR) src/sh64/$(DEPDIR) src/sparc/$(DEPDIR) src/tile/$(DEPDIR) src/x86/$(DEPDIR) src/xtensa/$(DEPDIR) -rm -f Makefile 
maintainer-clean-am: distclean-am maintainer-clean-aminfo \ maintainer-clean-generic maintainer-clean-vti @@ -1831,41 +1905,39 @@ ps-am: $(PSS) uninstall-am: uninstall-dvi-am uninstall-html-am uninstall-info-am \ - uninstall-libLTLIBRARIES uninstall-pdf-am \ - uninstall-pkgconfigDATA uninstall-ps-am + uninstall-pdf-am uninstall-pkgconfigDATA uninstall-ps-am \ + uninstall-toolexeclibLTLIBRARIES .MAKE: $(RECURSIVE_CLEAN_TARGETS) $(RECURSIVE_TARGETS) all \ - ctags-recursive install-am install-strip tags-recursive + cscopelist-recursive ctags-recursive install-am install-strip \ + tags-recursive .PHONY: $(RECURSIVE_CLEAN_TARGETS) $(RECURSIVE_TARGETS) CTAGS GTAGS \ all all-am am--refresh check check-am clean clean-aminfo \ - clean-generic clean-libLTLIBRARIES clean-libtool \ - clean-noinstLTLIBRARIES ctags ctags-recursive dist dist-all \ - dist-bzip2 dist-gzip dist-info dist-lzip dist-lzma dist-shar \ + clean-cscope clean-generic clean-libtool \ + clean-noinstLTLIBRARIES clean-toolexeclibLTLIBRARIES cscope \ + cscopelist cscopelist-recursive ctags ctags-recursive dist \ + dist-all dist-bzip2 dist-gzip dist-info dist-lzip dist-shar \ dist-tarZ dist-xz dist-zip distcheck distclean \ distclean-compile distclean-generic distclean-hdr \ distclean-libtool distclean-tags distcleancheck distdir \ distuninstallcheck dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ - install-html-am install-info install-info-am \ - install-libLTLIBRARIES install-man install-pdf install-pdf-am \ - install-pkgconfigDATA install-ps install-ps-am install-strip \ + install-html-am install-info install-info-am install-man \ + install-pdf install-pdf-am install-pkgconfigDATA install-ps \ + install-ps-am install-strip install-toolexeclibLTLIBRARIES \ installcheck installcheck-am installdirs installdirs-am \ maintainer-clean maintainer-clean-aminfo \ maintainer-clean-generic maintainer-clean-vti 
mostlyclean \ mostlyclean-aminfo mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool mostlyclean-vti pdf pdf-am ps ps-am tags \ tags-recursive uninstall uninstall-am uninstall-dvi-am \ - uninstall-html-am uninstall-info-am uninstall-libLTLIBRARIES \ - uninstall-pdf-am uninstall-pkgconfigDATA uninstall-ps-am + uninstall-html-am uninstall-info-am uninstall-pdf-am \ + uninstall-pkgconfigDATA uninstall-ps-am \ + uninstall-toolexeclibLTLIBRARIES -# No install-html or install-pdf support in automake yet -.PHONY: install-html install-pdf -install-html: -install-pdf: - # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: diff --git a/Modules/_ctypes/libffi/README b/Modules/_ctypes/libffi/README --- a/Modules/_ctypes/libffi/README +++ b/Modules/_ctypes/libffi/README @@ -1,7 +1,7 @@ Status ====== -libffi-3.0.11 was released on April 11, 2012. Check the libffi web +libffi-3.0.12 was released on February 11, 2013. Check the libffi web page for updates: . 
@@ -43,54 +43,69 @@ For specific configuration details and testing status, please refer to the wiki page here: - http://www.moxielogic.org/wiki/index.php?title=Libffi_3.0.11 + http://www.moxielogic.org/wiki/index.php?title=Libffi_3.0.12 At the time of release, the following basic configurations have been tested: -|--------------+------------------| -| Architecture | Operating System | -|--------------+------------------| -| Alpha | Linux | -| Alpha | Tru64 | -| ARM | Linux | -| ARM | iOS | -| AVR32 | Linux | -| HPPA | HPUX | -| IA-64 | Linux | -| M68K | FreeMiNT | -| M68K | RTEMS | -| MIPS | IRIX | -| MIPS | Linux | -| MIPS | RTEMS | -| MIPS64 | Linux | -| PowerPC | AMIGA | -| PowerPC | Linux | -| PowerPC | Mac OSX | -| PowerPC | FreeBSD | -| PowerPC64 | Linux | -| S390 | Linux | -| S390X | Linux | -| SPARC | Linux | -| SPARC | Solaris | -| SPARC64 | Linux | -| SPARC64 | FreeBSD | -| X86 | FreeBSD | -| X86 | Interix | -| X86 | kFreeBSD | -| X86 | Linux | -| X86 | Mac OSX | -| X86 | OpenBSD | -| X86 | OS/2 | -| X86 | Solaris | -| X86 | Windows/Cygwin | -| X86 | Windows/MingW | -| X86-64 | FreeBSD | -| X86-64 | Linux | -| X86-64 | Linux/x32 | -| X86-64 | OpenBSD | -| X86-64 | Windows/MingW | -|--------------+------------------| +|-----------------+------------------+-------------------------| +| Architecture | Operating System | Compiler | +|-----------------+------------------+-------------------------| +| AArch64 | Linux | GCC | +| Alpha | Linux | GCC | +| Alpha | Tru64 | GCC | +| ARM | Linux | GCC | +| ARM | iOS | GCC | +| AVR32 | Linux | GCC | +| Blackfin | uClinux | GCC | +| HPPA | HPUX | GCC | +| IA-64 | Linux | GCC | +| M68K | FreeMiNT | GCC | +| M68K | Linux | GCC | +| M68K | RTEMS | GCC | +| MicroBlaze | Linux | GCC | +| MIPS | IRIX | GCC | +| MIPS | Linux | GCC | +| MIPS | RTEMS | GCC | +| MIPS64 | Linux | GCC | +| Moxie | Bare metal | GCC +| PowerPC 32-bit | AIX | IBM XL C | +| PowerPC 64-bit | AIX | IBM XL C | +| PowerPC | AMIGA | GCC | +| PowerPC | Linux 
| GCC | +| PowerPC | Mac OSX | GCC | +| PowerPC | FreeBSD | GCC | +| PowerPC 64-bit | FreeBSD | GCC | +| PowerPC 64-bit | Linux | GCC | +| S390 | Linux | GCC | +| S390X | Linux | GCC | +| SPARC | Linux | GCC | +| SPARC | Solaris | GCC | +| SPARC | Solaris | Oracle Solaris Studio C | +| SPARC64 | Linux | GCC | +| SPARC64 | FreeBSD | GCC | +| SPARC64 | Solaris | Oracle Solaris Studio C | +| TILE-Gx/TILEPro | Linux | GCC | +| X86 | FreeBSD | GCC | +| X86 | GNU HURD | GCC | +| X86 | Interix | GCC | +| X86 | kFreeBSD | GCC | +| X86 | Linux | GCC | +| X86 | Mac OSX | GCC | +| X86 | OpenBSD | GCC | +| X86 | OS/2 | GCC | +| X86 | Solaris | GCC | +| X86 | Solaris | Oracle Solaris Studio C | +| X86 | Windows/Cygwin | GCC | +| X86 | Windows/MingW | GCC | +| X86-64 | FreeBSD | GCC | +| X86-64 | Linux | GCC | +| X86-64 | Linux/x32 | GCC | +| X86-64 | OpenBSD | GCC | +| X86-64 | Solaris | Oracle Solaris Studio C | +| X86-64 | Windows/MingW | GCC | +| Xtensa | Linux | GCC | +|-----------------+------------------+-------------------------| Please send additional platform test results to libffi-discuss at sourceware.org and feel free to update the wiki page @@ -129,13 +144,12 @@ that sets 'fix_srcfile_path' to a 'cygpath' command. ('cygpath' is not present in MingW, and is not required when using MingW-style paths.) -For iOS builds, run generate-ios-source-and-headers.py and then -libffi.xcodeproj should work. +For iOS builds, the 'libffi.xcodeproj' Xcode project is available. Configure has many other options. Use "configure --help" to see them all. Once configure has finished, type "make". Note that you must be using -GNU make. You can ftp GNU make from prep.ai.mit.edu:/pub/gnu. +GNU make. You can ftp GNU make from ftp.gnu.org:/pub/gnu/make . To ensure that libffi is working as advertised, type "make check". This will require that you have DejaGNU installed. @@ -148,16 +162,29 @@ See the ChangeLog files for details. +3.0.12 Feb-11-13 + Add Moxie support. + Add AArch64 support. 
+ Add Blackfin support. + Add TILE-Gx/TILEPro support. + Add MicroBlaze support. + Add Xtensa support. + Add support for PaX enabled kernels with MPROTECT. + Add support for native vendor compilers on + Solaris and AIX. + Work around LLVM/GCC interoperability issue on x86_64. + 3.0.11 Apr-11-12 - Add support for variadic functions (ffi_prep_cif_var). + Lots of build fixes. + Add Amiga newer MacOS support. + Add support for variadic functions (ffi_prep_cif_var). Add Linux/x32 support. Add thiscall, fastcall and MSVC cdecl support on Windows. - Add Amiga and newer MacOS support. + Add Amiga and newer MacOS support. Add m68k FreeMiNT support. Integration with iOS' xcode build tools. Fix Octeon and MC68881 support. Fix code pessimizations. - Lots of build fixes. 3.0.10 Aug-23-11 Add support for Apple's iOS. @@ -301,7 +328,7 @@ Authors & Credits ================= -libffi was originally written by Anthony Green . +libffi was originally written by Anthony Green . The developers of the GNU Compiler Collection project have made innumerable valuable contributions. See the ChangeLog file for @@ -316,15 +343,19 @@ Major processor architecture ports were contributed by the following developers: +aarch64 Marcus Shawcroft, James Greenhalgh alpha Richard Henderson arm Raffaele Sena +blackfin Alexandre Keunecke I. de Mendonca cris Simon Posnjak, Hans-Peter Nilsson frv Anthony Green ia64 Hans Boehm m32r Kazuhiro Inaoka m68k Andreas Schwab +microblaze Nathan Rossi mips Anthony Green, Casey Marshall mips64 David Daney +moxie Anthony Green pa Randolph Chung, Dave Anglin, Andreas Tobler powerpc Geoffrey Keating, Andreas Tobler, David Edelsohn, John Hornkvist @@ -333,8 +364,10 @@ sh Kaz Kojima sh64 Kaz Kojima sparc Anthony Green, Gordon Irlam +tile-gx/tilepro Walter Lee x86 Anthony Green, Jon Beniston x86-64 Bo Thorsen +xtensa Chris Zankel Jesper Skov and Andrew Haley both did more than their fair share of stepping through the code and tracking down bugs. 
diff --git a/Modules/_ctypes/libffi/aclocal.m4 b/Modules/_ctypes/libffi/aclocal.m4 --- a/Modules/_ctypes/libffi/aclocal.m4 +++ b/Modules/_ctypes/libffi/aclocal.m4 @@ -1,8 +1,7 @@ -# generated automatically by aclocal 1.11.3 -*- Autoconf -*- +# generated automatically by aclocal 1.12.2 -*- Autoconf -*- -# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, -# 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, -# Inc. +# Copyright (C) 1996-2012 Free Software Foundation, Inc. + # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. @@ -14,11 +13,11 @@ m4_ifndef([AC_AUTOCONF_VERSION], [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl -m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.68],, -[m4_warning([this file was generated for autoconf 2.68. +m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.69],, +[m4_warning([this file was generated for autoconf 2.69. You have another version of autoconf. It may work, but is not guaranteed to. If you have problems, you may need to regenerate the build system entirely. -To do so, use the procedure documented by the package, typically `autoreconf'.])]) +To do so, use the procedure documented by the package, typically 'autoreconf'.])]) # ltdl.m4 - Configure ltdl for the target system. -*-Autoconf-*- # @@ -515,7 +514,7 @@ # at 6.2 and later dlopen does load deplibs. lt_cv_sys_dlopen_deplibs=yes ;; - netbsd* | netbsdelf*-gnu) + netbsd*) lt_cv_sys_dlopen_deplibs=yes ;; openbsd*) @@ -838,14 +837,13 @@ dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LTDL_DLSYM_USCORE], []) -# Copyright (C) 2002, 2003, 2005, 2006, 2007, 2008, 2011 Free Software -# Foundation, Inc. +# Copyright (C) 2002-2012 Free Software Foundation, Inc. 
# # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 1 +# serial 8 # AM_AUTOMAKE_VERSION(VERSION) # ---------------------------- @@ -853,10 +851,10 @@ # generated from the m4 files accompanying Automake X.Y. # (This private macro should not be called outside this file.) AC_DEFUN([AM_AUTOMAKE_VERSION], -[am__api_version='1.11' +[am__api_version='1.12' dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to dnl require some minimum version. Point them to the right macro. -m4_if([$1], [1.11.3], [], +m4_if([$1], [1.12.2], [], [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl ]) @@ -872,14 +870,14 @@ # Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced. # This function is AC_REQUIREd by AM_INIT_AUTOMAKE. AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION], -[AM_AUTOMAKE_VERSION([1.11.3])dnl +[AM_AUTOMAKE_VERSION([1.12.2])dnl m4_ifndef([AC_AUTOCONF_VERSION], [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl _AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))]) # Figure out how to run the assembler. -*- Autoconf -*- -# Copyright (C) 2001, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. +# Copyright (C) 2001-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, @@ -901,17 +899,17 @@ # AM_AUX_DIR_EXPAND -*- Autoconf -*- -# Copyright (C) 2001, 2003, 2005, 2011 Free Software Foundation, Inc. +# Copyright (C) 2001-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 1 +# serial 2 # For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets -# $ac_aux_dir to `$srcdir/foo'. 
In other projects, it is set to -# `$srcdir', `$srcdir/..', or `$srcdir/../..'. +# $ac_aux_dir to '$srcdir/foo'. In other projects, it is set to +# '$srcdir', '$srcdir/..', or '$srcdir/../..'. # # Of course, Automake must honor this variable whenever it calls a # tool from the auxiliary directory. The problem is that $srcdir (and @@ -930,7 +928,7 @@ # # The reason of the latter failure is that $top_srcdir and $ac_aux_dir # are both prefixed by $srcdir. In an in-source build this is usually -# harmless because $srcdir is `.', but things will broke when you +# harmless because $srcdir is '.', but things will broke when you # start a VPATH build or use an absolute $srcdir. # # So we could use something similar to $top_srcdir/$ac_aux_dir/missing, @@ -956,22 +954,21 @@ # AM_CONDITIONAL -*- Autoconf -*- -# Copyright (C) 1997, 2000, 2001, 2003, 2004, 2005, 2006, 2008 -# Free Software Foundation, Inc. +# Copyright (C) 1997-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 9 +# serial 10 # AM_CONDITIONAL(NAME, SHELL-CONDITION) # ------------------------------------- # Define a conditional. AC_DEFUN([AM_CONDITIONAL], -[AC_PREREQ(2.52)dnl - ifelse([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])], - [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl +[AC_PREREQ([2.52])dnl + m4_if([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])], + [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl AC_SUBST([$1_TRUE])dnl AC_SUBST([$1_FALSE])dnl _AM_SUBST_NOTMAKE([$1_TRUE])dnl @@ -990,16 +987,15 @@ Usually this means the macro was only invoked conditionally.]]) fi])]) -# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2009, -# 2010, 2011 Free Software Foundation, Inc. +# Copyright (C) 1999-2012 Free Software Foundation, Inc. 
# # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 12 +# serial 17 -# There are a few dirty hacks below to avoid letting `AC_PROG_CC' be +# There are a few dirty hacks below to avoid letting 'AC_PROG_CC' be # written in clear, in which case automake, when reading aclocal.m4, # will think it sees a *use*, and therefore will trigger all it's # C support machinery. Also note that it means that autoscan, seeing @@ -1009,7 +1005,7 @@ # _AM_DEPENDENCIES(NAME) # ---------------------- # See how the compiler implements dependency checking. -# NAME is "CC", "CXX", "GCJ", or "OBJC". +# NAME is "CC", "CXX", "OBJC", "OBJCXX", "UPC", or "GJC". # We try a few techniques and use that to set a single cache variable. # # We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was @@ -1022,12 +1018,13 @@ AC_REQUIRE([AM_MAKE_INCLUDE])dnl AC_REQUIRE([AM_DEP_TRACK])dnl -ifelse([$1], CC, [depcc="$CC" am_compiler_list=], - [$1], CXX, [depcc="$CXX" am_compiler_list=], - [$1], OBJC, [depcc="$OBJC" am_compiler_list='gcc3 gcc'], - [$1], UPC, [depcc="$UPC" am_compiler_list=], - [$1], GCJ, [depcc="$GCJ" am_compiler_list='gcc3 gcc'], - [depcc="$$1" am_compiler_list=]) +m4_if([$1], [CC], [depcc="$CC" am_compiler_list=], + [$1], [CXX], [depcc="$CXX" am_compiler_list=], + [$1], [OBJC], [depcc="$OBJC" am_compiler_list='gcc3 gcc'], + [$1], [OBJCXX], [depcc="$OBJCXX" am_compiler_list='gcc3 gcc'], + [$1], [UPC], [depcc="$UPC" am_compiler_list=], + [$1], [GCJ], [depcc="$GCJ" am_compiler_list='gcc3 gcc'], + [depcc="$$1" am_compiler_list=]) AC_CACHE_CHECK([dependency style of $depcc], [am_cv_$1_dependencies_compiler_type], @@ -1035,8 +1032,8 @@ # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. 
For # instance it was reported that on HP-UX the gcc test will end up - # making a dummy file named `D' -- because `-MD' means `put the output - # in D'. + # making a dummy file named 'D' -- because '-MD' means "put the output + # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're @@ -1076,16 +1073,16 @@ : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c - # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with - # Solaris 8's {/usr,}/bin/sh. - touch sub/conftst$i.h + # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with + # Solaris 10 /bin/sh. + echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf - # We check with `-c' and `-o' for the sake of the "dashmstdout" + # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly - # handle `-M -o', and we need to detect this. Also, some Intel - # versions had trouble with output in subdirs + # handle '-M -o', and we need to detect this. Also, some Intel + # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in @@ -1094,8 +1091,8 @@ test "$am__universal" = false || continue ;; nosideeffect) - # after this tag, mechanisms are not by side-effect, so they'll - # only be used when explicitly requested + # After this tag, mechanisms are not by side-effect, so they'll + # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else @@ -1103,7 +1100,7 @@ fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) - # This compiler won't grok `-c -o', but also, the minuso test has + # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. 
am__obj=conftest.${OBJEXT-o} @@ -1151,7 +1148,7 @@ # AM_SET_DEPDIR # ------------- # Choose a directory name for dependency files. -# This macro is AC_REQUIREd in _AM_DEPENDENCIES +# This macro is AC_REQUIREd in _AM_DEPENDENCIES. AC_DEFUN([AM_SET_DEPDIR], [AC_REQUIRE([AM_SET_LEADING_DOT])dnl AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl @@ -1161,9 +1158,13 @@ # AM_DEP_TRACK # ------------ AC_DEFUN([AM_DEP_TRACK], -[AC_ARG_ENABLE(dependency-tracking, -[ --disable-dependency-tracking speeds up one-time build - --enable-dependency-tracking do not reject slow dependency extractors]) +[AC_ARG_ENABLE([dependency-tracking], [dnl +AS_HELP_STRING( + [--enable-dependency-tracking], + [do not reject slow dependency extractors]) +AS_HELP_STRING( + [--disable-dependency-tracking], + [speeds up one-time build])]) if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' @@ -1178,14 +1179,13 @@ # Generate code to set up dependency tracking. -*- Autoconf -*- -# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2008 -# Free Software Foundation, Inc. +# Copyright (C) 1999-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -#serial 5 +# serial 6 # _AM_OUTPUT_DEPENDENCY_COMMANDS # ------------------------------ @@ -1204,7 +1204,7 @@ # Strip MF so we end up with the name of the file. mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. - # We used to match only the files named `Makefile.in', but + # We used to match only the files named 'Makefile.in', but # some people rename them; so instead we look at the file content. # Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. 
@@ -1216,21 +1216,19 @@ continue fi # Extract the definition of DEPDIR, am__include, and am__quote - # from the Makefile without running `make'. + # from the Makefile without running 'make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` - # When using ansi2knr, U may be empty or an underscore; expand it - U=`sed -n 's/^U = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ - sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do + sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`AS_DIRNAME(["$file"])` @@ -1248,7 +1246,7 @@ # This macro should only be invoked once -- use via AC_REQUIRE. # # This code is only required when automatic dependency tracking -# is enabled. FIXME. This creates each `.P' file that we will +# is enabled. FIXME. This creates each '.P' file that we will # need in order to bootstrap the dependency handling code. AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS], [AC_CONFIG_COMMANDS([depfiles], @@ -1258,14 +1256,13 @@ # Do all the work for Automake. -*- Autoconf -*- -# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, -# 2005, 2006, 2008, 2009 Free Software Foundation, Inc. +# Copyright (C) 1996-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 16 +# serial 19 # This macro actually does too much. 
Some checks are only needed if # your package does certain things. But this isn't really a big deal. @@ -1311,31 +1308,41 @@ # Define the identity of the package. dnl Distinguish between old-style and new-style calls. m4_ifval([$2], -[m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl +[AC_DIAGNOSE([obsolete], +[$0: two- and three-arguments forms are deprecated. For more info, see: +http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_INIT_AUTOMAKE-invocation]) +m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl AC_SUBST([PACKAGE], [$1])dnl AC_SUBST([VERSION], [$2])], [_AM_SET_OPTIONS([$1])dnl dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT. -m4_if(m4_ifdef([AC_PACKAGE_NAME], 1)m4_ifdef([AC_PACKAGE_VERSION], 1), 11,, +m4_if( + m4_ifdef([AC_PACKAGE_NAME], [ok]):m4_ifdef([AC_PACKAGE_VERSION], [ok]), + [ok:ok],, [m4_fatal([AC_INIT should be called with package and version arguments])])dnl AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl _AM_IF_OPTION([no-define],, -[AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package]) - AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl +[AC_DEFINE_UNQUOTED([PACKAGE], ["$PACKAGE"], [Name of package]) + AC_DEFINE_UNQUOTED([VERSION], ["$VERSION"], [Version number of package])])dnl # Some tools Automake needs. 
AC_REQUIRE([AM_SANITY_CHECK])dnl AC_REQUIRE([AC_ARG_PROGRAM])dnl -AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version}) -AM_MISSING_PROG(AUTOCONF, autoconf) -AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version}) -AM_MISSING_PROG(AUTOHEADER, autoheader) -AM_MISSING_PROG(MAKEINFO, makeinfo) +AM_MISSING_PROG([ACLOCAL], [aclocal-${am__api_version}]) +AM_MISSING_PROG([AUTOCONF], [autoconf]) +AM_MISSING_PROG([AUTOMAKE], [automake-${am__api_version}]) +AM_MISSING_PROG([AUTOHEADER], [autoheader]) +AM_MISSING_PROG([MAKEINFO], [makeinfo]) AC_REQUIRE([AM_PROG_INSTALL_SH])dnl AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl -AC_REQUIRE([AM_PROG_MKDIR_P])dnl +AC_REQUIRE([AC_PROG_MKDIR_P])dnl +# For better backward compatibility. To be removed once Automake 1.9.x +# dies out for good. For more background, see: +# +# +AC_SUBST([mkdir_p], ['$(MKDIR_P)']) # We need awk for the "check" target. The system "awk" is bad on # some platforms. AC_REQUIRE([AC_PROG_AWK])dnl @@ -1346,28 +1353,35 @@ [_AM_PROG_TAR([v7])])]) _AM_IF_OPTION([no-dependencies],, [AC_PROVIDE_IFELSE([AC_PROG_CC], - [_AM_DEPENDENCIES(CC)], - [define([AC_PROG_CC], - defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl + [_AM_DEPENDENCIES([CC])], + [m4_define([AC_PROG_CC], + m4_defn([AC_PROG_CC])[_AM_DEPENDENCIES([CC])])])dnl AC_PROVIDE_IFELSE([AC_PROG_CXX], - [_AM_DEPENDENCIES(CXX)], - [define([AC_PROG_CXX], - defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl + [_AM_DEPENDENCIES([CXX])], + [m4_define([AC_PROG_CXX], + m4_defn([AC_PROG_CXX])[_AM_DEPENDENCIES([CXX])])])dnl AC_PROVIDE_IFELSE([AC_PROG_OBJC], - [_AM_DEPENDENCIES(OBJC)], - [define([AC_PROG_OBJC], - defn([AC_PROG_OBJC])[_AM_DEPENDENCIES(OBJC)])])dnl + [_AM_DEPENDENCIES([OBJC])], + [m4_define([AC_PROG_OBJC], + m4_defn([AC_PROG_OBJC])[_AM_DEPENDENCIES([OBJC])])])dnl +dnl Support for Objective C++ was only introduced in Autoconf 2.65, +dnl but we still cater to Autoconf 2.62. 
+m4_ifdef([AC_PROG_OBJCXX], +[AC_PROVIDE_IFELSE([AC_PROG_OBJCXX], + [_AM_DEPENDENCIES([OBJCXX])], + [m4_define([AC_PROG_OBJCXX], + m4_defn([AC_PROG_OBJCXX])[_AM_DEPENDENCIES([OBJCXX])])])])dnl ]) _AM_IF_OPTION([silent-rules], [AC_REQUIRE([AM_SILENT_RULES])])dnl -dnl The `parallel-tests' driver may need to know about EXEEXT, so add the -dnl `am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This macro +dnl The 'parallel-tests' driver may need to know about EXEEXT, so add the +dnl 'am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This macro dnl is hooked onto _AC_COMPILER_EXEEXT early, see below. AC_CONFIG_COMMANDS_PRE(dnl [m4_provide_if([_AM_COMPILER_EXEEXT], [AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl ]) -dnl Hook into `_AC_COMPILER_EXEEXT' early to learn its expansion. Do not +dnl Hook into '_AC_COMPILER_EXEEXT' early to learn its expansion. Do not dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further dnl mangled by Autoconf and run in a shell conditional statement. m4_define([_AC_COMPILER_EXEEXT], @@ -1395,14 +1409,13 @@ done echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count]) -# Copyright (C) 2001, 2003, 2005, 2008, 2011 Free Software Foundation, -# Inc. +# Copyright (C) 2001-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 1 +# serial 8 # AM_PROG_INSTALL_SH # ------------------ @@ -1417,9 +1430,9 @@ install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi -AC_SUBST(install_sh)]) +AC_SUBST([install_sh])]) -# Copyright (C) 2003, 2005 Free Software Foundation, Inc. +# Copyright (C) 2003-2012 Free Software Foundation, Inc. 
# # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, @@ -1443,20 +1456,19 @@ # Add --enable-maintainer-mode option to configure. -*- Autoconf -*- # From Jim Meyering -# Copyright (C) 1996, 1998, 2000, 2001, 2002, 2003, 2004, 2005, 2008, -# 2011 Free Software Foundation, Inc. +# Copyright (C) 1996-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 5 +# serial 7 # AM_MAINTAINER_MODE([DEFAULT-MODE]) # ---------------------------------- # Control maintainer-specific portions of Makefiles. -# Default is to disable them, unless `enable' is passed literally. -# For symmetry, `disable' may be passed as well. Anyway, the user +# Default is to disable them, unless 'enable' is passed literally. +# For symmetry, 'disable' may be passed as well. Anyway, the user # can override the default with the --enable/--disable switch. 
AC_DEFUN([AM_MAINTAINER_MODE], [m4_case(m4_default([$1], [disable]), @@ -1467,10 +1479,11 @@ AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles]) dnl maintainer-mode's default is 'disable' unless 'enable' is passed AC_ARG_ENABLE([maintainer-mode], -[ --][am_maintainer_other][-maintainer-mode am_maintainer_other make rules and dependencies not useful - (and sometimes confusing) to the casual installer], - [USE_MAINTAINER_MODE=$enableval], - [USE_MAINTAINER_MODE=]m4_if(am_maintainer_other, [enable], [no], [yes])) + [AS_HELP_STRING([--]am_maintainer_other[-maintainer-mode], + am_maintainer_other[ make rules and dependencies not useful + (and sometimes confusing) to the casual installer])], + [USE_MAINTAINER_MODE=$enableval], + [USE_MAINTAINER_MODE=]m4_if(am_maintainer_other, [enable], [no], [yes])) AC_MSG_RESULT([$USE_MAINTAINER_MODE]) AM_CONDITIONAL([MAINTAINER_MODE], [test $USE_MAINTAINER_MODE = yes]) MAINT=$MAINTAINER_MODE_TRUE @@ -1482,13 +1495,13 @@ # Check to see how 'make' treats includes. -*- Autoconf -*- -# Copyright (C) 2001, 2002, 2003, 2005, 2009 Free Software Foundation, Inc. +# Copyright (C) 2001-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 4 +# serial 5 # AM_MAKE_INCLUDE() # ----------------- @@ -1507,7 +1520,7 @@ _am_result=none # First try GNU make style include. echo "include confinc" > confmf -# Ignore all kinds of additional output from `make'. +# Ignore all kinds of additional output from 'make'. case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include @@ -1532,8 +1545,7 @@ rm -f confinc confmf ]) -# Copyright (C) 1999, 2000, 2001, 2003, 2004, 2005, 2008 -# Free Software Foundation, Inc. +# Copyright (C) 1999-2012 Free Software Foundation, Inc. 
# # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, @@ -1569,14 +1581,13 @@ # Fake the existence of programs that GNU maintainers use. -*- Autoconf -*- -# Copyright (C) 1997, 1999, 2000, 2001, 2003, 2004, 2005, 2008 -# Free Software Foundation, Inc. +# Copyright (C) 1997-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 6 +# serial 7 # AM_MISSING_PROG(NAME, PROGRAM) # ------------------------------ @@ -1606,49 +1617,19 @@ am_missing_run="$MISSING --run " else am_missing_run= - AC_MSG_WARN([`missing' script is too old or missing]) + AC_MSG_WARN(['missing' script is too old or missing]) fi ]) -# Copyright (C) 2003, 2004, 2005, 2006, 2011 Free Software Foundation, -# Inc. +# Helper functions for option handling. -*- Autoconf -*- + +# Copyright (C) 2001-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 1 - -# AM_PROG_MKDIR_P -# --------------- -# Check for `mkdir -p'. -AC_DEFUN([AM_PROG_MKDIR_P], -[AC_PREREQ([2.60])dnl -AC_REQUIRE([AC_PROG_MKDIR_P])dnl -dnl Automake 1.8 to 1.9.6 used to define mkdir_p. We now use MKDIR_P, -dnl while keeping a definition of mkdir_p for backward compatibility. -dnl @MKDIR_P@ is magic: AC_OUTPUT adjusts its value for each Makefile. -dnl However we cannot define mkdir_p as $(MKDIR_P) for the sake of -dnl Makefile.ins that do not define MKDIR_P, so we do our own -dnl adjustment using top_builddir (which is defined more often than -dnl MKDIR_P). 
-AC_SUBST([mkdir_p], ["$MKDIR_P"])dnl -case $mkdir_p in - [[\\/$]]* | ?:[[\\/]]*) ;; - */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; -esac -]) - -# Helper functions for option handling. -*- Autoconf -*- - -# Copyright (C) 2001, 2002, 2003, 2005, 2008, 2010 Free Software -# Foundation, Inc. -# -# This file is free software; the Free Software Foundation -# gives unlimited permission to copy and/or distribute it, -# with or without modifications, as long as this notice is preserved. - -# serial 5 +# serial 6 # _AM_MANGLE_OPTION(NAME) # ----------------------- @@ -1659,7 +1640,7 @@ # -------------------- # Set option NAME. Presently that only means defining a flag for this option. AC_DEFUN([_AM_SET_OPTION], -[m4_define(_AM_MANGLE_OPTION([$1]), 1)]) +[m4_define(_AM_MANGLE_OPTION([$1]), [1])]) # _AM_SET_OPTIONS(OPTIONS) # ------------------------ @@ -1675,22 +1656,18 @@ # Check to make sure that the build environment is sane. -*- Autoconf -*- -# Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005, 2008 -# Free Software Foundation, Inc. +# Copyright (C) 1996-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 5 +# serial 9 # AM_SANITY_CHECK # --------------- AC_DEFUN([AM_SANITY_CHECK], [AC_MSG_CHECKING([whether build environment is sane]) -# Just in case -sleep 1 -echo timestamp > conftest.file # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' @@ -1701,32 +1678,40 @@ esac case $srcdir in *[[\\\"\#\$\&\'\`$am_lf\ \ ]]*) - AC_MSG_ERROR([unsafe srcdir value: `$srcdir']);; + AC_MSG_ERROR([unsafe srcdir value: '$srcdir']);; esac -# Do `set' in a subshell so we don't clobber the current shell's +# Do 'set' in a subshell so we don't clobber the current shell's # arguments. 
Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( - set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` - if test "$[*]" = "X"; then - # -L didn't work. - set X `ls -t "$srcdir/configure" conftest.file` - fi - rm -f conftest.file - if test "$[*]" != "X $srcdir/configure conftest.file" \ - && test "$[*]" != "X conftest.file $srcdir/configure"; then + am_has_slept=no + for am_try in 1 2; do + echo "timestamp, slept: $am_has_slept" > conftest.file + set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` + if test "$[*]" = "X"; then + # -L didn't work. + set X `ls -t "$srcdir/configure" conftest.file` + fi + if test "$[*]" != "X $srcdir/configure conftest.file" \ + && test "$[*]" != "X conftest.file $srcdir/configure"; then - # If neither matched, then we have a broken ls. This can happen - # if, for instance, CONFIG_SHELL is bash and it inherits a - # broken ls alias from the environment. This has actually - # happened. Such a system could not be considered "sane". - AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken -alias in your environment]) - fi - + # If neither matched, then we have a broken ls. This can happen + # if, for instance, CONFIG_SHELL is bash and it inherits a + # broken ls alias from the environment. This has actually + # happened. Such a system could not be considered "sane". + AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken + alias in your environment]) + fi + if test "$[2]" = conftest.file || test $am_try -eq 2; then + break + fi + # Just in case. + sleep 1 + am_has_slept=yes + done test "$[2]" = conftest.file ) then @@ -1736,39 +1721,55 @@ AC_MSG_ERROR([newly created file is older than distributed files! 
Check your system clock]) fi -AC_MSG_RESULT(yes)]) +AC_MSG_RESULT([yes]) +# If we didn't sleep, we still need to ensure time stamps of config.status and +# generated files are strictly newer. +am_sleep_pid= +if grep 'slept: no' conftest.file >/dev/null 2>&1; then + ( sleep 1 ) & + am_sleep_pid=$! +fi +AC_CONFIG_COMMANDS_PRE( + [AC_MSG_CHECKING([that generated files are newer than configure]) + if test -n "$am_sleep_pid"; then + # Hide warnings about reused PIDs. + wait $am_sleep_pid 2>/dev/null + fi + AC_MSG_RESULT([done])]) +rm -f conftest.file +]) -# Copyright (C) 2001, 2003, 2005, 2011 Free Software Foundation, Inc. +# Copyright (C) 2001-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 1 +# serial 2 # AM_PROG_INSTALL_STRIP # --------------------- -# One issue with vendor `install' (even GNU) is that you can't +# One issue with vendor 'install' (even GNU) is that you can't # specify the program used to strip binaries. This is especially # annoying in cross-compiling environments, where the build's strip # is unlikely to handle the host's binaries. # Fortunately install-sh will honor a STRIPPROG variable, so we -# always use install-sh in `make install-strip', and initialize +# always use install-sh in "make install-strip", and initialize # STRIPPROG with the value of the STRIP variable (set by the user). AC_DEFUN([AM_PROG_INSTALL_STRIP], [AC_REQUIRE([AM_PROG_INSTALL_SH])dnl -# Installed binaries are usually stripped using `strip' when the user -# run `make install-strip'. However `strip' might not be the right +# Installed binaries are usually stripped using 'strip' when the user +# run "make install-strip". 
However 'strip' might not be the right # tool to use in cross-compilation environments, therefore Automake -# will honor the `STRIP' environment variable to overrule this program. -dnl Don't test for $cross_compiling = yes, because it might be `maybe'. +# will honor the 'STRIP' environment variable to overrule this program. +dnl Don't test for $cross_compiling = yes, because it might be 'maybe'. if test "$cross_compiling" != no; then AC_CHECK_TOOL([STRIP], [strip], :) fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" AC_SUBST([INSTALL_STRIP_PROGRAM])]) -# Copyright (C) 2006, 2008, 2010 Free Software Foundation, Inc. +# Copyright (C) 2006-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, @@ -1789,18 +1790,18 @@ # Check how to create a tarball. -*- Autoconf -*- -# Copyright (C) 2004, 2005, 2012 Free Software Foundation, Inc. +# Copyright (C) 2004-2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. -# serial 2 +# serial 3 # _AM_PROG_TAR(FORMAT) # -------------------- # Check how to create a tarball in format FORMAT. -# FORMAT should be one of `v7', `ustar', or `pax'. +# FORMAT should be one of 'v7', 'ustar', or 'pax'. # # Substitute a variable $(am__tar) that is a command # writing to stdout a FORMAT-tarball containing the directory @@ -1823,7 +1824,7 @@ _am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none' _am_tools=${am_cv_prog_tar_$1-$_am_tools} # Do not fold the above two line into one, because Tru64 sh and -# Solaris sh will not grok spaces in the rhs of `-'. +# Solaris sh will not grok spaces in the rhs of '-'. 
for _am_tool in $_am_tools do case $_am_tool in diff --git a/Modules/_ctypes/libffi/build-ios.sh b/Modules/_ctypes/libffi/build-ios.sh new file mode 100755 --- /dev/null +++ b/Modules/_ctypes/libffi/build-ios.sh @@ -0,0 +1,67 @@ +#!/bin/sh + +PLATFORM_IOS=/Developer/Platforms/iPhoneOS.platform/ +PLATFORM_IOS_SIM=/Developer/Platforms/iPhoneSimulator.platform/ +SDK_IOS_VERSION="4.2" +MIN_IOS_VERSION="3.0" +OUTPUT_DIR="universal-ios" + +build_target () { + local platform=$1 + local sdk=$2 + local arch=$3 + local triple=$4 + local builddir=$5 + + mkdir -p "${builddir}" + pushd "${builddir}" + export CC="${platform}"/Developer/usr/bin/gcc-4.2 + export CFLAGS="-arch ${arch} -isysroot ${sdk} -miphoneos-version-min=${MIN_IOS_VERSION}" + ../configure --host=${triple} && make + popd +} + +# Build all targets +build_target "${PLATFORM_IOS}" "${PLATFORM_IOS}/Developer/SDKs/iPhoneOS${SDK_IOS_VERSION}.sdk/" armv6 arm-apple-darwin10 armv6-ios +build_target "${PLATFORM_IOS}" "${PLATFORM_IOS}/Developer/SDKs/iPhoneOS${SDK_IOS_VERSION}.sdk/" armv7 arm-apple-darwin10 armv7-ios +build_target "${PLATFORM_IOS_SIM}" "${PLATFORM_IOS_SIM}/Developer/SDKs/iPhoneSimulator${SDK_IOS_VERSION}.sdk/" i386 i386-apple-darwin10 i386-ios-sim + +# Create universal output directories +mkdir -p "${OUTPUT_DIR}" +mkdir -p "${OUTPUT_DIR}/include" +mkdir -p "${OUTPUT_DIR}/include/armv6" +mkdir -p "${OUTPUT_DIR}/include/armv7" +mkdir -p "${OUTPUT_DIR}/include/i386" + +# Create the universal binary +lipo -create armv6-ios/.libs/libffi.a armv7-ios/.libs/libffi.a i386-ios-sim/.libs/libffi.a -output "${OUTPUT_DIR}/libffi.a" + +# Copy in the headers +copy_headers () { + local src=$1 + local dest=$2 + + # Fix non-relative header reference + sed 's/<ffitarget.h>/"ffitarget.h"/' < "${src}/include/ffi.h" > "${dest}/ffi.h" + cp "${src}/include/ffitarget.h" "${dest}" +} + +copy_headers armv6-ios "${OUTPUT_DIR}/include/armv6" +copy_headers armv7-ios "${OUTPUT_DIR}/include/armv7" +copy_headers i386-ios-sim "${OUTPUT_DIR}/include/i386"
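[Annotation] The copy_headers helper in build-ios.sh above rewrites libffi's bracketed ffitarget.h include into a quoted one, so the copied header resolves ffitarget.h relative to its new per-architecture directory instead of via an -I search path. A minimal standalone sketch of that sed rewrite, using scratch file names (ffi_in.h and ffi_out.h are placeholders, not part of the libffi tree):

```shell
# Stand-in for include/ffi.h, which pulls in ffitarget.h via an
# angle-bracket include that only resolves with the right -I path.
printf '#include <ffitarget.h>\n' > ffi_in.h

# Rewrite the bracketed include to a quoted, directory-relative one,
# the same substitution copy_headers applies for each architecture.
sed 's/<ffitarget.h>/"ffitarget.h"/' ffi_in.h > ffi_out.h

cat ffi_out.h   # -> #include "ffitarget.h"
```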
+ +# Create top-level header +( +cat << EOF +#ifdef __arm__ + #include <arm/arch.h> + #ifdef _ARM_ARCH_6 + #include "include/armv6/ffi.h" + #elif _ARM_ARCH_7 + #include "include/armv7/ffi.h" + #endif +#elif defined(__i386__) + #include "include/i386/ffi.h" +#endif +EOF +) > "${OUTPUT_DIR}/ffi.h" diff --git a/Modules/_ctypes/libffi/config.guess b/Modules/_ctypes/libffi/config.guess --- a/Modules/_ctypes/libffi/config.guess +++ b/Modules/_ctypes/libffi/config.guess @@ -2,13 +2,13 @@ # Attempt to guess a canonical system name. # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, # 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, -# 2011 Free Software Foundation, Inc. +# 2011, 2012, 2013 Free Software Foundation, Inc. -timestamp='2011-06-03' +timestamp='2012-12-29' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by -# the Free Software Foundation; either version 2 of the License, or +# the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but @@ -17,26 +17,22 @@ # General Public License for more details. # # You should have received a copy of the GNU General Public License -# along with this program; if not, write to the Free Software -# Foundation, Inc., 51 Franklin Street - Fifth Floor, Boston, MA -# 02110-1301, USA. +# along with this program; if not, see <http://www.gnu.org/licenses/>. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under -# the same distribution terms that you use for the rest of that program. - - -# Originally written by Per Bothner. Please send patches (context -# diff format) to <config-patches@gnu.org> and include a ChangeLog -# entry. +# the same distribution terms that you use for the rest of that +# program.
This Exception is an additional permission under section 7 +# of the GNU General Public License, version 3 ("GPLv3"). # -# This script attempts to guess a canonical system name similar to -# config.sub. If it succeeds, it prints the system name on stdout, and -# exits with 0. Otherwise, it exits with 1. +# Originally written by Per Bothner. # # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD +# +# Please send patches with a ChangeLog entry to config-patches at gnu.org. + me=`echo "$0" | sed -e 's,.*/,,'` @@ -57,8 +53,8 @@ Originally written by Per Bothner. Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, -2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free -Software Foundation, Inc. +2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, +2012, 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." @@ -145,7 +141,7 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in *:NetBSD:*:*) # NetBSD (nbsd) targets should (where applicable) match one or - # more of the tupples: *-*-netbsdelf*, *-*-netbsdaout*, + # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently # switched to ELF, *-*-netbsd* would select the old # object file format. This provides both forward @@ -202,6 +198,10 @@ # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. 
echo "${machine}-${os}${release}" exit ;; + *:Bitrig:*:*) + UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'` + echo ${UNAME_MACHINE_ARCH}-unknown-bitrig${UNAME_RELEASE} + exit ;; *:OpenBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE} @@ -304,7 +304,7 @@ arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) echo arm-acorn-riscix${UNAME_RELEASE} exit ;; - arm:riscos:*:*|arm:RISCOS:*:*) + arm*:riscos:*:*|arm*:RISCOS:*:*) echo arm-unknown-riscos exit ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) @@ -792,21 +792,26 @@ echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE} exit ;; *:FreeBSD:*:*) - case ${UNAME_MACHINE} in - pc98) - echo i386-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; + UNAME_PROCESSOR=`/usr/bin/uname -p` + case ${UNAME_PROCESSOR} in amd64) echo x86_64-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; *) - echo ${UNAME_MACHINE}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; + echo ${UNAME_PROCESSOR}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; esac exit ;; i*:CYGWIN*:*) echo ${UNAME_MACHINE}-pc-cygwin exit ;; + *:MINGW64*:*) + echo ${UNAME_MACHINE}-pc-mingw64 + exit ;; *:MINGW*:*) echo ${UNAME_MACHINE}-pc-mingw32 exit ;; + i*:MSYS*:*) + echo ${UNAME_MACHINE}-pc-msys + exit ;; i*:windows32*:*) # uname -m includes "-pc" on this system. 
echo ${UNAME_MACHINE}-mingw32 @@ -861,6 +866,13 @@ i*86:Minix:*:*) echo ${UNAME_MACHINE}-pc-minix exit ;; + aarch64:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-gnu + exit ;; + aarch64_be:Linux:*:*) + UNAME_MACHINE=aarch64_be + echo ${UNAME_MACHINE}-unknown-linux-gnu + exit ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in EV5) UNAME_MACHINE=alphaev5 ;; @@ -895,13 +907,16 @@ echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; cris:Linux:*:*) - echo cris-axis-linux-gnu + echo ${UNAME_MACHINE}-axis-linux-gnu exit ;; crisv32:Linux:*:*) - echo crisv32-axis-linux-gnu + echo ${UNAME_MACHINE}-axis-linux-gnu exit ;; frv:Linux:*:*) - echo frv-unknown-linux-gnu + echo ${UNAME_MACHINE}-unknown-linux-gnu + exit ;; + hexagon:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; i*86:Linux:*:*) LIBC=gnu @@ -943,7 +958,7 @@ test x"${CPU}" != x && { echo "${CPU}-unknown-linux-gnu"; exit; } ;; or32:Linux:*:*) - echo or32-unknown-linux-gnu + echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; padre:Linux:*:*) echo sparc-unknown-linux-gnu @@ -984,7 +999,7 @@ echo ${UNAME_MACHINE}-dec-linux-gnu exit ;; x86_64:Linux:*:*) - echo x86_64-unknown-linux-gnu + echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; xtensa*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu @@ -1191,6 +1206,9 @@ BePC:Haiku:*:*) # Haiku running on Intel PC compatible. 
echo i586-pc-haiku exit ;; + x86_64:Haiku:*:*) + echo x86_64-unknown-haiku + exit ;; SX-4:SUPER-UX:*:*) echo sx4-nec-superux${UNAME_RELEASE} exit ;; @@ -1246,7 +1264,7 @@ NEO-?:NONSTOP_KERNEL:*:*) echo neo-tandem-nsk${UNAME_RELEASE} exit ;; - NSE-?:NONSTOP_KERNEL:*:*) + NSE-*:NONSTOP_KERNEL:*:*) echo nse-tandem-nsk${UNAME_RELEASE} exit ;; NSR-?:NONSTOP_KERNEL:*:*) @@ -1315,11 +1333,11 @@ i*86:AROS:*:*) echo ${UNAME_MACHINE}-pc-aros exit ;; + x86_64:VMkernel:*:*) + echo ${UNAME_MACHINE}-unknown-esx + exit ;; esac -#echo '(No uname command or uname output not recognized.)' 1>&2 -#echo "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" 1>&2 - eval $set_cc_for_build cat >$dummy.c <. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under -# the same distribution terms that you use for the rest of that program. +# the same distribution terms that you use for the rest of that +# program. This Exception is an additional permission under section 7 +# of the GNU General Public License, version 3 ("GPLv3"). -# Please send patches to . Submit a context -# diff and a properly formatted GNU ChangeLog entry. +# Please send patches with a ChangeLog entry to config-patches at gnu.org. # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. @@ -76,8 +71,8 @@ GNU config.sub ($timestamp) Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, -2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free -Software Foundation, Inc. +2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, +2012, 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." 
@@ -125,13 +120,17 @@ maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'` case $maybe_os in nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \ - linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ + linux-musl* | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ knetbsd*-gnu* | netbsd*-gnu* | \ kopensolaris*-gnu* | \ storm-chaos* | os2-emx* | rtmk-nova*) os=-$maybe_os basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` ;; + android-linux) + os=-linux-android + basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`-unknown + ;; *) basic_machine=`echo $1 | sed 's/-[^-]*$//'` if [ $basic_machine != $1 ] @@ -154,7 +153,7 @@ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \ - -apple | -axis | -knuth | -cray | -microblaze) + -apple | -axis | -knuth | -cray | -microblaze*) os= basic_machine=$1 ;; @@ -223,6 +222,12 @@ -isc*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; + -lynx*178) + os=-lynxos178 + ;; + -lynx*5) + os=-lynxos5 + ;; -lynx*) os=-lynxos ;; @@ -247,11 +252,14 @@ # Some are omitted here because they have special meanings below. 
1750a | 580 \ | a29k \ + | aarch64 | aarch64_be \ | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \ | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \ | am33_2.0 \ - | arc | arm | arm[bl]e | arme[lb] | armv[2345] | armv[345][lb] | avr | avr32 \ - | be32 | be64 \ + | arc \ + | arm | arm[bl]e | arme[lb] | armv[2-8] | armv[3-8][lb] | armv7[arm] \ + | avr | avr32 \ + | be32 | be64 \ | bfin \ | c4x | clipper \ | d10v | d30v | dlx | dsp16xx \ @@ -264,7 +272,7 @@ | le32 | le64 \ | lm32 \ | m32c | m32r | m32rle | m68000 | m68k | m88k \ - | maxq | mb | microblaze | mcore | mep | metag \ + | maxq | mb | microblaze | microblazeel | mcore | mep | metag \ | mips | mipsbe | mipseb | mipsel | mipsle \ | mips16 \ | mips64 | mips64el \ @@ -319,8 +327,7 @@ c6x) basic_machine=tic6x-unknown ;; - m6811 | m68hc11 | m6812 | m68hc12 | picochip) - # Motorola 68HC11/12. + m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | picochip) basic_machine=$basic_machine-unknown os=-none ;; @@ -333,7 +340,10 @@ strongarm | thumb | xscale) basic_machine=arm-unknown ;; - + xgate) + basic_machine=$basic_machine-unknown + os=-none + ;; xscaleeb) basic_machine=armeb-unknown ;; @@ -356,6 +366,7 @@ # Recognize the basic CPU types with company name. 580-* \ | a29k-* \ + | aarch64-* | aarch64_be-* \ | alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \ | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* \ @@ -377,7 +388,8 @@ | lm32-* \ | m32c-* | m32r-* | m32rle-* \ | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \ - | m88110-* | m88k-* | maxq-* | mcore-* | metag-* | microblaze-* \ + | m88110-* | m88k-* | maxq-* | mcore-* | metag-* \ + | microblaze-* | microblazeel-* \ | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \ | mips16-* \ | mips64-* | mips64el-* \ @@ -719,7 +731,6 @@ i370-ibm* | ibm*) basic_machine=i370-ibm ;; -# I'm not sure what "Sysv32" means. Should this be sysv3.2? 
i*86v32) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv32 @@ -777,9 +788,13 @@ basic_machine=ns32k-utek os=-sysv ;; - microblaze) + microblaze*) basic_machine=microblaze-xilinx ;; + mingw64) + basic_machine=x86_64-pc + os=-mingw64 + ;; mingw32) basic_machine=i386-pc os=-mingw32 @@ -816,6 +831,10 @@ ms1-*) basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` ;; + msys) + basic_machine=i386-pc + os=-msys + ;; mvs) basic_machine=i370-ibm os=-mvs @@ -1004,7 +1023,11 @@ basic_machine=i586-unknown os=-pw32 ;; - rdos) + rdos | rdos64) + basic_machine=x86_64-pc + os=-rdos + ;; + rdos32) basic_machine=i386-pc os=-rdos ;; @@ -1337,15 +1360,15 @@ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ - | -openbsd* | -solidbsd* \ + | -bitrig* | -openbsd* | -solidbsd* \ | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* | -cegcc* \ - | -cygwin* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ - | -mingw32* | -linux-gnu* | -linux-android* \ - | -linux-newlib* | -linux-uclibc* \ + | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ + | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \ + | -linux-newlib* | -linux-musl* | -linux-uclibc* \ | -uxpv* | -beos* | -mpeix* | -udk* \ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ @@ -1528,6 +1551,9 @@ c4x-* | tic4x-*) os=-coff ;; + hexagon-*) + os=-elf + ;; tic54x-*) os=-coff ;; @@ -1555,9 +1581,6 @@ ;; m68000-sun) os=-sunos3 - # This also exists in the configure program, but was not the - # default. 
- # os=-sunos4 ;; m68*-cisco) os=-aout diff --git a/Modules/_ctypes/libffi/configure b/Modules/_ctypes/libffi/configure --- a/Modules/_ctypes/libffi/configure +++ b/Modules/_ctypes/libffi/configure @@ -1,13 +1,11 @@ #! /bin/sh # Guess values for system-dependent variables and create Makefiles. -# Generated by GNU Autoconf 2.68 for libffi 3.0.11. +# Generated by GNU Autoconf 2.69 for libffi 3.0.12. # # Report bugs to . # # -# Copyright (C) 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, -# 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Free Software -# Foundation, Inc. +# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc. # # # This configure script is free software; the Free Software Foundation @@ -136,6 +134,31 @@ # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH +# Use a proper internal environment variable to ensure we don't fall + # into an infinite loop, continuously re-executing ourselves. + if test x"${_as_can_reexec}" != xno && test "x$CONFIG_SHELL" != x; then + _as_can_reexec=no; export _as_can_reexec; + # We cannot yet assume a decent shell, so we have to provide a +# neutralization value for shells without unset; and this also +# works around shells that cannot unset nonexistent variables. +# Preserve -v and -x to the replacement shell. +BASH_ENV=/dev/null +ENV=/dev/null +(unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV +case $- in # (((( + *v*x* | *x*v* ) as_opts=-vx ;; + *v* ) as_opts=-v ;; + *x* ) as_opts=-x ;; + * ) as_opts= ;; +esac +exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} +# Admittedly, this is quite paranoid, since all the known shells bail +# out after a failed `exec'. +$as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 +as_fn_exit 255 + fi + # We don't want this to propagate to other subprocesses. 
+ { _as_can_reexec=; unset _as_can_reexec;} if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then : emulate sh @@ -169,7 +192,8 @@ else exitcode=1; echo positional parameters were not saved. fi -test x\$exitcode = x0 || exit 1" +test x\$exitcode = x0 || exit 1 +test -x / || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && @@ -222,21 +246,25 @@ if test "x$CONFIG_SHELL" != x; then : - # We cannot yet assume a decent shell, so we have to provide a - # neutralization value for shells without unset; and this also - # works around shells that cannot unset nonexistent variables. - # Preserve -v and -x to the replacement shell. - BASH_ENV=/dev/null - ENV=/dev/null - (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV - export CONFIG_SHELL - case $- in # (((( - *v*x* | *x*v* ) as_opts=-vx ;; - *v* ) as_opts=-v ;; - *x* ) as_opts=-x ;; - * ) as_opts= ;; - esac - exec "$CONFIG_SHELL" $as_opts "$as_myself" ${1+"$@"} + export CONFIG_SHELL + # We cannot yet assume a decent shell, so we have to provide a +# neutralization value for shells without unset; and this also +# works around shells that cannot unset nonexistent variables. +# Preserve -v and -x to the replacement shell. +BASH_ENV=/dev/null +ENV=/dev/null +(unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV +case $- in # (((( + *v*x* | *x*v* ) as_opts=-vx ;; + *v* ) as_opts=-v ;; + *x* ) as_opts=-x ;; + * ) as_opts= ;; +esac +exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} +# Admittedly, this is quite paranoid, since all the known shells bail +# out after a failed `exec'. 
+$as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 +exit 255 fi if test x$as_have_required = xno; then : @@ -339,6 +367,14 @@ } # as_fn_mkdir_p + +# as_fn_executable_p FILE +# ----------------------- +# Test if FILE is an executable regular file. +as_fn_executable_p () +{ + test -f "$1" && test -x "$1" +} # as_fn_executable_p # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take @@ -460,6 +496,10 @@ chmod +x "$as_me.lineno" || { $as_echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; } + # If we had to re-execute with $CONFIG_SHELL, we're ensured to have + # already done that, so ensure we don't try to do so again and fall + # in an infinite loop. This has already happened in practice. + _as_can_reexec=no; export _as_can_reexec # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensitive to this). @@ -494,16 +534,16 @@ # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. - # In both cases, we have to default to `cp -p'. + # In both cases, we have to default to `cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! 
-f conf$$.exe || - as_ln_s='cp -p' + as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else - as_ln_s='cp -p' - fi -else - as_ln_s='cp -p' + as_ln_s='cp -pR' + fi +else + as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null @@ -515,28 +555,8 @@ as_mkdir_p=false fi -if test -x / >/dev/null 2>&1; then - as_test_x='test -x' -else - if ls -dL / >/dev/null 2>&1; then - as_ls_L_option=L - else - as_ls_L_option= - fi - as_test_x=' - eval sh -c '\'' - if test -d "$1"; then - test -d "$1/."; - else - case $1 in #( - -*)set "./$1";; - esac; - case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in #(( - ???[sx]*):;;*)false;;esac;fi - '\'' sh - ' -fi -as_executable_p=$as_test_x +as_test_x='test -x' +as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" @@ -570,8 +590,8 @@ # Identity of this package. PACKAGE_NAME='libffi' PACKAGE_TARNAME='libffi' -PACKAGE_VERSION='3.0.11' -PACKAGE_STRING='libffi 3.0.11' +PACKAGE_VERSION='3.0.12' +PACKAGE_STRING='libffi 3.0.12' PACKAGE_BUGREPORT='http://github.com/atgreen/libffi/issues' PACKAGE_URL='' @@ -627,6 +647,10 @@ sys_symbol_underscore HAVE_LONG_DOUBLE ALLOCA +XTENSA_FALSE +XTENSA_TRUE +TILE_FALSE +TILE_TRUE PA64_HPUX_FALSE PA64_HPUX_TRUE PA_HPUX_FALSE @@ -649,6 +673,8 @@ AVR32_TRUE ARM_FALSE ARM_TRUE +AARCH64_FALSE +AARCH64_TRUE POWERPC_FREEBSD_FALSE POWERPC_FREEBSD_TRUE POWERPC_DARWIN_FALSE @@ -659,6 +685,8 @@ POWERPC_TRUE MOXIE_FALSE MOXIE_TRUE +MICROBLAZE_FALSE +MICROBLAZE_TRUE M68K_FALSE M68K_TRUE M32R_FALSE @@ -679,6 +707,8 @@ X86_TRUE SPARC_FALSE SPARC_TRUE +BFIN_FALSE +BFIN_TRUE MIPS_FALSE MIPS_TRUE AM_LTLDFLAGS @@ -822,6 +852,7 @@ enable_portable_binary with_gcc_arch enable_maintainer_mode +enable_pax_emutramp enable_debug enable_structs enable_raw_api @@ -1289,8 +1320,6 @@ if test "x$host_alias" != x; then if test "x$build_alias" = x; then 
cross_compiling=maybe - $as_echo "$as_me: WARNING: if you wanted to set the --build type, don't use --host. - If a cross compiler is detected then cross compile mode will be used" >&2 elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi @@ -1376,7 +1405,7 @@ # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF -\`configure' configures libffi 3.0.11 to adapt to many kinds of systems. +\`configure' configures libffi 3.0.12 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... @@ -1447,7 +1476,7 @@ if test -n "$ac_init_help"; then case $ac_init_help in - short | recursive ) echo "Configuration of libffi 3.0.11:";; + short | recursive ) echo "Configuration of libffi 3.0.12:";; esac cat <<\_ACEOF @@ -1457,8 +1486,10 @@ --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --disable-builddir disable automatic build in subdir of sources - --disable-dependency-tracking speeds up one-time build - --enable-dependency-tracking do not reject slow dependency extractors + --enable-dependency-tracking + do not reject slow dependency extractors + --disable-dependency-tracking + speeds up one-time build --enable-shared[=PKGS] build shared libraries [default=yes] --enable-static[=PKGS] build static libraries [default=yes] --enable-fast-install[=PKGS] @@ -1467,8 +1498,10 @@ --enable-portable-binary disable compiler optimizations that would produce unportable binaries - --enable-maintainer-mode enable make rules and dependencies not useful - (and sometimes confusing) to the casual installer + --enable-maintainer-mode + enable make rules and dependencies not useful (and + sometimes confusing) to the casual installer + --enable-pax_emutramp enable pax emulated trampolines, for we can't use PROT_EXEC --enable-debug debugging mode --disable-structs omit code for struct support --disable-raw-api make the raw api unavailable @@ -1563,10 +1596,10 @@ test -n 
"$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF -libffi configure 3.0.11 -generated by GNU Autoconf 2.68 - -Copyright (C) 2010 Free Software Foundation, Inc. +libffi configure 3.0.12 +generated by GNU Autoconf 2.69 + +Copyright (C) 2012 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF @@ -1642,7 +1675,7 @@ test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || - $as_test_x conftest$ac_exeext + test -x conftest$ac_exeext }; then : ac_retval=0 else @@ -1838,6 +1871,189 @@ } # ac_fn_c_check_func +# ac_fn_c_compute_int LINENO EXPR VAR INCLUDES +# -------------------------------------------- +# Tries to find the compile-time value of EXPR in a program that includes +# INCLUDES, setting VAR accordingly. Returns whether the value could be +# computed +ac_fn_c_compute_int () +{ + as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack + if test "$cross_compiling" = yes; then + # Depending upon the size, compute the lo and hi bounds. +cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ +$4 +int +main () +{ +static int test_array [1 - 2 * !(($2) >= 0)]; +test_array [0] = 0; +return test_array [0]; + + ; + return 0; +} +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + ac_lo=0 ac_mid=0 + while :; do + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. 
*/ +$4 +int +main () +{ +static int test_array [1 - 2 * !(($2) <= $ac_mid)]; +test_array [0] = 0; +return test_array [0]; + + ; + return 0; +} +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + ac_hi=$ac_mid; break +else + as_fn_arith $ac_mid + 1 && ac_lo=$as_val + if test $ac_lo -le $ac_mid; then + ac_lo= ac_hi= + break + fi + as_fn_arith 2 '*' $ac_mid + 1 && ac_mid=$as_val +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + done +else + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ +$4 +int +main () +{ +static int test_array [1 - 2 * !(($2) < 0)]; +test_array [0] = 0; +return test_array [0]; + + ; + return 0; +} +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + ac_hi=-1 ac_mid=-1 + while :; do + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ +$4 +int +main () +{ +static int test_array [1 - 2 * !(($2) >= $ac_mid)]; +test_array [0] = 0; +return test_array [0]; + + ; + return 0; +} +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + ac_lo=$ac_mid; break +else + as_fn_arith '(' $ac_mid ')' - 1 && ac_hi=$as_val + if test $ac_mid -le $ac_hi; then + ac_lo= ac_hi= + break + fi + as_fn_arith 2 '*' $ac_mid && ac_mid=$as_val +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext + done +else + ac_lo= ac_hi= +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +# Binary search between lo and hi bounds. +while test "x$ac_lo" != "x$ac_hi"; do + as_fn_arith '(' $ac_hi - $ac_lo ')' / 2 + $ac_lo && ac_mid=$as_val + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. 
*/ +$4 +int +main () +{ +static int test_array [1 - 2 * !(($2) <= $ac_mid)]; +test_array [0] = 0; +return test_array [0]; + + ; + return 0; +} +_ACEOF +if ac_fn_c_try_compile "$LINENO"; then : + ac_hi=$ac_mid +else + as_fn_arith '(' $ac_mid ')' + 1 && ac_lo=$as_val +fi +rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext +done +case $ac_lo in #(( +?*) eval "$3=\$ac_lo"; ac_retval=0 ;; +'') ac_retval=1 ;; +esac + else + cat confdefs.h - <<_ACEOF >conftest.$ac_ext +/* end confdefs.h. */ +$4 +static long int longval () { return $2; } +static unsigned long int ulongval () { return $2; } +#include +#include +int +main () +{ + + FILE *f = fopen ("conftest.val", "w"); + if (! f) + return 1; + if (($2) < 0) + { + long int i = longval (); + if (i != ($2)) + return 1; + fprintf (f, "%ld", i); + } + else + { + unsigned long int i = ulongval (); + if (i != ($2)) + return 1; + fprintf (f, "%lu", i); + } + /* Do not output a trailing newline, as this causes \r\n confusion + on some platforms. */ + return ferror (f) || fclose (f) != 0; + + ; + return 0; +} +_ACEOF +if ac_fn_c_try_run "$LINENO"; then : + echo >>conftest.val; read $3 conftest.$ac_ext -/* end confdefs.h. */ -$4 -int -main () -{ -static int test_array [1 - 2 * !(($2) >= 0)]; -test_array [0] = 0 - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ac_lo=0 ac_mid=0 - while :; do - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. */ -$4 -int -main () -{ -static int test_array [1 - 2 * !(($2) <= $ac_mid)]; -test_array [0] = 0 - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ac_hi=$ac_mid; break -else - as_fn_arith $ac_mid + 1 && ac_lo=$as_val - if test $ac_lo -le $ac_mid; then - ac_lo= ac_hi= - break - fi - as_fn_arith 2 '*' $ac_mid + 1 && ac_mid=$as_val -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext - done -else - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. 
*/ -$4 -int -main () -{ -static int test_array [1 - 2 * !(($2) < 0)]; -test_array [0] = 0 - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ac_hi=-1 ac_mid=-1 - while :; do - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. */ -$4 -int -main () -{ -static int test_array [1 - 2 * !(($2) >= $ac_mid)]; -test_array [0] = 0 - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ac_lo=$ac_mid; break -else - as_fn_arith '(' $ac_mid ')' - 1 && ac_hi=$as_val - if test $ac_mid -le $ac_hi; then - ac_lo= ac_hi= - break - fi - as_fn_arith 2 '*' $ac_mid && ac_mid=$as_val -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext - done -else - ac_lo= ac_hi= -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext -# Binary search between lo and hi bounds. -while test "x$ac_lo" != "x$ac_hi"; do - as_fn_arith '(' $ac_hi - $ac_lo ')' / 2 + $ac_lo && ac_mid=$as_val - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. */ -$4 -int -main () -{ -static int test_array [1 - 2 * !(($2) <= $ac_mid)]; -test_array [0] = 0 - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ac_hi=$ac_mid -else - as_fn_arith '(' $ac_mid ')' + 1 && ac_lo=$as_val -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext -done -case $ac_lo in #(( -?*) eval "$3=\$ac_lo"; ac_retval=0 ;; -'') ac_retval=1 ;; -esac - else - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. */ -$4 -static long int longval () { return $2; } -static unsigned long int ulongval () { return $2; } -#include -#include -int -main () -{ - - FILE *f = fopen ("conftest.val", "w"); - if (! 
f) - return 1; - if (($2) < 0) - { - long int i = longval (); - if (i != ($2)) - return 1; - fprintf (f, "%ld", i); - } - else - { - unsigned long int i = ulongval (); - if (i != ($2)) - return 1; - fprintf (f, "%lu", i); - } - /* Do not output a trailing newline, as this causes \r\n confusion - on some platforms. */ - return ferror (f) || fclose (f) != 0; - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_run "$LINENO"; then : - echo >>conftest.val; read $3 config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. -It was created by libffi $as_me 3.0.11, which was -generated by GNU Autoconf 2.68. Invocation command line was +It was created by libffi $as_me 3.0.12, which was +generated by GNU Autoconf 2.69. Invocation command line was $ $0 $@ @@ -2736,7 +2774,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_ax_enable_builddir_sed="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -2763,7 +2801,7 @@ ac_config_commands="$ac_config_commands buildir" -am__api_version='1.11' +am__api_version='1.12' # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or @@ -2802,7 +2840,7 @@ # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. 
@@ -2860,9 +2898,6 @@ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5 $as_echo_n "checking whether build environment is sane... " >&6; } -# Just in case -sleep 1 -echo timestamp > conftest.file # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' @@ -2873,32 +2908,40 @@ esac case $srcdir in *[\\\"\#\$\&\'\`$am_lf\ \ ]*) - as_fn_error $? "unsafe srcdir value: \`$srcdir'" "$LINENO" 5;; -esac - -# Do `set' in a subshell so we don't clobber the current shell's + as_fn_error $? "unsafe srcdir value: '$srcdir'" "$LINENO" 5;; +esac + +# Do 'set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( - set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` - if test "$*" = "X"; then - # -L didn't work. - set X `ls -t "$srcdir/configure" conftest.file` - fi - rm -f conftest.file - if test "$*" != "X $srcdir/configure conftest.file" \ - && test "$*" != "X conftest.file $srcdir/configure"; then - - # If neither matched, then we have a broken ls. This can happen - # if, for instance, CONFIG_SHELL is bash and it inherits a - # broken ls alias from the environment. This has actually - # happened. Such a system could not be considered "sane". - as_fn_error $? "ls -t appears to fail. Make sure there is not a broken -alias in your environment" "$LINENO" 5 - fi - + am_has_slept=no + for am_try in 1 2; do + echo "timestamp, slept: $am_has_slept" > conftest.file + set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` + if test "$*" = "X"; then + # -L didn't work. 
+ set X `ls -t "$srcdir/configure" conftest.file` + fi + if test "$*" != "X $srcdir/configure conftest.file" \ + && test "$*" != "X conftest.file $srcdir/configure"; then + + # If neither matched, then we have a broken ls. This can happen + # if, for instance, CONFIG_SHELL is bash and it inherits a + # broken ls alias from the environment. This has actually + # happened. Such a system could not be considered "sane". + as_fn_error $? "ls -t appears to fail. Make sure there is not a broken + alias in your environment" "$LINENO" 5 + fi + if test "$2" = conftest.file || test $am_try -eq 2; then + break + fi + # Just in case. + sleep 1 + am_has_slept=yes + done test "$2" = conftest.file ) then @@ -2910,6 +2953,16 @@ fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } +# If we didn't sleep, we still need to ensure time stamps of config.status and +# generated files are strictly newer. +am_sleep_pid= +if grep 'slept: no' conftest.file >/dev/null 2>&1; then + ( sleep 1 ) & + am_sleep_pid=$! +fi + +rm -f conftest.file + test "$program_prefix" != NONE && program_transform_name="s&^&$program_prefix&;$program_transform_name" # Use a double $ so make ignores it. @@ -2933,8 +2986,8 @@ am_missing_run="$MISSING --run " else am_missing_run= - { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`missing' script is too old or missing" >&5 -$as_echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;} + { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: 'missing' script is too old or missing" >&5 +$as_echo "$as_me: WARNING: 'missing' script is too old or missing" >&2;} fi if test x"${install_sh}" != xset; then @@ -2946,10 +2999,10 @@ esac fi -# Installed binaries are usually stripped using `strip' when the user -# run `make install-strip'. However `strip' might not be the right +# Installed binaries are usually stripped using 'strip' when the user +# run "make install-strip". 
However 'strip' might not be the right # tool to use in cross-compilation environments, therefore Automake -# will honor the `STRIP' environment variable to overrule this program. +# will honor the 'STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. @@ -2968,7 +3021,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3008,7 +3061,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3059,7 +3112,7 @@ test -z "$as_dir" && as_dir=. for ac_prog in mkdir gmkdir; do for ac_exec_ext in '' $ac_executable_extensions; do - { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; } || continue + as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext" || continue case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #( 'mkdir (GNU coreutils) '* | \ 'mkdir (coreutils) '* | \ @@ -3088,12 +3141,6 @@ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5 $as_echo "$MKDIR_P" >&6; } -mkdir_p="$MKDIR_P" -case $mkdir_p in - [\\/$]* | ?:[\\/]*) ;; - */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; -esac - for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. 
@@ -3112,7 +3159,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3198,7 +3245,7 @@ # Define the identity of the package. PACKAGE='libffi' - VERSION='3.0.11' + VERSION='3.0.12' cat >>confdefs.h <<_ACEOF @@ -3226,6 +3273,12 @@ MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} +# For better backward compatibility. To be removed once Automake 1.9.x +# dies out for good. For more background, see: +# +# +mkdir_p='$(MKDIR_P)' + # We need awk for the "check" target. The system "awk" is bad on # some platforms. # Always define AMTAR for backward compatibility. Yes, it's still used @@ -3271,7 +3324,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3311,7 +3364,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3364,7 +3417,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3405,7 +3458,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue @@ -3463,7 +3516,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3507,7 +3560,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -3953,8 +4006,7 @@ /* end confdefs.h. */ #include #include -#include -#include +struct stat; /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); @@ -4057,7 +4109,7 @@ _am_result=none # First try GNU make style include. echo "include confinc" > confmf -# Ignore all kinds of additional output from `make'. +# Ignore all kinds of additional output from 'make'. 
case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include @@ -4113,8 +4165,8 @@ # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up - # making a dummy file named `D' -- because `-MD' means `put the output - # in D'. + # making a dummy file named 'D' -- because '-MD' means "put the output + # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're @@ -4149,16 +4201,16 @@ : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c - # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with - # Solaris 8's {/usr,}/bin/sh. - touch sub/conftst$i.h + # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with + # Solaris 10 /bin/sh. + echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf - # We check with `-c' and `-o' for the sake of the "dashmstdout" + # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly - # handle `-M -o', and we need to detect this. Also, some Intel - # versions had trouble with output in subdirs + # handle '-M -o', and we need to detect this. Also, some Intel + # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in @@ -4167,8 +4219,8 @@ test "$am__universal" = false || continue ;; nosideeffect) - # after this tag, mechanisms are not by side-effect, so they'll - # only be used when explicitly requested + # After this tag, mechanisms are not by side-effect, so they'll + # only be used when explicitly requested. 
if test "x$enable_dependency_tracking" = xyes; then continue else @@ -4176,7 +4228,7 @@ fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) - # This compiler won't grok `-c -o', but also, the minuso test has + # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} @@ -4254,8 +4306,8 @@ # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up - # making a dummy file named `D' -- because `-MD' means `put the output - # in D'. + # making a dummy file named 'D' -- because '-MD' means "put the output + # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're @@ -4288,16 +4340,16 @@ : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c - # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with - # Solaris 8's {/usr,}/bin/sh. - touch sub/conftst$i.h + # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with + # Solaris 10 /bin/sh. + echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf - # We check with `-c' and `-o' for the sake of the "dashmstdout" + # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly - # handle `-M -o', and we need to detect this. Also, some Intel - # versions had trouble with output in subdirs + # handle '-M -o', and we need to detect this. Also, some Intel + # versions had trouble with output in subdirs. 
am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in @@ -4306,8 +4358,8 @@ test "$am__universal" = false || continue ;; nosideeffect) - # after this tag, mechanisms are not by side-effect, so they'll - # only be used when explicitly requested + # After this tag, mechanisms are not by side-effect, so they'll + # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else @@ -4315,7 +4367,7 @@ fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) - # This compiler won't grok `-c -o', but also, the minuso test has + # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} @@ -4611,7 +4663,7 @@ for ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_SED="$as_dir/$ac_prog$ac_exec_ext" - { test -f "$ac_path_SED" && $as_test_x "$ac_path_SED"; } || continue + as_fn_executable_p "$ac_path_SED" || continue # Check for GNU ac_path_SED and select it if it is found. # Check for GNU $ac_path_SED case `"$ac_path_SED" --version 2>&1` in @@ -4687,7 +4739,7 @@ for ac_prog in grep ggrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_GREP="$as_dir/$ac_prog$ac_exec_ext" - { test -f "$ac_path_GREP" && $as_test_x "$ac_path_GREP"; } || continue + as_fn_executable_p "$ac_path_GREP" || continue # Check for GNU ac_path_GREP and select it if it is found. # Check for GNU $ac_path_GREP case `"$ac_path_GREP" --version 2>&1` in @@ -4753,7 +4805,7 @@ for ac_prog in egrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir/$ac_prog$ac_exec_ext" - { test -f "$ac_path_EGREP" && $as_test_x "$ac_path_EGREP"; } || continue + as_fn_executable_p "$ac_path_EGREP" || continue # Check for GNU ac_path_EGREP and select it if it is found. 
# Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in @@ -4820,7 +4872,7 @@ for ac_prog in fgrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_FGREP="$as_dir/$ac_prog$ac_exec_ext" - { test -f "$ac_path_FGREP" && $as_test_x "$ac_path_FGREP"; } || continue + as_fn_executable_p "$ac_path_FGREP" || continue # Check for GNU ac_path_FGREP and select it if it is found. # Check for GNU $ac_path_FGREP case `"$ac_path_FGREP" --version 2>&1` in @@ -5076,7 +5128,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DUMPBIN="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -5120,7 +5172,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -5309,7 +5361,8 @@ ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` - if test -n "$lt_cv_sys_max_cmd_len"; then + if test -n "$lt_cv_sys_max_cmd_len" && \ + test undefined != "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else @@ -5544,7 +5597,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -5584,7 +5637,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OBJDUMP="objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -5756,7 +5809,7 @@ lt_cv_deplibs_check_method=pass_all ;; -netbsd* | netbsdelf*-gnu) +netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' else @@ -5890,7 +5943,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -5930,7 +5983,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DLLTOOL="dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -6034,7 +6087,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AR="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -6078,7 +6131,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_AR="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -6203,7 +6256,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -6243,7 +6296,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -6302,7 +6355,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -6342,7 +6395,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_RANLIB="ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -6850,7 +6903,14 @@ LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) - LD="${LD-ld} -m elf_i386" + case `/usr/bin/file conftest.o` in + *x86-64*) + LD="${LD-ld} -m elf32_x86_64" + ;; + *) + LD="${LD-ld} -m elf_i386" + ;; + esac ;; ppc64-*linux*|powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" @@ -6991,7 +7051,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7031,7 +7091,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7111,7 +7171,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DSYMUTIL="${ac_tool_prefix}dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7151,7 +7211,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DSYMUTIL="dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7203,7 +7263,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_NMEDIT="${ac_tool_prefix}nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7243,7 +7303,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_NMEDIT="nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7295,7 +7355,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_LIPO="${ac_tool_prefix}lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7335,7 +7395,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_LIPO="lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7387,7 +7447,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL="${ac_tool_prefix}otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7427,7 +7487,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL="otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7479,7 +7539,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL64="${ac_tool_prefix}otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -7519,7 +7579,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL64="otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -9157,9 +9217,6 @@ openbsd*) with_gnu_ld=no ;; - linux* | k*bsd*-gnu | gnu*) - link_all_deplibs=no - ;; esac ld_shlibs=yes @@ -9381,7 +9438,7 @@ fi ;; - netbsd* | netbsdelf*-gnu) + netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= @@ -9558,7 +9615,6 @@ if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi - link_all_deplibs=no else # not using gcc if test "$host_cpu" = ia64; then @@ -10012,7 +10068,7 @@ link_all_deplibs=yes ;; - netbsd* | netbsdelf*-gnu) + netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else @@ -11025,10 +11081,14 @@ # before this can be enabled. hardcode_into_libs=yes + # Add ABI-specific directories to the system library path. 
+ sys_lib_dlsearch_path_spec="/lib64 /usr/lib64 /lib /usr/lib" + # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` - sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" + sys_lib_dlsearch_path_spec="$sys_lib_dlsearch_path_spec $lt_ld_extra" + fi # We used to test for /lib/ld.so.1 and disable shared libraries on @@ -11040,18 +11100,6 @@ dynamic_linker='GNU/Linux ld.so' ;; -netbsdelf*-gnu) - version_type=linux - need_lib_prefix=no - need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' - shlibpath_var=LD_LIBRARY_PATH - shlibpath_overrides_runpath=no - hardcode_into_libs=yes - dynamic_linker='NetBSD ld.elf_so' - ;; - netbsd*) version_type=sunos need_lib_prefix=no @@ -12024,6 +12072,41 @@ +# Test for 64-bit build. +# The cast to long int works around a bug in the HP C Compiler +# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects +# declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'. +# This bug is HP SR number 8606223364. +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking size of size_t" >&5 +$as_echo_n "checking size of size_t... 
" >&6; } +if ${ac_cv_sizeof_size_t+:} false; then : + $as_echo_n "(cached) " >&6 +else + if ac_fn_c_compute_int "$LINENO" "(long int) (sizeof (size_t))" "ac_cv_sizeof_size_t" "$ac_includes_default"; then : + +else + if test "$ac_cv_type_size_t" = yes; then + { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 +$as_echo "$as_me: error: in \`$ac_pwd':" >&2;} +as_fn_error 77 "cannot compute sizeof (size_t) +See \`config.log' for more details" "$LINENO" 5; } + else + ac_cv_sizeof_size_t=0 + fi +fi + +fi +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_sizeof_size_t" >&5 +$as_echo "$ac_cv_sizeof_size_t" >&6; } + + + +cat >>confdefs.h <<_ACEOF +#define SIZEOF_SIZE_T $ac_cv_sizeof_size_t +_ACEOF + + + { $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler vendor" >&5 $as_echo_n "checking for C compiler vendor... " >&6; } if ${ax_cv_c_compiler_vendor+:} false; then : @@ -12087,7 +12170,7 @@ # Check whether --enable-portable-binary was given. if test "${enable_portable_binary+set}" = set; then : - enableval=$enable_portable_binary; acx_maxopt_portable=$withval + enableval=$enable_portable_binary; acx_maxopt_portable=$enableval else acx_maxopt_portable=no fi @@ -12348,41 +12431,8 @@ CFLAGS="-O3 -fomit-frame-pointer" # -malign-double for x86 systems - { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts -malign-double" >&5 -$as_echo_n "checking whether C compiler accepts -malign-double... " >&6; } -if ${ax_cv_check_cflags___malign_double+:} false; then : - $as_echo_n "(cached) " >&6 -else - - ax_check_save_flags=$CFLAGS - CFLAGS="$CFLAGS -malign-double" - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. 
*/ - -int -main () -{ - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ax_cv_check_cflags___malign_double=yes -else - ax_cv_check_cflags___malign_double=no -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext - CFLAGS=$ax_check_save_flags -fi -{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_check_cflags___malign_double" >&5 -$as_echo "$ax_cv_check_cflags___malign_double" >&6; } -if test x"$ax_cv_check_cflags___malign_double" = xyes; then : - CFLAGS="$CFLAGS -malign-double" -else - : -fi - + # LIBFFI -- DON'T DO THIS - CHANGES ABI + # AX_CHECK_COMPILE_FLAG(-malign-double, CFLAGS="$CFLAGS -malign-double") # -fstrict-aliasing for gcc-2.95+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts -fstrict-aliasing" >&5 @@ -12486,7 +12536,7 @@ ax_gcc_arch="" if test "$cross_compiling" = no; then case $host_cpu in - i[3456]86*|x86_64*) # use cpuid codes, in part from x86info-1.7 by D. Jones + i[3456]86*|x86_64*) # use cpuid codes ac_ext=c ac_cpp='$CPP $CPPFLAGS' @@ -12602,18 +12652,24 @@ case $ax_cv_gcc_x86_cpuid_1 in *5[48]?:*:*:*) ax_gcc_arch="pentium-mmx pentium" ;; *5??:*:*:*) ax_gcc_arch=pentium ;; - *6[3456]?:*:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; - *6a?:*[01]:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; - *6a?:*[234]:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; - *6[9d]?:*:*:*) ax_gcc_arch="pentium-m pentium3 pentiumpro" ;; - *6[78b]?:*:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; - *6??:*:*:*) ax_gcc_arch=pentiumpro ;; - *f3[347]:*:*:*|*f41347:*:*:*) + *0?6[3456]?:*:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; + *0?6a?:*[01]:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; + *0?6a?:*[234]:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; + *0?6[9de]?:*:*:*) ax_gcc_arch="pentium-m pentium3 pentiumpro" ;; + *0?6[78b]?:*:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; + *0?6f?:*:*:*|*1?66?:*:*:*) ax_gcc_arch="core2 pentium-m pentium3 pentiumpro" ;; + *1?6[7d]?:*:*:*) ax_gcc_arch="penryn core2 pentium-m pentium3 pentiumpro" ;; + 
*1?6[aef]?:*:*:*|*2?6[5cef]?:*:*:*) ax_gcc_arch="corei7 core2 pentium-m pentium3 pentiumpro" ;; + *1?6c?:*:*:*|*[23]?66?:*:*:*) ax_gcc_arch="atom core2 pentium-m pentium3 pentiumpro" ;; + *2?6[ad]?:*:*:*) ax_gcc_arch="corei7-avx corei7 core2 pentium-m pentium3 pentiumpro" ;; + *0?6??:*:*:*) ax_gcc_arch=pentiumpro ;; + *6??:*:*:*) ax_gcc_arch="core2 pentiumpro" ;; + ?000?f3[347]:*:*:*|?000?f41347:*:*:*|?000?f6?:*:*:*) case $host_cpu in - x86_64*) ax_gcc_arch="nocona pentium4 pentiumpro" ;; - *) ax_gcc_arch="prescott pentium4 pentiumpro" ;; - esac ;; - *f??:*:*:*) ax_gcc_arch="pentium4 pentiumpro";; + x86_64*) ax_gcc_arch="nocona pentium4 pentiumpro" ;; + *) ax_gcc_arch="prescott pentium4 pentiumpro" ;; + esac ;; + ?000?f??:*:*:*) ax_gcc_arch="pentium4 pentiumpro";; esac ;; *:68747541:*:*) # AMD case $ax_cv_gcc_x86_cpuid_1 in @@ -12685,10 +12741,13 @@ ax_gcc_arch="athlon-xp athlon-4 athlon k7" ;; *) ax_gcc_arch="athlon-4 athlon k7" ;; esac ;; - *f[4cef8b]?:*:*:*) ax_gcc_arch="athlon64 k8" ;; - *f5?:*:*:*) ax_gcc_arch="opteron k8" ;; - *f7?:*:*:*) ax_gcc_arch="athlon-fx opteron k8" ;; - *f??:*:*:*) ax_gcc_arch="k8" ;; + ?00??f[4cef8b]?:*:*:*) ax_gcc_arch="athlon64 k8" ;; + ?00??f5?:*:*:*) ax_gcc_arch="opteron k8" ;; + ?00??f7?:*:*:*) ax_gcc_arch="athlon-fx opteron k8" ;; + ?00??f??:*:*:*) ax_gcc_arch="k8" ;; + ?05??f??:*:*:*) ax_gcc_arch="btver1 amdfam10 k8" ;; + ?06??f??:*:*:*) ax_gcc_arch="bdver1 amdfam10 k8" ;; + *f??:*:*:*) ax_gcc_arch="amdfam10 k8" ;; esac ;; *:746e6543:*:*) # IDT case $ax_cv_gcc_x86_cpuid_1 in @@ -12726,7 +12785,7 @@ IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do - if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then + if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_path_PRTDIAG="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 @@ -12921,6 +12980,31 @@ fi +# The AX_CFLAGS_WARN_ALL macro doesn't currently work for sunpro +# compiler. +if test "$ax_cv_c_compiler_vendor" != "sun"; then + if ${CFLAGS+:} false; then : + case " $CFLAGS " in + *" "*) + { { $as_echo "$as_me:${as_lineno-$LINENO}: : CFLAGS already contains "; } >&5 + (: CFLAGS already contains ) 2>&5 + ac_status=$? + $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 + test $ac_status = 0; } + ;; + *) + { { $as_echo "$as_me:${as_lineno-$LINENO}: : CFLAGS=\"\$CFLAGS \""; } >&5 + (: CFLAGS="$CFLAGS ") 2>&5 + ac_status=$? + $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 + test $ac_status = 0; } + CFLAGS="$CFLAGS " + ;; + esac +else + CFLAGS="" +fi + ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' @@ -12957,6 +13041,7 @@ fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cflags_warn_all" >&5 $as_echo "$ac_cv_cflags_warn_all" >&6; } + case ".$ac_cv_cflags_warn_all" in .ok|.ok,*) ;; .|.no|.no,*) ;; @@ -12991,8 +13076,15 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu +fi + if test "x$GCC" = "xyes"; then CFLAGS="$CFLAGS -fexceptions" + touch local.exp +else + cat > local.exp <&5 $as_echo_n "checking for ANSI C header files... " >&6; } @@ -13859,23 +14031,20 @@ /* end confdefs.h. */ $ac_includes_default int -find_stack_direction () -{ - static char *addr = 0; - auto char dummy; - if (addr == 0) - { - addr = &dummy; - return find_stack_direction (); - } - else - return (&dummy > addr) ? 1 : -1; -} - -int -main () -{ - return find_stack_direction () < 0; +find_stack_direction (int *addr, int depth) +{ + int dir, dummy = 0; + if (! 
addr) + addr = &dummy; + *addr = addr < &dummy ? 1 : addr == &dummy ? 0 : -1; + dir = depth ? find_stack_direction (addr, depth - 1) : 0; + return dir + dummy; +} + +int +main (int argc, char **argv) +{ + return find_stack_direction (0, argc + !argv + 20) < 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : @@ -14289,11 +14458,11 @@ # Check if we have .register cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ + +int +main () +{ asm (".register %g2, #scratch"); -int -main () -{ - ; return 0; } @@ -14347,11 +14516,11 @@ # Check if we have .ascii cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ + +int +main () +{ asm (".ascii \\"string\\""); -int -main () -{ - ; return 0; } @@ -14382,11 +14551,11 @@ # Check if we have .string cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ + +int +main () +{ asm (".string \\"string\\""); -int -main () -{ - ; return 0; } @@ -14408,6 +14577,17 @@ fi fi +# On PaX enable kernels that have MPROTECT enable we can't use PROT_EXEC. +# Check whether --enable-pax_emutramp was given. +if test "${enable_pax_emutramp+set}" = set; then : + enableval=$enable_pax_emutramp; if test "$enable_pax_emutramp" = "yes"; then + +$as_echo "#define FFI_MMAP_EXEC_EMUTRAMP_PAX 1" >>confdefs.h + + fi +fi + + if test x$TARGET = xX86_WIN64; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for _ prefix in compiled symbols" >&5 $as_echo_n "checking for _ prefix in compiled symbols... 
" >&6; } @@ -14463,7 +14643,6 @@ fi fi - FFI_EXEC_TRAMPOLINE_TABLE=0 case "$target" in *arm*-apple-darwin*) @@ -14472,7 +14651,7 @@ $as_echo "#define FFI_EXEC_TRAMPOLINE_TABLE 1" >>confdefs.h ;; - *-apple-darwin1[10]* | *-*-freebsd* | *-*-kfreebsd* | *-*-openbsd* | *-pc-solaris*) + *-apple-darwin1* | *-*-freebsd* | *-*-kfreebsd* | *-*-openbsd* | *-pc-solaris*) $as_echo "#define FFI_MMAP_EXEC_WRIT 1" >>confdefs.h @@ -14520,11 +14699,12 @@ libffi_cv_ro_eh_frame=no echo 'extern void foo (void); void bar (void) { foo (); foo (); }' > conftest.c - if $CC $CFLAGS -S -fpic -fexceptions -o conftest.s conftest.c > /dev/null 2>&1; then - if grep '.section.*eh_frame.*"a"' conftest.s > /dev/null; then - libffi_cv_ro_eh_frame=yes - elif grep '.section.*eh_frame.*#alloc' conftest.c \ - | grep -v '#write' > /dev/null; then + if $CC $CFLAGS -c -fpic -fexceptions -o conftest.o conftest.c > /dev/null 2>&1; then + objdump -h conftest.o > conftest.dump 2>&1 + libffi_eh_frame_line=`grep -n eh_frame conftest.dump | cut -d: -f 1` + libffi_test_line=`expr $libffi_eh_frame_line + 1`p + sed -n $libffi_test_line conftest.dump > conftest.line + if grep READONLY conftest.line > /dev/null; then libffi_cv_ro_eh_frame=yes fi fi @@ -14610,6 +14790,14 @@ fi fi + if test "$enable_debug" = "yes"; then + FFI_DEBUG_TRUE= + FFI_DEBUG_FALSE='#' +else + FFI_DEBUG_TRUE='#' + FFI_DEBUG_FALSE= +fi + # Check whether --enable-raw-api was given. if test "${enable_raw_api+set}" = set; then : @@ -14633,7 +14821,7 @@ # These variables are only ever used when we cross-build to X86_WIN32. # And we only support this with GCC, so... 
-if test x"$GCC" != x"no"; then +if test "x$GCC" = "xyes"; then if test -n "$with_cross_host" && test x"$with_cross_host" != x"no"; then toolexecdir='$(exec_prefix)/$(target_alias)' @@ -14648,14 +14836,10 @@ *) toolexeclibdir=$toolexeclibdir/$multi_os_directory ;; esac - -fi - -if test "${multilib}" = "yes"; then - multilib_arg="--enable-multilib" -else - multilib_arg= -fi +else + toolexeclibdir='$(libdir)' +fi + ac_config_commands="$ac_config_commands include" @@ -14783,6 +14967,14 @@ LTLIBOBJS=$ac_ltlibobjs +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking that generated files are newer than configure" >&5 +$as_echo_n "checking that generated files are newer than configure... " >&6; } + if test -n "$am_sleep_pid"; then + # Hide warnings about reused PIDs. + wait $am_sleep_pid 2>/dev/null + fi + { $as_echo "$as_me:${as_lineno-$LINENO}: result: done" >&5 +$as_echo "done" >&6; } if test -n "$EXEEXT"; then am__EXEEXT_TRUE= am__EXEEXT_FALSE='#' @@ -14815,6 +15007,10 @@ as_fn_error $? "conditional \"MIPS\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi +if test -z "${BFIN_TRUE}" && test -z "${BFIN_FALSE}"; then + as_fn_error $? "conditional \"BFIN\" was never defined. +Usually this means the macro was only invoked conditionally." "$LINENO" 5 +fi if test -z "${SPARC_TRUE}" && test -z "${SPARC_FALSE}"; then as_fn_error $? "conditional \"SPARC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 @@ -14855,6 +15051,10 @@ as_fn_error $? "conditional \"M68K\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi +if test -z "${MICROBLAZE_TRUE}" && test -z "${MICROBLAZE_FALSE}"; then + as_fn_error $? "conditional \"MICROBLAZE\" was never defined. +Usually this means the macro was only invoked conditionally." "$LINENO" 5 +fi if test -z "${MOXIE_TRUE}" && test -z "${MOXIE_FALSE}"; then as_fn_error $? "conditional \"MOXIE\" was never defined. 
Usually this means the macro was only invoked conditionally." "$LINENO" 5 @@ -14875,6 +15075,10 @@ as_fn_error $? "conditional \"POWERPC_FREEBSD\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi +if test -z "${AARCH64_TRUE}" && test -z "${AARCH64_FALSE}"; then + as_fn_error $? "conditional \"AARCH64\" was never defined. +Usually this means the macro was only invoked conditionally." "$LINENO" 5 +fi if test -z "${ARM_TRUE}" && test -z "${ARM_FALSE}"; then as_fn_error $? "conditional \"ARM\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 @@ -14919,6 +15123,14 @@ as_fn_error $? "conditional \"PA64_HPUX\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi +if test -z "${TILE_TRUE}" && test -z "${TILE_FALSE}"; then + as_fn_error $? "conditional \"TILE\" was never defined. +Usually this means the macro was only invoked conditionally." "$LINENO" 5 +fi +if test -z "${XTENSA_TRUE}" && test -z "${XTENSA_FALSE}"; then + as_fn_error $? "conditional \"XTENSA\" was never defined. +Usually this means the macro was only invoked conditionally." "$LINENO" 5 +fi if test -z "${FFI_EXEC_TRAMPOLINE_TABLE_TRUE}" && test -z "${FFI_EXEC_TRAMPOLINE_TABLE_FALSE}"; then as_fn_error $? "conditional \"FFI_EXEC_TRAMPOLINE_TABLE\" was never defined. @@ -14928,6 +15140,10 @@ as_fn_error $? "conditional \"FFI_DEBUG\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi +if test -z "${FFI_DEBUG_TRUE}" && test -z "${FFI_DEBUG_FALSE}"; then + as_fn_error $? "conditional \"FFI_DEBUG\" was never defined. +Usually this means the macro was only invoked conditionally." "$LINENO" 5 +fi : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 @@ -15226,16 +15442,16 @@ # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. 
- # In both cases, we have to default to `cp -p'. + # In both cases, we have to default to `cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || - as_ln_s='cp -p' + as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else - as_ln_s='cp -p' - fi -else - as_ln_s='cp -p' + as_ln_s='cp -pR' + fi +else + as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null @@ -15295,28 +15511,16 @@ as_mkdir_p=false fi -if test -x / >/dev/null 2>&1; then - as_test_x='test -x' -else - if ls -dL / >/dev/null 2>&1; then - as_ls_L_option=L - else - as_ls_L_option= - fi - as_test_x=' - eval sh -c '\'' - if test -d "$1"; then - test -d "$1/."; - else - case $1 in #( - -*)set "./$1";; - esac; - case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in #(( - ???[sx]*):;;*)false;;esac;fi - '\'' sh - ' -fi -as_executable_p=$as_test_x + +# as_fn_executable_p FILE +# ----------------------- +# Test if FILE is an executable regular file. +as_fn_executable_p () +{ + test -f "$1" && test -x "$1" +} # as_fn_executable_p +as_test_x='test -x' +as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" @@ -15337,8 +15541,8 @@ # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" -This file was extended by libffi $as_me 3.0.11, which was -generated by GNU Autoconf 2.68. Invocation command line was +This file was extended by libffi $as_me 3.0.12, which was +generated by GNU Autoconf 2.69. 
Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS @@ -15407,11 +15611,11 @@ cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ -libffi config.status 3.0.11 -configured by $0, generated by GNU Autoconf 2.68, +libffi config.status 3.0.12 +configured by $0, generated by GNU Autoconf 2.69, with options \\"\$ac_cs_config\\" -Copyright (C) 2010 Free Software Foundation, Inc. +Copyright (C) 2012 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." @@ -15502,7 +15706,7 @@ _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then - set X '$SHELL' '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion + set X $SHELL '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \$as_echo "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' @@ -16622,7 +16826,7 @@ # Strip MF so we end up with the name of the file. mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. - # We used to match only the files named `Makefile.in', but + # We used to match only the files named 'Makefile.in', but # some people rename them; so instead we look at the file content. # Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. @@ -16656,21 +16860,19 @@ continue fi # Extract the definition of DEPDIR, am__include, and am__quote - # from the Makefile without running `make'. + # from the Makefile without running 'make'. 
DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` - # When using ansi2knr, U may be empty or an underscore; expand it - U=`sed -n 's/^U = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ - sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do + sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`$as_dirname -- "$file" || diff --git a/Modules/_ctypes/libffi/configure.ac b/Modules/_ctypes/libffi/configure.ac --- a/Modules/_ctypes/libffi/configure.ac +++ b/Modules/_ctypes/libffi/configure.ac @@ -5,7 +5,7 @@ AC_PREREQ(2.68) -AC_INIT([libffi], [3.0.11], [http://github.com/atgreen/libffi/issues]) +AC_INIT([libffi], [3.0.12], [http://github.com/atgreen/libffi/issues]) AC_CONFIG_HEADERS([fficonfig.h]) AC_CANONICAL_SYSTEM @@ -30,7 +30,7 @@ AC_PROG_CC CFLAGS=$save_CFLAGS m4_undefine([_AC_ARG_VAR_PRECIOUS]) -m4_rename([real_PRECIOUS],[_AC_ARG_VAR_PRECIOUS]) +m4_rename_force([real_PRECIOUS],[_AC_ARG_VAR_PRECIOUS]) AC_SUBST(CFLAGS) @@ -39,10 +39,24 @@ AC_PROG_LIBTOOL AC_CONFIG_MACRO_DIR([m4]) +# Test for 64-bit build. +AC_CHECK_SIZEOF([size_t]) + +AX_COMPILER_VENDOR AX_CC_MAXOPT -AX_CFLAGS_WARN_ALL +# The AX_CFLAGS_WARN_ALL macro doesn't currently work for sunpro +# compiler. 
+if test "$ax_cv_c_compiler_vendor" != "sun"; then + AX_CFLAGS_WARN_ALL +fi + if test "x$GCC" = "xyes"; then CFLAGS="$CFLAGS -fexceptions" + touch local.exp +else + cat > local.exp < /dev/null]) +AM_CONDITIONAL(BFIN, test x$TARGET = xBFIN) AM_CONDITIONAL(SPARC, test x$TARGET = xSPARC) AM_CONDITIONAL(X86, test x$TARGET = xX86) AM_CONDITIONAL(X86_FREEBSD, test x$TARGET = xX86_FREEBSD) @@ -229,11 +284,13 @@ AM_CONDITIONAL(IA64, test x$TARGET = xIA64) AM_CONDITIONAL(M32R, test x$TARGET = xM32R) AM_CONDITIONAL(M68K, test x$TARGET = xM68K) +AM_CONDITIONAL(MICROBLAZE, test x$TARGET = xMICROBLAZE) AM_CONDITIONAL(MOXIE, test x$TARGET = xMOXIE) AM_CONDITIONAL(POWERPC, test x$TARGET = xPOWERPC) AM_CONDITIONAL(POWERPC_AIX, test x$TARGET = xPOWERPC_AIX) AM_CONDITIONAL(POWERPC_DARWIN, test x$TARGET = xPOWERPC_DARWIN) AM_CONDITIONAL(POWERPC_FREEBSD, test x$TARGET = xPOWERPC_FREEBSD) +AM_CONDITIONAL(AARCH64, test x$TARGET = xAARCH64) AM_CONDITIONAL(ARM, test x$TARGET = xARM) AM_CONDITIONAL(AVR32, test x$TARGET = xAVR32) AM_CONDITIONAL(LIBFFI_CRIS, test x$TARGET = xLIBFFI_CRIS) @@ -245,6 +302,8 @@ AM_CONDITIONAL(PA_LINUX, test x$TARGET = xPA_LINUX) AM_CONDITIONAL(PA_HPUX, test x$TARGET = xPA_HPUX) AM_CONDITIONAL(PA64_HPUX, test x$TARGET = xPA64_HPUX) +AM_CONDITIONAL(TILE, test x$TARGET = xTILE) +AM_CONDITIONAL(XTENSA, test x$TARGET = xXTENSA) AC_HEADER_STDC AC_CHECK_FUNCS(memcpy) @@ -290,7 +349,7 @@ libffi_cv_as_register_pseudo_op, [ libffi_cv_as_register_pseudo_op=unknown # Check if we have .register - AC_TRY_COMPILE([asm (".register %g2, #scratch");],, + AC_TRY_COMPILE(,[asm (".register %g2, #scratch");], [libffi_cv_as_register_pseudo_op=yes], [libffi_cv_as_register_pseudo_op=no]) ]) @@ -318,7 +377,7 @@ libffi_cv_as_ascii_pseudo_op, [ libffi_cv_as_ascii_pseudo_op=unknown # Check if we have .ascii - AC_TRY_COMPILE([asm (".ascii \\"string\\"");],, + AC_TRY_COMPILE(,[asm (".ascii \\"string\\"");], [libffi_cv_as_ascii_pseudo_op=yes], [libffi_cv_as_ascii_pseudo_op=no]) ]) @@ -331,7 
+390,7 @@ libffi_cv_as_string_pseudo_op, [ libffi_cv_as_string_pseudo_op=unknown # Check if we have .string - AC_TRY_COMPILE([asm (".string \\"string\\"");],, + AC_TRY_COMPILE(,[asm (".string \\"string\\"");], [libffi_cv_as_string_pseudo_op=yes], [libffi_cv_as_string_pseudo_op=no]) ]) @@ -341,6 +400,14 @@ fi fi +# On PaX enable kernels that have MPROTECT enable we can't use PROT_EXEC. +AC_ARG_ENABLE(pax_emutramp, + [ --enable-pax_emutramp enable pax emulated trampolines, for we can't use PROT_EXEC], + if test "$enable_pax_emutramp" = "yes"; then + AC_DEFINE(FFI_MMAP_EXEC_EMUTRAMP_PAX, 1, + [Define this if you want to enable pax emulated trampolines]) + fi) + if test x$TARGET = xX86_WIN64; then LT_SYS_SYMBOL_USCORE if test "x$sys_symbol_underscore" = xyes; then @@ -348,7 +415,6 @@ fi fi - FFI_EXEC_TRAMPOLINE_TABLE=0 case "$target" in *arm*-apple-darwin*) @@ -357,7 +423,7 @@ [Cannot use PROT_EXEC on this target, so, we revert to alternative means]) ;; - *-apple-darwin1[[10]]* | *-*-freebsd* | *-*-kfreebsd* | *-*-openbsd* | *-pc-solaris*) + *-apple-darwin1* | *-*-freebsd* | *-*-kfreebsd* | *-*-openbsd* | *-pc-solaris*) AC_DEFINE(FFI_MMAP_EXEC_WRIT, 1, [Cannot use malloc on this target, so, we revert to alternative means]) @@ -386,11 +452,12 @@ libffi_cv_ro_eh_frame, [ libffi_cv_ro_eh_frame=no echo 'extern void foo (void); void bar (void) { foo (); foo (); }' > conftest.c - if $CC $CFLAGS -S -fpic -fexceptions -o conftest.s conftest.c > /dev/null 2>&1; then - if grep '.section.*eh_frame.*"a"' conftest.s > /dev/null; then - libffi_cv_ro_eh_frame=yes - elif grep '.section.*eh_frame.*#alloc' conftest.c \ - | grep -v '#write' > /dev/null; then + if $CC $CFLAGS -c -fpic -fexceptions -o conftest.o conftest.c > /dev/null 2>&1; then + objdump -h conftest.o > conftest.dump 2>&1 + libffi_eh_frame_line=`grep -n eh_frame conftest.dump | cut -d: -f 1` + libffi_test_line=`expr $libffi_eh_frame_line + 1`p + sed -n $libffi_test_line conftest.dump > conftest.line + if grep READONLY 
conftest.line > /dev/null; then libffi_cv_ro_eh_frame=yes fi fi @@ -456,6 +523,7 @@ if test "$enable_structs" = "no"; then AC_DEFINE(FFI_NO_STRUCTS, 1, [Define this is you do not want support for aggregate types.]) fi) +AM_CONDITIONAL(FFI_DEBUG, test "$enable_debug" = "yes") AC_ARG_ENABLE(raw-api, [ --disable-raw-api make the raw api unavailable], @@ -471,7 +539,7 @@ # These variables are only ever used when we cross-build to X86_WIN32. # And we only support this with GCC, so... -if test x"$GCC" != x"no"; then +if test "x$GCC" = "xyes"; then if test -n "$with_cross_host" && test x"$with_cross_host" != x"no"; then toolexecdir='$(exec_prefix)/$(target_alias)' @@ -486,14 +554,10 @@ *) toolexeclibdir=$toolexeclibdir/$multi_os_directory ;; esac AC_SUBST(toolexecdir) - AC_SUBST(toolexeclibdir) +else + toolexeclibdir='$(libdir)' fi - -if test "${multilib}" = "yes"; then - multilib_arg="--enable-multilib" -else - multilib_arg= -fi +AC_SUBST(toolexeclibdir) AC_CONFIG_COMMANDS(include, [test -d include || mkdir include]) AC_CONFIG_COMMANDS(src, [ diff --git a/Modules/_ctypes/libffi/doc/libffi.info b/Modules/_ctypes/libffi/doc/libffi.info index 402f760804132933839c909e930855203b6b5264..7b3d7565751d015d0b0fd239f2473cf270fbebf4 GIT binary patch [stripped] diff --git a/Modules/_ctypes/libffi/doc/stamp-vti b/Modules/_ctypes/libffi/doc/stamp-vti --- a/Modules/_ctypes/libffi/doc/stamp-vti +++ b/Modules/_ctypes/libffi/doc/stamp-vti @@ -1,4 +1,4 @@ - at set UPDATED 11 April 2012 - at set UPDATED-MONTH April 2012 - at set EDITION 3.0.11 - at set VERSION 3.0.11 + at set UPDATED 6 February 2013 + at set UPDATED-MONTH February 2013 + at set EDITION 3.0.12 + at set VERSION 3.0.12 diff --git a/Modules/_ctypes/libffi/doc/version.texi b/Modules/_ctypes/libffi/doc/version.texi --- a/Modules/_ctypes/libffi/doc/version.texi +++ b/Modules/_ctypes/libffi/doc/version.texi @@ -1,4 +1,4 @@ - at set UPDATED 11 April 2012 - at set UPDATED-MONTH April 2012 - at set EDITION 3.0.11 - at set VERSION 
3.0.11 + at set UPDATED 6 February 2013 + at set UPDATED-MONTH February 2013 + at set EDITION 3.0.12 + at set VERSION 3.0.12 diff --git a/Modules/_ctypes/libffi/fficonfig.h.in b/Modules/_ctypes/libffi/fficonfig.h.in --- a/Modules/_ctypes/libffi/fficonfig.h.in +++ b/Modules/_ctypes/libffi/fficonfig.h.in @@ -20,6 +20,9 @@ /* Cannot use PROT_EXEC on this target, so, we revert to alternative means */ #undef FFI_EXEC_TRAMPOLINE_TABLE +/* Define this if you want to enable pax emulated trampolines */ +#undef FFI_MMAP_EXEC_EMUTRAMP_PAX + /* Cannot use malloc on this target, so, we revert to alternative means */ #undef FFI_MMAP_EXEC_WRIT diff --git a/Modules/_ctypes/libffi/fficonfig.py.in b/Modules/_ctypes/libffi/fficonfig.py.in --- a/Modules/_ctypes/libffi/fficonfig.py.in +++ b/Modules/_ctypes/libffi/fficonfig.py.in @@ -11,6 +11,7 @@ 'X86_FREEBSD': ['src/x86/ffi.c', 'src/x86/freebsd.S'], 'X86_WIN32': ['src/x86/ffi.c', 'src/x86/win32.S'], 'SPARC': ['src/sparc/ffi.c', 'src/sparc/v8.S', 'src/sparc/v9.S'], + 'AARCH64': ['src/aarch64/ffi.c', 'src/aarch64/sysv.S'], 'ALPHA': ['src/alpha/ffi.c', 'src/alpha/osf.S'], 'IA64': ['src/ia64/ffi.c', 'src/ia64/unix.S'], 'M32R': ['src/m32r/sysv.S', 'src/m32r/ffi.c'], diff --git a/Modules/_ctypes/libffi/include/Makefile.in b/Modules/_ctypes/libffi/include/Makefile.in --- a/Modules/_ctypes/libffi/include/Makefile.in +++ b/Modules/_ctypes/libffi/include/Makefile.in @@ -1,9 +1,8 @@ -# Makefile.in generated by automake 1.11.3 from Makefile.am. +# Makefile.in generated by automake 1.12.2 from Makefile.am. # @configure_input@ -# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, -# 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software -# Foundation, Inc. +# Copyright (C) 1994-2012 Free Software Foundation, Inc. + # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. 
@@ -16,6 +15,23 @@ @SET_MAKE@ VPATH = @srcdir@ +am__make_dryrun = \ + { \ + am__dry=no; \ + case $$MAKEFLAGS in \ + *\\[\ \ ]*) \ + echo 'am--echo: ; @echo "AM" OK' | $(MAKE) -f - 2>/dev/null \ + | grep '^AM OK$$' >/dev/null || am__dry=yes;; \ + *) \ + for am__flg in $$MAKEFLAGS; do \ + case $$am__flg in \ + *=*|--*) ;; \ + *n*) am__dry=yes; break;; \ + esac; \ + done;; \ + esac; \ + test $$am__dry = yes; \ + } pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ @@ -37,18 +53,35 @@ target_triplet = @target@ subdir = include DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \ - $(srcdir)/ffi.h.in $(srcdir)/ffi_common.h + $(srcdir)/ffi.h.in ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 -am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \ +am__aclocal_m4_deps = $(top_srcdir)/m4/asmcfi.m4 \ + $(top_srcdir)/m4/ax_append_flag.m4 \ + $(top_srcdir)/m4/ax_cc_maxopt.m4 \ + $(top_srcdir)/m4/ax_cflags_warn_all.m4 \ + $(top_srcdir)/m4/ax_check_compile_flag.m4 \ + $(top_srcdir)/m4/ax_compiler_vendor.m4 \ + $(top_srcdir)/m4/ax_configure_args.m4 \ + $(top_srcdir)/m4/ax_enable_builddir.m4 \ + $(top_srcdir)/m4/ax_gcc_archflag.m4 \ + $(top_srcdir)/m4/ax_gcc_x86_cpuid.m4 \ + $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ + $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ + $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/acinclude.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/fficonfig.h CONFIG_CLEAN_FILES = ffi.h ffitarget.h -CONFIG_CLEAN_VPATH_FILES = ffi_common.h +CONFIG_CLEAN_VPATH_FILES = SOURCES = DIST_SOURCES = +am__can_run_installinfo = \ + case $$AM_UPDATE_INFO_DIR in \ + n|no|NO) false;; \ + *) (install-info --version) >/dev/null 2>&1;; \ + esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo 
"$$p" | sed "s|^$$srcdirstrip/||"`;; \ @@ -145,6 +178,7 @@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ +PRTDIAG = @PRTDIAG@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ @@ -165,6 +199,7 @@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ +ax_enable_builddir_sed = @ax_enable_builddir_sed@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ @@ -200,6 +235,7 @@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ +sys_symbol_underscore = @sys_symbol_underscore@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ @@ -259,8 +295,11 @@ -rm -rf .libs _libs install-nodist_includesHEADERS: $(nodist_includes_HEADERS) @$(NORMAL_INSTALL) - test -z "$(includesdir)" || $(MKDIR_P) "$(DESTDIR)$(includesdir)" @list='$(nodist_includes_HEADERS)'; test -n "$(includesdir)" || list=; \ + if test -n "$$list"; then \ + echo " $(MKDIR_P) '$(DESTDIR)$(includesdir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(includesdir)" || exit 1; \ + fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ @@ -325,6 +364,20 @@ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" +cscopelist: $(HEADERS) $(SOURCES) $(LISP) + list='$(SOURCES) $(HEADERS) $(LISP)'; \ + case "$(srcdir)" in \ + [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ + *) sdir=$(subdir)/$(srcdir) ;; \ + esac; \ + for i in $$list; do \ + if test -f "$$i"; then \ + echo "$(subdir)/$$i"; \ + else \ + echo "$$sdir/$$i"; \ + fi; \ + done >> $(top_builddir)/cscope.files + distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags @@ -465,7 +518,7 @@ .MAKE: install-am install-strip .PHONY: CTAGS GTAGS all all-am check check-am clean clean-generic \ - clean-libtool ctags distclean distclean-generic \ + clean-libtool cscopelist ctags distclean distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ 
install-data-am install-dvi install-dvi-am install-exec \ diff --git a/Modules/_ctypes/libffi/include/ffi_common.h b/Modules/_ctypes/libffi/include/ffi_common.h --- a/Modules/_ctypes/libffi/include/ffi_common.h +++ b/Modules/_ctypes/libffi/include/ffi_common.h @@ -87,7 +87,7 @@ } extended_cif; /* Terse sized type definitions. */ -#if defined(_MSC_VER) || defined(__sgi) +#if defined(_MSC_VER) || defined(__sgi) || defined(__SUNPRO_C) typedef unsigned char UINT8; typedef signed char SINT8; typedef unsigned short UINT16; diff --git a/Modules/_ctypes/libffi/libffi.xcodeproj/project.pbxproj b/Modules/_ctypes/libffi/libffi.xcodeproj/project.pbxproj --- a/Modules/_ctypes/libffi/libffi.xcodeproj/project.pbxproj +++ b/Modules/_ctypes/libffi/libffi.xcodeproj/project.pbxproj @@ -12,17 +12,12 @@ 6C43CBDE1534F76F00162364 /* trampoline.S in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CBC01534F76F00162364 /* trampoline.S */; }; 6C43CBE61534F76F00162364 /* darwin.S in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CBC91534F76F00162364 /* darwin.S */; }; 6C43CBE81534F76F00162364 /* ffi.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CBCB1534F76F00162364 /* ffi.c */; }; - 6C43CBE91534F76F00162364 /* ffi64.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CBCC1534F76F00162364 /* ffi64.c */; }; 6C43CC1F1534F77800162364 /* darwin.S in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC051534F77800162364 /* darwin.S */; }; 6C43CC201534F77800162364 /* darwin64.S in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC061534F77800162364 /* darwin64.S */; }; 6C43CC211534F77800162364 /* ffi.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC071534F77800162364 /* ffi.c */; }; 6C43CC221534F77800162364 /* ffi64.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC081534F77800162364 /* ffi64.c */; }; 6C43CC2F1534F7BE00162364 /* closures.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC281534F7BE00162364 /* closures.c */; }; 6C43CC301534F7BE00162364 /* closures.c in Sources 
*/ = {isa = PBXBuildFile; fileRef = 6C43CC281534F7BE00162364 /* closures.c */; }; - 6C43CC311534F7BE00162364 /* debug.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC291534F7BE00162364 /* debug.c */; }; - 6C43CC321534F7BE00162364 /* debug.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC291534F7BE00162364 /* debug.c */; }; - 6C43CC331534F7BE00162364 /* dlmalloc.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC2A1534F7BE00162364 /* dlmalloc.c */; }; - 6C43CC341534F7BE00162364 /* dlmalloc.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC2A1534F7BE00162364 /* dlmalloc.c */; }; 6C43CC351534F7BE00162364 /* java_raw_api.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC2B1534F7BE00162364 /* java_raw_api.c */; }; 6C43CC361534F7BE00162364 /* java_raw_api.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC2B1534F7BE00162364 /* java_raw_api.c */; }; 6C43CC371534F7BE00162364 /* prep_cif.c in Sources */ = {isa = PBXBuildFile; fileRef = 6C43CC2C1534F7BE00162364 /* prep_cif.c */; }; @@ -61,14 +56,11 @@ 6C43CBC01534F76F00162364 /* trampoline.S */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.asm; path = trampoline.S; sourceTree = ""; }; 6C43CBC91534F76F00162364 /* darwin.S */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.asm; path = darwin.S; sourceTree = ""; }; 6C43CBCB1534F76F00162364 /* ffi.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = ffi.c; sourceTree = ""; }; - 6C43CBCC1534F76F00162364 /* ffi64.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = ffi64.c; sourceTree = ""; }; 6C43CC051534F77800162364 /* darwin.S */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.asm; path = darwin.S; sourceTree = ""; }; 6C43CC061534F77800162364 /* darwin64.S */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.asm; path = darwin64.S; sourceTree = ""; }; 
6C43CC071534F77800162364 /* ffi.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = ffi.c; sourceTree = ""; }; 6C43CC081534F77800162364 /* ffi64.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = ffi64.c; sourceTree = ""; }; 6C43CC281534F7BE00162364 /* closures.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = closures.c; path = src/closures.c; sourceTree = SOURCE_ROOT; }; - 6C43CC291534F7BE00162364 /* debug.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = debug.c; path = src/debug.c; sourceTree = SOURCE_ROOT; }; - 6C43CC2A1534F7BE00162364 /* dlmalloc.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = dlmalloc.c; path = src/dlmalloc.c; sourceTree = SOURCE_ROOT; }; 6C43CC2B1534F7BE00162364 /* java_raw_api.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = java_raw_api.c; path = src/java_raw_api.c; sourceTree = SOURCE_ROOT; }; 6C43CC2C1534F7BE00162364 /* prep_cif.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = prep_cif.c; path = src/prep_cif.c; sourceTree = SOURCE_ROOT; }; 6C43CC2D1534F7BE00162364 /* raw_api.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = raw_api.c; path = src/raw_api.c; sourceTree = SOURCE_ROOT; }; @@ -149,7 +141,6 @@ children = ( 6C43CBC91534F76F00162364 /* darwin.S */, 6C43CBCB1534F76F00162364 /* ffi.c */, - 6C43CBCC1534F76F00162364 /* ffi64.c */, ); path = x86; sourceTree = ""; @@ -187,8 +178,6 @@ isa = PBXGroup; children = ( 6C43CC281534F7BE00162364 /* closures.c */, - 6C43CC291534F7BE00162364 /* debug.c */, - 6C43CC2A1534F7BE00162364 /* dlmalloc.c */, 6C43CC2B1534F7BE00162364 /* java_raw_api.c */, 6C43CC2C1534F7BE00162364 /* prep_cif.c */, 6C43CC2D1534F7BE00162364 /* raw_api.c */, @@ -412,8 +401,6 @@ 
6C43CC211534F77800162364 /* ffi.c in Sources */, 6C43CC221534F77800162364 /* ffi64.c in Sources */, 6C43CC301534F7BE00162364 /* closures.c in Sources */, - 6C43CC321534F7BE00162364 /* debug.c in Sources */, - 6C43CC341534F7BE00162364 /* dlmalloc.c in Sources */, 6C43CC361534F7BE00162364 /* java_raw_api.c in Sources */, 6C43CC381534F7BE00162364 /* prep_cif.c in Sources */, 6C43CC3A1534F7BE00162364 /* raw_api.c in Sources */, @@ -430,10 +417,7 @@ 6C43CBDE1534F76F00162364 /* trampoline.S in Sources */, 6C43CBE61534F76F00162364 /* darwin.S in Sources */, 6C43CBE81534F76F00162364 /* ffi.c in Sources */, - 6C43CBE91534F76F00162364 /* ffi64.c in Sources */, 6C43CC2F1534F7BE00162364 /* closures.c in Sources */, - 6C43CC311534F7BE00162364 /* debug.c in Sources */, - 6C43CC331534F7BE00162364 /* dlmalloc.c in Sources */, 6C43CC351534F7BE00162364 /* java_raw_api.c in Sources */, 6C43CC371534F7BE00162364 /* prep_cif.c in Sources */, 6C43CC391534F7BE00162364 /* raw_api.c in Sources */, diff --git a/Modules/_ctypes/libffi/libtool-ldflags b/Modules/_ctypes/libffi/libtool-ldflags new file mode 100755 --- /dev/null +++ b/Modules/_ctypes/libffi/libtool-ldflags @@ -0,0 +1,106 @@ +#! /bin/sh + +# Script to translate LDFLAGS into a form suitable for use with libtool. + +# Copyright (C) 2005 Free Software Foundation, Inc. +# +# This file is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. 
+# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, +# MA 02110-1301, USA. + +# Contributed by CodeSourcery, LLC. + +# This script is designed to be used from a Makefile that uses libtool +# to build libraries as follows: +# +# LTLDFLAGS = $(shell libtool-ldflags $(LDFLAGS)) +# +# Then, use $(LTLDFLAGS) in place of $(LDFLAGS) in your link line. + +# The output of the script. This string is built up as we process the +# arguments. +result= +prev_arg= + +for arg +do + case $arg in + -f*|--*) + # Libtool does not ascribe any special meaning to options + # that begin with -f or with a double-dash. So, it will + # think these options are linker options, and prefix them + # with "-Wl,". Then, the compiler driver will ignore the + # options. So, we prefix these options with -Xcompiler to + # make clear to libtool that they are in fact compiler + # options. + case $prev_arg in + -Xpreprocessor|-Xcompiler|-Xlinker) + # This option is already prefixed; don't prefix it again. + ;; + *) + result="$result -Xcompiler" + ;; + esac + ;; + *) + # We do not want to add -Xcompiler to other options because + # that would prevent libtool itself from recognizing them. + ;; + esac + prev_arg=$arg + + # If $(LDFLAGS) is (say): + # a "b'c d" e + # then the user expects that: + # $(LD) $(LDFLAGS) + # will pass three arguments to $(LD): + # 1) a + # 2) b'c d + # 3) e + # We must ensure, therefore, that the arguments are appropriately + # quoted so that using: + # libtool --mode=link ... $(LTLDFLAGS) + # will result in the same number of arguments being passed to + # libtool. In other words, when this script was invoked, the shell + # removed one level of quoting, present in $(LDFLAGS); we have to put + # it back. + + # Quote any embedded single quotes.
+ case $arg in + *"'"*) + # The following command creates the script: + # 1s,^X,,;s|'|'"'"'|g + # which removes a leading X, and then quotes any embedded single + # quotes. + sed_script="1s,^X,,;s|'|'\"'\"'|g" + # Add a leading "X" so that if $arg starts with a dash, + # the echo command will not try to interpret the argument + # as a command-line option. + arg="X$arg" + # Generate the quoted string. + quoted_arg=`echo "$arg" | sed -e "$sed_script"` + ;; + *) + quoted_arg=$arg + ;; + esac + # Surround the entire argument with single quotes. + quoted_arg="'"$quoted_arg"'" + + # Add it to the string. + result="$result $quoted_arg" +done + +# Output the string we have built up. +echo "$result" diff --git a/Modules/_ctypes/libffi/libtool-version b/Modules/_ctypes/libffi/libtool-version --- a/Modules/_ctypes/libffi/libtool-version +++ b/Modules/_ctypes/libffi/libtool-version @@ -26,4 +26,4 @@ # release, then set age to 0. # # CURRENT:REVISION:AGE -6:0:0 +6:1:0 diff --git a/Modules/_ctypes/libffi/ltmain.sh b/Modules/_ctypes/libffi/ltmain.sh --- a/Modules/_ctypes/libffi/ltmain.sh +++ b/Modules/_ctypes/libffi/ltmain.sh @@ -70,7 +70,7 @@ # compiler: $LTCC # compiler flags: $LTCFLAGS # linker: $LD (gnu? $with_gnu_ld) -# $progname: (GNU libtool) 2.4.2 Debian-2.4.2-1ubuntu1 +# $progname: (GNU libtool) 2.4.2 # automake: $automake_version # autoconf: $autoconf_version # @@ -80,7 +80,7 @@ PROGRAM=libtool PACKAGE=libtool -VERSION="2.4.2 Debian-2.4.2-1ubuntu1" +VERSION=2.4.2 TIMESTAMP="" package_revision=1.3337 @@ -6124,10 +6124,7 @@ case $pass in dlopen) libs="$dlfiles" ;; dlpreopen) libs="$dlprefiles" ;; - link) - libs="$deplibs %DEPLIBS%" - test "X$link_all_deplibs" != Xno && libs="$libs $dependency_libs" - ;; + link) libs="$deplibs %DEPLIBS% $dependency_libs" ;; esac fi if test "$linkmode,$pass" = "lib,dlpreopen"; then @@ -6447,19 +6444,19 @@ # It is a libtool convenience library, so add in its objects.
func_append convenience " $ladir/$objdir/$old_library" func_append old_convenience " $ladir/$objdir/$old_library" - tmp_libs= - for deplib in $dependency_libs; do - deplibs="$deplib $deplibs" - if $opt_preserve_dup_deps ; then - case "$tmp_libs " in - *" $deplib "*) func_append specialdeplibs " $deplib" ;; - esac - fi - func_append tmp_libs " $deplib" - done elif test "$linkmode" != prog && test "$linkmode" != lib; then func_fatal_error "\`$lib' is not a convenience library" fi + tmp_libs= + for deplib in $dependency_libs; do + deplibs="$deplib $deplibs" + if $opt_preserve_dup_deps ; then + case "$tmp_libs " in + *" $deplib "*) func_append specialdeplibs " $deplib" ;; + esac + fi + func_append tmp_libs " $deplib" + done continue fi # $pass = conv @@ -7352,9 +7349,6 @@ revision="$number_minor" lt_irix_increment=no ;; - *) - func_fatal_configuration "$modename: unknown library version type \`$version_type'" - ;; esac ;; no) diff --git a/Modules/_ctypes/libffi/m4/ax_cc_maxopt.m4 b/Modules/_ctypes/libffi/m4/ax_cc_maxopt.m4 --- a/Modules/_ctypes/libffi/m4/ax_cc_maxopt.m4 +++ b/Modules/_ctypes/libffi/m4/ax_cc_maxopt.m4 @@ -55,7 +55,7 @@ # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. 
-#serial 12 +#serial 13 AC_DEFUN([AX_CC_MAXOPT], [ @@ -64,7 +64,7 @@ AC_REQUIRE([AC_CANONICAL_HOST]) AC_ARG_ENABLE(portable-binary, [AS_HELP_STRING([--enable-portable-binary], [disable compiler optimizations that would produce unportable binaries])], - acx_maxopt_portable=$withval, acx_maxopt_portable=no) + acx_maxopt_portable=$enableval, acx_maxopt_portable=no) # Try to determine "good" native compiler flags if none specified via CFLAGS if test "$ac_test_CFLAGS" != "set"; then @@ -141,7 +141,8 @@ CFLAGS="-O3 -fomit-frame-pointer" # -malign-double for x86 systems - AX_CHECK_COMPILE_FLAG(-malign-double, CFLAGS="$CFLAGS -malign-double") + # LIBFFI -- DON'T DO THIS - CHANGES ABI + # AX_CHECK_COMPILE_FLAG(-malign-double, CFLAGS="$CFLAGS -malign-double") # -fstrict-aliasing for gcc-2.95+ AX_CHECK_COMPILE_FLAG(-fstrict-aliasing, diff --git a/Modules/_ctypes/libffi/m4/ax_cflags_warn_all.m4 b/Modules/_ctypes/libffi/m4/ax_cflags_warn_all.m4 --- a/Modules/_ctypes/libffi/m4/ax_cflags_warn_all.m4 +++ b/Modules/_ctypes/libffi/m4/ax_cflags_warn_all.m4 @@ -58,7 +58,7 @@ # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. -#serial 13 +#serial 14 AC_DEFUN([AX_FLAGS_WARN_ALL],[dnl AS_VAR_PUSHDEF([FLAGS],[_AC_LANG_PREFIX[]FLAGS])dnl @@ -84,6 +84,7 @@ FLAGS="$ac_save_[]FLAGS" ]) AS_VAR_POPDEF([FLAGS])dnl +AC_REQUIRE([AX_APPEND_FLAG]) case ".$VAR" in .ok|.ok,*) m4_ifvaln($3,$3) ;; .|.no|.no,*) m4_default($4,[m4_ifval($2,[AX_APPEND_FLAG([$2], [$1])])]) ;; diff --git a/Modules/_ctypes/libffi/m4/ax_gcc_archflag.m4 b/Modules/_ctypes/libffi/m4/ax_gcc_archflag.m4 --- a/Modules/_ctypes/libffi/m4/ax_gcc_archflag.m4 +++ b/Modules/_ctypes/libffi/m4/ax_gcc_archflag.m4 @@ -36,6 +36,7 @@ # # Copyright (c) 2008 Steven G. 
Johnson # Copyright (c) 2008 Matteo Frigo +# Copyright (c) 2012 Tsukasa Oi # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the @@ -63,7 +64,7 @@ # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. -#serial 10 +#serial 11 AC_DEFUN([AX_GCC_ARCHFLAG], [AC_REQUIRE([AC_PROG_CC]) @@ -84,7 +85,7 @@ ax_gcc_arch="" if test "$cross_compiling" = no; then case $host_cpu in - i[[3456]]86*|x86_64*) # use cpuid codes, in part from x86info-1.7 by D. Jones + i[[3456]]86*|x86_64*) # use cpuid codes AX_GCC_X86_CPUID(0) AX_GCC_X86_CPUID(1) case $ax_cv_gcc_x86_cpuid_0 in @@ -92,18 +93,24 @@ case $ax_cv_gcc_x86_cpuid_1 in *5[[48]]?:*:*:*) ax_gcc_arch="pentium-mmx pentium" ;; *5??:*:*:*) ax_gcc_arch=pentium ;; - *6[[3456]]?:*:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; - *6a?:*[[01]]:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; - *6a?:*[[234]]:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; - *6[[9d]]?:*:*:*) ax_gcc_arch="pentium-m pentium3 pentiumpro" ;; - *6[[78b]]?:*:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; - *6??:*:*:*) ax_gcc_arch=pentiumpro ;; - *f3[[347]]:*:*:*|*f4[1347]:*:*:*) + *0?6[[3456]]?:*:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; + *0?6a?:*[[01]]:*:*) ax_gcc_arch="pentium2 pentiumpro" ;; + *0?6a?:*[[234]]:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; + *0?6[[9de]]?:*:*:*) ax_gcc_arch="pentium-m pentium3 pentiumpro" ;; + *0?6[[78b]]?:*:*:*) ax_gcc_arch="pentium3 pentiumpro" ;; + *0?6f?:*:*:*|*1?66?:*:*:*) ax_gcc_arch="core2 pentium-m pentium3 pentiumpro" ;; + *1?6[[7d]]?:*:*:*) ax_gcc_arch="penryn core2 pentium-m pentium3 pentiumpro" ;; + *1?6[[aef]]?:*:*:*|*2?6[[5cef]]?:*:*:*) ax_gcc_arch="corei7 core2 pentium-m pentium3 pentiumpro" ;; + *1?6c?:*:*:*|*[[23]]?66?:*:*:*) ax_gcc_arch="atom core2 pentium-m pentium3 pentiumpro" ;; + *2?6[[ad]]?:*:*:*) ax_gcc_arch="corei7-avx corei7 core2 pentium-m pentium3 
pentiumpro" ;; + *0?6??:*:*:*) ax_gcc_arch=pentiumpro ;; + *6??:*:*:*) ax_gcc_arch="core2 pentiumpro" ;; + ?000?f3[[347]]:*:*:*|?000?f4[1347]:*:*:*|?000?f6?:*:*:*) case $host_cpu in - x86_64*) ax_gcc_arch="nocona pentium4 pentiumpro" ;; - *) ax_gcc_arch="prescott pentium4 pentiumpro" ;; - esac ;; - *f??:*:*:*) ax_gcc_arch="pentium4 pentiumpro";; + x86_64*) ax_gcc_arch="nocona pentium4 pentiumpro" ;; + *) ax_gcc_arch="prescott pentium4 pentiumpro" ;; + esac ;; + ?000?f??:*:*:*) ax_gcc_arch="pentium4 pentiumpro";; esac ;; *:68747541:*:*) # AMD case $ax_cv_gcc_x86_cpuid_1 in @@ -121,10 +128,13 @@ ax_gcc_arch="athlon-xp athlon-4 athlon k7" ;; *) ax_gcc_arch="athlon-4 athlon k7" ;; esac ;; - *f[[4cef8b]]?:*:*:*) ax_gcc_arch="athlon64 k8" ;; - *f5?:*:*:*) ax_gcc_arch="opteron k8" ;; - *f7?:*:*:*) ax_gcc_arch="athlon-fx opteron k8" ;; - *f??:*:*:*) ax_gcc_arch="k8" ;; + ?00??f[[4cef8b]]?:*:*:*) ax_gcc_arch="athlon64 k8" ;; + ?00??f5?:*:*:*) ax_gcc_arch="opteron k8" ;; + ?00??f7?:*:*:*) ax_gcc_arch="athlon-fx opteron k8" ;; + ?00??f??:*:*:*) ax_gcc_arch="k8" ;; + ?05??f??:*:*:*) ax_gcc_arch="btver1 amdfam10 k8" ;; + ?06??f??:*:*:*) ax_gcc_arch="bdver1 amdfam10 k8" ;; + *f??:*:*:*) ax_gcc_arch="amdfam10 k8" ;; esac ;; *:746e6543:*:*) # IDT case $ax_cv_gcc_x86_cpuid_1 in diff --git a/Modules/_ctypes/libffi/m4/libtool.m4 b/Modules/_ctypes/libffi/m4/libtool.m4 --- a/Modules/_ctypes/libffi/m4/libtool.m4 +++ b/Modules/_ctypes/libffi/m4/libtool.m4 @@ -1324,7 +1324,14 @@ LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) - LD="${LD-ld} -m elf_i386" + case `/usr/bin/file conftest.o` in + *x86-64*) + LD="${LD-ld} -m elf32_x86_64" + ;; + *) + LD="${LD-ld} -m elf_i386" + ;; + esac ;; ppc64-*linux*|powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" @@ -1688,7 +1695,8 @@ ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` - if test -n "$lt_cv_sys_max_cmd_len"; then + if test -n "$lt_cv_sys_max_cmd_len" && \ + test undefined != "$lt_cv_sys_max_cmd_len"; then 
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else @@ -2669,10 +2677,14 @@ # before this can be enabled. hardcode_into_libs=yes + # Add ABI-specific directories to the system library path. + sys_lib_dlsearch_path_spec="/lib64 /usr/lib64 /lib /usr/lib" + # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` - sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" + sys_lib_dlsearch_path_spec="$sys_lib_dlsearch_path_spec $lt_ld_extra" + fi # We used to test for /lib/ld.so.1 and disable shared libraries on @@ -2684,18 +2696,6 @@ dynamic_linker='GNU/Linux ld.so' ;; -netbsdelf*-gnu) - version_type=linux - need_lib_prefix=no - need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' - shlibpath_var=LD_LIBRARY_PATH - shlibpath_overrides_runpath=no - hardcode_into_libs=yes - dynamic_linker='NetBSD ld.elf_so' - ;; - netbsd*) version_type=sunos need_lib_prefix=no @@ -3301,7 +3301,7 @@ lt_cv_deplibs_check_method=pass_all ;; -netbsd* | netbsdelf*-gnu) +netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' else @@ -4113,7 +4113,7 @@ ;; esac ;; - netbsd* | netbsdelf*-gnu) + netbsd*) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise @@ -4590,9 +4590,6 @@ ;; esac ;; - linux* | k*bsd*-gnu | gnu*) - _LT_TAGVAR(link_all_deplibs, $1)=no - ;; *) _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | 
sort | uniq > $export_symbols' ;; @@ -4655,9 +4652,6 @@ openbsd*) with_gnu_ld=no ;; - linux* | k*bsd*-gnu | gnu*) - _LT_TAGVAR(link_all_deplibs, $1)=no - ;; esac _LT_TAGVAR(ld_shlibs, $1)=yes @@ -4879,7 +4873,7 @@ fi ;; - netbsd* | netbsdelf*-gnu) + netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= @@ -5056,7 +5050,6 @@ if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi - _LT_TAGVAR(link_all_deplibs, $1)=no else # not using gcc if test "$host_cpu" = ia64; then @@ -5361,7 +5354,7 @@ _LT_TAGVAR(link_all_deplibs, $1)=yes ;; - netbsd* | netbsdelf*-gnu) + netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else diff --git a/Modules/_ctypes/libffi/man/Makefile.in b/Modules/_ctypes/libffi/man/Makefile.in --- a/Modules/_ctypes/libffi/man/Makefile.in +++ b/Modules/_ctypes/libffi/man/Makefile.in @@ -1,9 +1,8 @@ -# Makefile.in generated by automake 1.11.3 from Makefile.am. +# Makefile.in generated by automake 1.12.2 from Makefile.am. # @configure_input@ -# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, -# 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software -# Foundation, Inc. +# Copyright (C) 1994-2012 Free Software Foundation, Inc. + # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. 
@@ -15,6 +14,23 @@ @SET_MAKE@ VPATH = @srcdir@ +am__make_dryrun = \ + { \ + am__dry=no; \ + case $$MAKEFLAGS in \ + *\\[\ \ ]*) \ + echo 'am--echo: ; @echo "AM" OK' | $(MAKE) -f - 2>/dev/null \ + | grep '^AM OK$$' >/dev/null || am__dry=yes;; \ + *) \ + for am__flg in $$MAKEFLAGS; do \ + case $$am__flg in \ + *=*|--*) ;; \ + *n*) am__dry=yes; break;; \ + esac; \ + done;; \ + esac; \ + test $$am__dry = yes; \ + } pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ @@ -37,7 +53,19 @@ subdir = man DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 -am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \ +am__aclocal_m4_deps = $(top_srcdir)/m4/asmcfi.m4 \ + $(top_srcdir)/m4/ax_append_flag.m4 \ + $(top_srcdir)/m4/ax_cc_maxopt.m4 \ + $(top_srcdir)/m4/ax_cflags_warn_all.m4 \ + $(top_srcdir)/m4/ax_check_compile_flag.m4 \ + $(top_srcdir)/m4/ax_compiler_vendor.m4 \ + $(top_srcdir)/m4/ax_configure_args.m4 \ + $(top_srcdir)/m4/ax_enable_builddir.m4 \ + $(top_srcdir)/m4/ax_gcc_archflag.m4 \ + $(top_srcdir)/m4/ax_gcc_x86_cpuid.m4 \ + $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ + $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ + $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/acinclude.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) @@ -47,6 +75,11 @@ CONFIG_CLEAN_VPATH_FILES = SOURCES = DIST_SOURCES = +am__can_run_installinfo = \ + case $$AM_UPDATE_INFO_DIR in \ + n|no|NO) false;; \ + *) (install-info --version) >/dev/null 2>&1;; \ + esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ @@ -143,6 +176,7 @@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ +PRTDIAG = @PRTDIAG@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ @@ -163,6 
+197,7 @@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ +ax_enable_builddir_sed = @ax_enable_builddir_sed@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ @@ -198,6 +233,7 @@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ +sys_symbol_underscore = @sys_symbol_underscore@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ @@ -253,11 +289,18 @@ -rm -rf .libs _libs install-man3: $(man_MANS) @$(NORMAL_INSTALL) - test -z "$(man3dir)" || $(MKDIR_P) "$(DESTDIR)$(man3dir)" - @list=''; test -n "$(man3dir)" || exit 0; \ - { for i in $$list; do echo "$$i"; done; \ - l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \ - sed -n '/\.3[a-z]*$$/p'; \ + @list1=''; \ + list2='$(man_MANS)'; \ + test -n "$(man3dir)" \ + && test -n "`echo $$list1$$list2`" \ + || exit 0; \ + echo " $(MKDIR_P) '$(DESTDIR)$(man3dir)'"; \ + $(MKDIR_P) "$(DESTDIR)$(man3dir)" || exit 1; \ + { for i in $$list1; do echo "$$i"; done; \ + if test -n "$$list2"; then \ + for i in $$list2; do echo "$$i"; done \ + | sed -n '/\.3[a-z]*$$/p'; \ + fi; \ } | while read p; do \ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; echo "$$p"; \ @@ -293,6 +336,8 @@ ctags: CTAGS CTAGS: +cscope cscopelist: + distdir: $(DISTFILES) @list='$(MANS)'; if test -n "$$list"; then \ @@ -301,10 +346,10 @@ if test -f "$$d$$p"; then echo "$$d$$p"; else :; fi; done`; \ if test -n "$$list" && \ grep 'ab help2man is required to generate this page' $$list >/dev/null; then \ - echo "error: found man pages containing the \`missing help2man' replacement text:" >&2; \ + echo "error: found man pages containing the 'missing help2man' replacement text:" >&2; \ grep -l 'ab help2man is required to generate this page' $$list | sed 's/^/ /' >&2; \ echo " to fix them, install help2man, remove and regenerate the man pages;" >&2; \ - echo " typically \`make maintainer-clean' will remove them" >&2; \ + echo " typically 'make maintainer-clean' will remove 
them" >&2; \ exit 1; \ else :; fi; \ else :; fi diff --git a/Modules/_ctypes/libffi/man/ffi_prep_cif.3 b/Modules/_ctypes/libffi/man/ffi_prep_cif.3 --- a/Modules/_ctypes/libffi/man/ffi_prep_cif.3 +++ b/Modules/_ctypes/libffi/man/ffi_prep_cif.3 @@ -61,10 +61,8 @@ .Nm FFI_BAD_ABI will be returned. Available ABIs are defined in -.Nm -. +.Nm . .Sh SEE ALSO .Xr ffi 3 , .Xr ffi_call 3 , .Xr ffi_prep_cif_var 3 - diff --git a/Modules/_ctypes/libffi/mdate-sh b/Modules/_ctypes/libffi/mdate-sh old mode 100755 new mode 100644 diff --git a/Modules/_ctypes/libffi/src/aarch64/ffi.c b/Modules/_ctypes/libffi/src/aarch64/ffi.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/aarch64/ffi.c @@ -0,0 +1,1076 @@ +/* Copyright (c) 2009, 2010, 2011, 2012 ARM Ltd. + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +``Software''), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/ + +#include <stdio.h> + +#include <ffi.h> +#include <ffi_common.h> + +#include <stdlib.h> + +/* Stack alignment requirement in bytes */ +#define AARCH64_STACK_ALIGN 16 + +#define N_X_ARG_REG 8 +#define N_V_ARG_REG 8 + +#define AARCH64_FFI_WITH_V (1 << AARCH64_FFI_WITH_V_BIT) + +union _d +{ + UINT64 d; + UINT32 s[2]; +}; + +struct call_context +{ + UINT64 x [AARCH64_N_XREG]; + struct + { + union _d d[2]; + } v [AARCH64_N_VREG]; +}; + +static void * +get_x_addr (struct call_context *context, unsigned n) +{ + return &context->x[n]; +} + +static void * +get_s_addr (struct call_context *context, unsigned n) +{ +#if defined __AARCH64EB__ + return &context->v[n].d[1].s[1]; +#else + return &context->v[n].d[0].s[0]; +#endif +} + +static void * +get_d_addr (struct call_context *context, unsigned n) +{ +#if defined __AARCH64EB__ + return &context->v[n].d[1]; +#else + return &context->v[n].d[0]; +#endif +} + +static void * +get_v_addr (struct call_context *context, unsigned n) +{ + return &context->v[n]; +} + +/* Return the memory location at which a basic type would reside + were it to have been stored in register n. */ + +static void * +get_basic_type_addr (unsigned short type, struct call_context *context, + unsigned n) +{ + switch (type) + { + case FFI_TYPE_FLOAT: + return get_s_addr (context, n); + case FFI_TYPE_DOUBLE: + return get_d_addr (context, n); + case FFI_TYPE_LONGDOUBLE: + return get_v_addr (context, n); + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_INT: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + return get_x_addr (context, n); + default: + FFI_ASSERT (0); + return NULL; + } +} + +/* Return the alignment width for each of the basic types.
*/ + +static size_t +get_basic_type_alignment (unsigned short type) +{ + switch (type) + { + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + return sizeof (UINT64); + case FFI_TYPE_LONGDOUBLE: + return sizeof (long double); + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_INT: + case FFI_TYPE_SINT32: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + return sizeof (UINT64); + + default: + FFI_ASSERT (0); + return 0; + } +} + +/* Return the size in bytes for each of the basic types. */ + +static size_t +get_basic_type_size (unsigned short type) +{ + switch (type) + { + case FFI_TYPE_FLOAT: + return sizeof (UINT32); + case FFI_TYPE_DOUBLE: + return sizeof (UINT64); + case FFI_TYPE_LONGDOUBLE: + return sizeof (long double); + case FFI_TYPE_UINT8: + return sizeof (UINT8); + case FFI_TYPE_SINT8: + return sizeof (SINT8); + case FFI_TYPE_UINT16: + return sizeof (UINT16); + case FFI_TYPE_SINT16: + return sizeof (SINT16); + case FFI_TYPE_UINT32: + return sizeof (UINT32); + case FFI_TYPE_INT: + case FFI_TYPE_SINT32: + return sizeof (SINT32); + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + return sizeof (UINT64); + case FFI_TYPE_SINT64: + return sizeof (SINT64); + + default: + FFI_ASSERT (0); + return 0; + } +} + +extern void +ffi_call_SYSV (unsigned (*)(struct call_context *context, unsigned char *, + extended_cif *), + struct call_context *context, + extended_cif *, + unsigned, + void (*fn)(void)); + +extern void +ffi_closure_SYSV (ffi_closure *); + +/* Test for an FFI floating point representation. */ + +static unsigned +is_floating_type (unsigned short type) +{ + return (type == FFI_TYPE_FLOAT || type == FFI_TYPE_DOUBLE + || type == FFI_TYPE_LONGDOUBLE); +} + +/* Test for a homogeneous structure. 
*/ + +static unsigned short +get_homogeneous_type (ffi_type *ty) +{ + if (ty->type == FFI_TYPE_STRUCT && ty->elements) + { + unsigned i; + unsigned short candidate_type + = get_homogeneous_type (ty->elements[0]); + for (i =1; ty->elements[i]; i++) + { + unsigned short iteration_type = 0; + /* If we have a nested struct, we must find its homogeneous type. + If that fits with our candidate type, we are still + homogeneous. */ + if (ty->elements[i]->type == FFI_TYPE_STRUCT + && ty->elements[i]->elements) + { + iteration_type = get_homogeneous_type (ty->elements[i]); + } + else + { + iteration_type = ty->elements[i]->type; + } + + /* If we are not homogeneous, return FFI_TYPE_STRUCT. */ + if (candidate_type != iteration_type) + return FFI_TYPE_STRUCT; + } + return candidate_type; + } + + /* Base case, we have no more levels of nesting, so we + are a basic type, and so, trivially homogeneous in that type. */ + return ty->type; +} + +/* Determine the number of elements within a STRUCT. + + Note, we must handle nested structs. + + If ty is not a STRUCT this function will return 0. */ + +static unsigned +element_count (ffi_type *ty) +{ + if (ty->type == FFI_TYPE_STRUCT && ty->elements) + { + unsigned n; + unsigned elems = 0; + for (n = 0; ty->elements[n]; n++) + { + if (ty->elements[n]->type == FFI_TYPE_STRUCT + && ty->elements[n]->elements) + elems += element_count (ty->elements[n]); + else + elems++; + } + return elems; + } + return 0; +} + +/* Test for a homogeneous floating point aggregate. + + A homogeneous floating point aggregate is a homogeneous aggregate of + a half- single- or double- precision floating point type with one + to four elements. Note that this includes nested structs of the + basic type. 
*/ + +static int +is_hfa (ffi_type *ty) +{ + if (ty->type == FFI_TYPE_STRUCT + && ty->elements[0] + && is_floating_type (get_homogeneous_type (ty))) + { + unsigned n = element_count (ty); + return n >= 1 && n <= 4; + } + return 0; +} + +/* Test if an ffi_type is a candidate for passing in a register. + + This test does not check that sufficient registers of the + appropriate class are actually available, merely that IFF + sufficient registers are available then the argument will be passed + in register(s). + + Note that an ffi_type that is deemed to be a register candidate + will always be returned in registers. + + Returns 1 if a register candidate else 0. */ + +static int +is_register_candidate (ffi_type *ty) +{ + switch (ty->type) + { + case FFI_TYPE_VOID: + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + case FFI_TYPE_LONGDOUBLE: + case FFI_TYPE_UINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_UINT64: + case FFI_TYPE_POINTER: + case FFI_TYPE_SINT8: + case FFI_TYPE_SINT16: + case FFI_TYPE_SINT32: + case FFI_TYPE_INT: + case FFI_TYPE_SINT64: + return 1; + + case FFI_TYPE_STRUCT: + if (is_hfa (ty)) + { + return 1; + } + else if (ty->size > 16) + { + /* Too large. Will be replaced with a pointer to memory. The + pointer MAY be passed in a register, but the value will + not. This test specifically fails since the argument will + never be passed by value in registers. */ + return 0; + } + else + { + /* Might be passed in registers depending on the number of + registers required. */ + return (ty->size + 7) / 8 < N_X_ARG_REG; + } + break; + + default: + FFI_ASSERT (0); + break; + } + + return 0; +} + +/* Test if an ffi_type argument or result is a candidate for a vector + register. */ + +static int +is_v_register_candidate (ffi_type *ty) +{ + return is_floating_type (ty->type) + || (ty->type == FFI_TYPE_STRUCT && is_hfa (ty)); +} + +/* Representation of the procedure call argument marshalling + state. 
+ + The terse state variable names match the names used in the AARCH64 + PCS. */ + +struct arg_state +{ + unsigned ngrn; /* Next general-purpose register number. */ + unsigned nsrn; /* Next vector register number. */ + unsigned nsaa; /* Next stack offset. */ +}; + +/* Initialize a procedure call argument marshalling state. */ +static void +arg_init (struct arg_state *state, unsigned call_frame_size) +{ + state->ngrn = 0; + state->nsrn = 0; + state->nsaa = 0; +} + +/* Return the number of available consecutive core argument + registers. */ + +static unsigned +available_x (struct arg_state *state) +{ + return N_X_ARG_REG - state->ngrn; +} + +/* Return the number of available consecutive vector argument + registers. */ + +static unsigned +available_v (struct arg_state *state) +{ + return N_V_ARG_REG - state->nsrn; +} + +static void * +allocate_to_x (struct call_context *context, struct arg_state *state) +{ + FFI_ASSERT (state->ngrn < N_X_ARG_REG) + return get_x_addr (context, (state->ngrn)++); +} + +static void * +allocate_to_s (struct call_context *context, struct arg_state *state) +{ + FFI_ASSERT (state->nsrn < N_V_ARG_REG) + return get_s_addr (context, (state->nsrn)++); +} + +static void * +allocate_to_d (struct call_context *context, struct arg_state *state) +{ + FFI_ASSERT (state->nsrn < N_V_ARG_REG) + return get_d_addr (context, (state->nsrn)++); +} + +static void * +allocate_to_v (struct call_context *context, struct arg_state *state) +{ + FFI_ASSERT (state->nsrn < N_V_ARG_REG) + return get_v_addr (context, (state->nsrn)++); +} + +/* Allocate an aligned slot on the stack and return a pointer to it. */ +static void * +allocate_to_stack (struct arg_state *state, void *stack, unsigned alignment, + unsigned size) +{ + void *allocation; + + /* Round up the NSAA to the larger of 8 or the natural + alignment of the argument's type. 
*/ + state->nsaa = ALIGN (state->nsaa, alignment); + state->nsaa = ALIGN (state->nsaa, 8); + + allocation = stack + state->nsaa; + + state->nsaa += size; + return allocation; +} + +static void +copy_basic_type (void *dest, void *source, unsigned short type) +{ + /* This is necessary to ensure that basic types are copied + sign extended to 64-bits as libffi expects. */ + switch (type) + { + case FFI_TYPE_FLOAT: + *(float *) dest = *(float *) source; + break; + case FFI_TYPE_DOUBLE: + *(double *) dest = *(double *) source; + break; + case FFI_TYPE_LONGDOUBLE: + *(long double *) dest = *(long double *) source; + break; + case FFI_TYPE_UINT8: + *(ffi_arg *) dest = *(UINT8 *) source; + break; + case FFI_TYPE_SINT8: + *(ffi_sarg *) dest = *(SINT8 *) source; + break; + case FFI_TYPE_UINT16: + *(ffi_arg *) dest = *(UINT16 *) source; + break; + case FFI_TYPE_SINT16: + *(ffi_sarg *) dest = *(SINT16 *) source; + break; + case FFI_TYPE_UINT32: + *(ffi_arg *) dest = *(UINT32 *) source; + break; + case FFI_TYPE_INT: + case FFI_TYPE_SINT32: + *(ffi_sarg *) dest = *(SINT32 *) source; + break; + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + *(ffi_arg *) dest = *(UINT64 *) source; + break; + case FFI_TYPE_SINT64: + *(ffi_sarg *) dest = *(SINT64 *) source; + break; + + default: + FFI_ASSERT (0); + } +} + +static void +copy_hfa_to_reg_or_stack (void *memory, + ffi_type *ty, + struct call_context *context, + unsigned char *stack, + struct arg_state *state) +{ + unsigned elems = element_count (ty); + if (available_v (state) < elems) + { + /* There are insufficient V registers. Further V register allocations + are prevented, the NSAA is adjusted (by allocate_to_stack ()) + and the argument is copied to memory at the adjusted NSAA.
*/ + state->nsrn = N_V_ARG_REG; + memcpy (allocate_to_stack (state, stack, ty->alignment, ty->size), + memory, + ty->size); + } + else + { + int i; + unsigned short type = get_homogeneous_type (ty); + unsigned elems = element_count (ty); + for (i = 0; i < elems; i++) + { + void *reg = allocate_to_v (context, state); + copy_basic_type (reg, memory, type); + memory += get_basic_type_size (type); + } + } +} + +/* Either allocate an appropriate register for the argument type, or if + none are available, allocate a stack slot and return a pointer + to the allocated space. */ + +static void * +allocate_to_register_or_stack (struct call_context *context, + unsigned char *stack, + struct arg_state *state, + unsigned short type) +{ + size_t alignment = get_basic_type_alignment (type); + size_t size = alignment; + switch (type) + { + case FFI_TYPE_FLOAT: + /* This is the only case for which the allocated stack size + should not match the alignment of the type. */ + size = sizeof (UINT32); + /* Fall through. */ + case FFI_TYPE_DOUBLE: + if (state->nsrn < N_V_ARG_REG) + return allocate_to_d (context, state); + state->nsrn = N_V_ARG_REG; + break; + case FFI_TYPE_LONGDOUBLE: + if (state->nsrn < N_V_ARG_REG) + return allocate_to_v (context, state); + state->nsrn = N_V_ARG_REG; + break; + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_INT: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + if (state->ngrn < N_X_ARG_REG) + return allocate_to_x (context, state); + state->ngrn = N_X_ARG_REG; + break; + default: + FFI_ASSERT (0); + } + + return allocate_to_stack (state, stack, alignment, size); +} + +/* Copy a value to an appropriate register, or if none are + available, to the stack. 
*/ + +static void +copy_to_register_or_stack (struct call_context *context, + unsigned char *stack, + struct arg_state *state, + void *value, + unsigned short type) +{ + copy_basic_type ( + allocate_to_register_or_stack (context, stack, state, type), + value, + type); +} + +/* Marshall the arguments from FFI representation to procedure call + context and stack. */ + +static unsigned +aarch64_prep_args (struct call_context *context, unsigned char *stack, + extended_cif *ecif) +{ + int i; + struct arg_state state; + + arg_init (&state, ALIGN(ecif->cif->bytes, 16)); + + for (i = 0; i < ecif->cif->nargs; i++) + { + ffi_type *ty = ecif->cif->arg_types[i]; + switch (ty->type) + { + case FFI_TYPE_VOID: + FFI_ASSERT (0); + break; + + /* If the argument is a basic type the argument is allocated to an + appropriate register, or if none are available, to the stack. */ + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + case FFI_TYPE_LONGDOUBLE: + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_INT: + case FFI_TYPE_SINT32: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + copy_to_register_or_stack (context, stack, &state, + ecif->avalue[i], ty->type); + break; + + case FFI_TYPE_STRUCT: + if (is_hfa (ty)) + { + copy_hfa_to_reg_or_stack (ecif->avalue[i], ty, context, + stack, &state); + } + else if (ty->size > 16) + { + /* If the argument is a composite type that is larger than 16 + bytes, then the argument has been copied to memory, and + the argument is replaced by a pointer to the copy. */ + + copy_to_register_or_stack (context, stack, &state, + &(ecif->avalue[i]), FFI_TYPE_POINTER); + } + else if (available_x (&state) >= (ty->size + 7) / 8) + { + /* If the argument is a composite type and the size in + double-words is not more than the number of available + X registers, then the argument is copied into consecutive + X registers. 
*/ + int j; + for (j = 0; j < (ty->size + 7) / 8; j++) + { + memcpy (allocate_to_x (context, &state), + &(((UINT64 *) ecif->avalue[i])[j]), + sizeof (UINT64)); + } + } + else + { + /* Otherwise, there are insufficient X registers. Further X + register allocations are prevented, the NSAA is adjusted + (by allocate_to_stack ()) and the argument is copied to + memory at the adjusted NSAA. */ + state.ngrn = N_X_ARG_REG; + + memcpy (allocate_to_stack (&state, stack, ty->alignment, + ty->size), ecif->avalue + i, ty->size); + } + break; + + default: + FFI_ASSERT (0); + break; + } + } + + return ecif->cif->aarch64_flags; +} + +ffi_status +ffi_prep_cif_machdep (ffi_cif *cif) +{ + /* Round the stack up to a multiple of the stack alignment requirement. */ + cif->bytes = + (cif->bytes + (AARCH64_STACK_ALIGN - 1)) & ~ (AARCH64_STACK_ALIGN - 1); + + /* Initialize our flags. We are interested if this CIF will touch a + vector register, if so we will enable context save and load to + those registers, otherwise not. This is intended to be friendly + to lazy float context switching in the kernel. */ + cif->aarch64_flags = 0; + + if (is_v_register_candidate (cif->rtype)) + { + cif->aarch64_flags |= AARCH64_FFI_WITH_V; + } + else + { + int i; + for (i = 0; i < cif->nargs; i++) + if (is_v_register_candidate (cif->arg_types[i])) + { + cif->aarch64_flags |= AARCH64_FFI_WITH_V; + break; + } + } + + return FFI_OK; +} + +/* Call a function with the provided arguments and capture the return + value. 
*/ +void +ffi_call (ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) +{ + extended_cif ecif; + + ecif.cif = cif; + ecif.avalue = avalue; + ecif.rvalue = rvalue; + + switch (cif->abi) + { + case FFI_SYSV: + { + struct call_context context; + unsigned stack_bytes; + + /* Figure out the total amount of stack space we need, the + above call frame space needs to be 16 bytes aligned to + ensure correct alignment of the first object inserted in + that space hence the ALIGN applied to cif->bytes.*/ + stack_bytes = ALIGN(cif->bytes, 16); + + memset (&context, 0, sizeof (context)); + if (is_register_candidate (cif->rtype)) + { + ffi_call_SYSV (aarch64_prep_args, &context, &ecif, stack_bytes, fn); + switch (cif->rtype->type) + { + case FFI_TYPE_VOID: + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + case FFI_TYPE_LONGDOUBLE: + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_INT: + case FFI_TYPE_SINT64: + { + void *addr = get_basic_type_addr (cif->rtype->type, + &context, 0); + copy_basic_type (rvalue, addr, cif->rtype->type); + break; + } + + case FFI_TYPE_STRUCT: + if (is_hfa (cif->rtype)) + { + int j; + unsigned short type = get_homogeneous_type (cif->rtype); + unsigned elems = element_count (cif->rtype); + for (j = 0; j < elems; j++) + { + void *reg = get_basic_type_addr (type, &context, j); + copy_basic_type (rvalue, reg, type); + rvalue += get_basic_type_size (type); + } + } + else if ((cif->rtype->size + 7) / 8 < N_X_ARG_REG) + { + unsigned size = ALIGN (cif->rtype->size, sizeof (UINT64)); + memcpy (rvalue, get_x_addr (&context, 0), size); + } + else + { + FFI_ASSERT (0); + } + break; + + default: + FFI_ASSERT (0); + break; + } + } + else + { + memcpy (get_x_addr (&context, 8), &rvalue, sizeof (UINT64)); + ffi_call_SYSV (aarch64_prep_args, &context, &ecif, + stack_bytes, fn); + } + break; + } + + default: 
+ FFI_ASSERT (0); + break; + } +} + +static unsigned char trampoline [] = +{ 0x70, 0x00, 0x00, 0x58, /* ldr x16, 1f */ + 0x91, 0x00, 0x00, 0x10, /* adr x17, 2f */ + 0x00, 0x02, 0x1f, 0xd6 /* br x16 */ +}; + +/* Build a trampoline. */ + +#define FFI_INIT_TRAMPOLINE(TRAMP,FUN,CTX,FLAGS) \ + ({unsigned char *__tramp = (unsigned char*)(TRAMP); \ + UINT64 __fun = (UINT64)(FUN); \ + UINT64 __ctx = (UINT64)(CTX); \ + UINT64 __flags = (UINT64)(FLAGS); \ + memcpy (__tramp, trampoline, sizeof (trampoline)); \ + memcpy (__tramp + 12, &__fun, sizeof (__fun)); \ + memcpy (__tramp + 20, &__ctx, sizeof (__ctx)); \ + memcpy (__tramp + 28, &__flags, sizeof (__flags)); \ + __clear_cache(__tramp, __tramp + FFI_TRAMPOLINE_SIZE); \ + }) + +ffi_status +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*,void*,void**,void*), + void *user_data, + void *codeloc) +{ + if (cif->abi != FFI_SYSV) + return FFI_BAD_ABI; + + FFI_INIT_TRAMPOLINE (&closure->tramp[0], &ffi_closure_SYSV, codeloc, + cif->aarch64_flags); + + closure->cif = cif; + closure->user_data = user_data; + closure->fun = fun; + + return FFI_OK; +} + +/* Primary handler to set up and invoke a function within a closure. + + A closure when invoked enters via the assembler wrapper + ffi_closure_SYSV(). The wrapper allocates a call context on the + stack, saves the interesting registers (from the perspective of + the calling convention) into the context then passes control to + ffi_closure_SYSV_inner() passing the saved context and a pointer to + the stack at the point ffi_closure_SYSV() was invoked. + + On the return path the assembler wrapper will reload call context + registers. + + ffi_closure_SYSV_inner() marshals the call context into ffi value + descriptors, invokes the wrapped function, then marshals the return + value back into the call context.
*/ + +void +ffi_closure_SYSV_inner (ffi_closure *closure, struct call_context *context, + void *stack) +{ + ffi_cif *cif = closure->cif; + void **avalue = (void**) alloca (cif->nargs * sizeof (void*)); + void *rvalue = NULL; + int i; + struct arg_state state; + + arg_init (&state, ALIGN(cif->bytes, 16)); + + for (i = 0; i < cif->nargs; i++) + { + ffi_type *ty = cif->arg_types[i]; + + switch (ty->type) + { + case FFI_TYPE_VOID: + FFI_ASSERT (0); + break; + + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_INT: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + case FFI_TYPE_LONGDOUBLE: + avalue[i] = allocate_to_register_or_stack (context, stack, + &state, ty->type); + break; + + case FFI_TYPE_STRUCT: + if (is_hfa (ty)) + { + unsigned n = element_count (ty); + if (available_v (&state) < n) + { + state.nsrn = N_V_ARG_REG; + avalue[i] = allocate_to_stack (&state, stack, ty->alignment, + ty->size); + } + else + { + switch (get_homogeneous_type (ty)) + { + case FFI_TYPE_FLOAT: + { + /* Eeek! We need a pointer to the structure, + however the homogeneous float elements are + being passed in individual S registers, + therefore the structure is not represented as + a contiguous sequence of bytes in our saved + register context. We need to fake up a copy + of the structure laid out in memory + correctly. The fake can be tossed once the + closure function has returned hence alloca() + is sufficient. */ + int j; + UINT32 *p = avalue[i] = alloca (ty->size); + for (j = 0; j < element_count (ty); j++) + memcpy (&p[j], + allocate_to_s (context, &state), + sizeof (*p)); + break; + } + + case FFI_TYPE_DOUBLE: + { + /* Eeek!
We need a pointer to the structure, + however the homogeneous double elements are + being passed in individual D registers, + therefore the structure is not represented as + a contiguous sequence of bytes in our saved + register context. We need to fake up a copy + of the structure laid out in memory + correctly. The fake can be tossed once the + closure function has returned hence alloca() + is sufficient. */ + int j; + UINT64 *p = avalue[i] = alloca (ty->size); + for (j = 0; j < element_count (ty); j++) + memcpy (&p[j], + allocate_to_d (context, &state), + sizeof (*p)); + break; + } + + case FFI_TYPE_LONGDOUBLE: + memcpy (&avalue[i], + allocate_to_v (context, &state), + sizeof (*avalue)); + break; + + default: + FFI_ASSERT (0); + break; + } + } + } + else if (ty->size > 16) + { + /* Replace composite type of size greater than 16 with a + pointer. */ + memcpy (&avalue[i], + allocate_to_register_or_stack (context, stack, + &state, FFI_TYPE_POINTER), + sizeof (avalue[i])); + } + else if (available_x (&state) >= (ty->size + 7) / 8) + { + avalue[i] = get_x_addr (context, state.ngrn); + state.ngrn += (ty->size + 7) / 8; + } + else + { + state.ngrn = N_X_ARG_REG; + + avalue[i] = allocate_to_stack (&state, stack, ty->alignment, + ty->size); + } + break; + + default: + FFI_ASSERT (0); + break; + } + } + + /* Figure out where the return value will be passed, either in + registers or in a memory block allocated by the caller and passed + in x8. */ + + if (is_register_candidate (cif->rtype)) + { + /* Register candidates are *always* returned in registers. */ + + /* Allocate a scratchpad for the return value; we will let the + callee scribble the result into the scratch pad, then move the + contents into the appropriate return value location for the + calling convention. */ + rvalue = alloca (cif->rtype->size); + (closure->fun) (cif, rvalue, avalue, closure->user_data); + + /* Copy the return value into the call context so that it is returned + as expected to our caller.
*/ + switch (cif->rtype->type) + { + case FFI_TYPE_VOID: + break; + + case FFI_TYPE_UINT8: + case FFI_TYPE_UINT16: + case FFI_TYPE_UINT32: + case FFI_TYPE_POINTER: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT8: + case FFI_TYPE_SINT16: + case FFI_TYPE_INT: + case FFI_TYPE_SINT32: + case FFI_TYPE_SINT64: + case FFI_TYPE_FLOAT: + case FFI_TYPE_DOUBLE: + case FFI_TYPE_LONGDOUBLE: + { + void *addr = get_basic_type_addr (cif->rtype->type, context, 0); + copy_basic_type (addr, rvalue, cif->rtype->type); + break; + } + case FFI_TYPE_STRUCT: + if (is_hfa (cif->rtype)) + { + int i; + unsigned short type = get_homogeneous_type (cif->rtype); + unsigned elems = element_count (cif->rtype); + for (i = 0; i < elems; i++) + { + void *reg = get_basic_type_addr (type, context, i); + copy_basic_type (reg, rvalue, type); + rvalue += get_basic_type_size (type); + } + } + else if ((cif->rtype->size + 7) / 8 < N_X_ARG_REG) + { + unsigned size = ALIGN (cif->rtype->size, sizeof (UINT64)) ; + memcpy (get_x_addr (context, 0), rvalue, size); + } + else + { + FFI_ASSERT (0); + } + break; + default: + FFI_ASSERT (0); + break; + } + } + else + { + memcpy (&rvalue, get_x_addr (context, 8), sizeof (UINT64)); + (closure->fun) (cif, rvalue, avalue, closure->user_data); + } +} + diff --git a/Modules/_ctypes/libffi/src/aarch64/ffitarget.h b/Modules/_ctypes/libffi/src/aarch64/ffitarget.h new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/aarch64/ffitarget.h @@ -0,0 +1,59 @@ +/* Copyright (c) 2009, 2010, 2011, 2012 ARM Ltd. 
+ +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +``Software''), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +#ifndef LIBFFI_H +#error "Please do not include ffitarget.h directly into your source. Use ffi.h instead." 
+#endif + +#ifndef LIBFFI_ASM +typedef unsigned long ffi_arg; +typedef signed long ffi_sarg; + +typedef enum ffi_abi + { + FFI_FIRST_ABI = 0, + FFI_SYSV, + FFI_LAST_ABI, + FFI_DEFAULT_ABI = FFI_SYSV + } ffi_abi; +#endif + +/* ---- Definitions for closures ----------------------------------------- */ + +#define FFI_CLOSURES 1 +#define FFI_TRAMPOLINE_SIZE 36 +#define FFI_NATIVE_RAW_API 0 + +/* ---- Internal ---- */ + + +#define FFI_EXTRA_CIF_FIELDS unsigned aarch64_flags + +#define AARCH64_FFI_WITH_V_BIT 0 + +#define AARCH64_N_XREG 32 +#define AARCH64_N_VREG 32 +#define AARCH64_CALL_CONTEXT_SIZE (AARCH64_N_XREG * 8 + AARCH64_N_VREG * 16) + +#endif diff --git a/Modules/_ctypes/libffi/src/aarch64/sysv.S b/Modules/_ctypes/libffi/src/aarch64/sysv.S new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/aarch64/sysv.S @@ -0,0 +1,307 @@ +/* Copyright (c) 2009, 2010, 2011, 2012 ARM Ltd. + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +``Software''), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/ + +#define LIBFFI_ASM +#include +#include + +#define cfi_adjust_cfa_offset(off) .cfi_adjust_cfa_offset off +#define cfi_rel_offset(reg, off) .cfi_rel_offset reg, off +#define cfi_restore(reg) .cfi_restore reg +#define cfi_def_cfa_register(reg) .cfi_def_cfa_register reg + + .text + .globl ffi_call_SYSV + .type ffi_call_SYSV, #function + +/* ffi_call_SYSV() + + Create a stack frame, set up an argument context, call the callee + and extract the result. + + The maximum required argument stack size is provided, + ffi_call_SYSV() allocates that stack space then calls the + prepare_fn to populate register context and stack. The + argument passing registers are loaded from the register + context and the callee called, on return the register passing + registers are saved back to the context. Our caller will + extract the return value from the final state of the saved + register context. + + Prototype: + + extern unsigned + ffi_call_SYSV (void (*)(struct call_context *context, unsigned char *, + extended_cif *), + struct call_context *context, + extended_cif *, + unsigned required_stack_size, + void (*fn)(void)); + + Therefore on entry we have: + + x0 prepare_fn + x1 &context + x2 &ecif + x3 bytes + x4 fn + + This function uses the following stack frame layout: + + == + saved x30(lr) + x29(fp)-> saved x29(fp) + saved x24 + saved x23 + saved x22 + sp' -> saved x21 + ... + sp -> (constructed callee stack arguments) + == + + Voila! */ + +#define ffi_call_SYSV_FS (8 * 4) + + .cfi_startproc +ffi_call_SYSV: + stp x29, x30, [sp, #-16]!
+ cfi_adjust_cfa_offset (16) + cfi_rel_offset (x29, 0) + cfi_rel_offset (x30, 8) + + mov x29, sp + cfi_def_cfa_register (x29) + sub sp, sp, #ffi_call_SYSV_FS + + stp x21, x22, [sp, 0] + cfi_rel_offset (x21, 0 - ffi_call_SYSV_FS) + cfi_rel_offset (x22, 8 - ffi_call_SYSV_FS) + + stp x23, x24, [sp, 16] + cfi_rel_offset (x23, 16 - ffi_call_SYSV_FS) + cfi_rel_offset (x24, 24 - ffi_call_SYSV_FS) + + mov x21, x1 + mov x22, x2 + mov x24, x4 + + /* Allocate the stack space for the actual arguments, many + arguments will be passed in registers, but we assume + worst case and allocate sufficient stack for ALL of + the arguments. */ + sub sp, sp, x3 + + /* unsigned (*prepare_fn) (struct call_context *context, + unsigned char *stack, extended_cif *ecif); + */ + mov x23, x0 + mov x0, x1 + mov x1, sp + /* x2 already in place */ + blr x23 + + /* Preserve the flags returned. */ + mov x23, x0 + + /* Figure out if we should touch the vector registers. */ + tbz x23, #AARCH64_FFI_WITH_V_BIT, 1f + + /* Load the vector argument passing registers. */ + ldp q0, q1, [x21, #8*32 + 0] + ldp q2, q3, [x21, #8*32 + 32] + ldp q4, q5, [x21, #8*32 + 64] + ldp q6, q7, [x21, #8*32 + 96] +1: + /* Load the core argument passing registers. */ + ldp x0, x1, [x21, #0] + ldp x2, x3, [x21, #16] + ldp x4, x5, [x21, #32] + ldp x6, x7, [x21, #48] + + /* Don't forget x8 which may be holding the address of a return buffer. + */ + ldr x8, [x21, #8*8] + + blr x24 + + /* Save the core argument passing registers. */ + stp x0, x1, [x21, #0] + stp x2, x3, [x21, #16] + stp x4, x5, [x21, #32] + stp x6, x7, [x21, #48] + + /* Note nothing useful ever comes back in x8! */ + + /* Figure out if we should touch the vector registers. */ + tbz x23, #AARCH64_FFI_WITH_V_BIT, 1f + + /* Save the vector argument passing registers. */ + stp q0, q1, [x21, #8*32 + 0] + stp q2, q3, [x21, #8*32 + 32] + stp q4, q5, [x21, #8*32 + 64] + stp q6, q7, [x21, #8*32 + 96] +1: + /* All done, unwind our stack frame. 
*/ + ldp x21, x22, [x29, # - ffi_call_SYSV_FS] + cfi_restore (x21) + cfi_restore (x22) + + ldp x23, x24, [x29, # - ffi_call_SYSV_FS + 16] + cfi_restore (x23) + cfi_restore (x24) + + mov sp, x29 + cfi_def_cfa_register (sp) + + ldp x29, x30, [sp], #16 + cfi_adjust_cfa_offset (-16) + cfi_restore (x29) + cfi_restore (x30) + + ret + + .cfi_endproc + .size ffi_call_SYSV, .-ffi_call_SYSV + +#define ffi_closure_SYSV_FS (8 * 2 + AARCH64_CALL_CONTEXT_SIZE) + +/* ffi_closure_SYSV + + Closure invocation glue. This is the low level code invoked directly by + the closure trampoline to set up and call a closure. + + On entry x17 points to a struct trampoline_data; x16 has been clobbered, + all other registers are preserved. + + We allocate a call context and save the argument passing registers, + then invoke the generic C ffi_closure_SYSV_inner() function to do all + the real work; on return we load the result passing registers back from + the call context. + + On entry + + extern void + ffi_closure_SYSV (struct trampoline_data *); + + struct trampoline_data + { + UINT64 *ffi_closure; + UINT64 flags; + }; + + This function uses the following stack frame layout: + + == + saved x30(lr) + x29(fp)-> saved x29(fp) + saved x22 + saved x21 + ... + sp -> call_context + == + + Voila! */ + + .text + .globl ffi_closure_SYSV + .cfi_startproc +ffi_closure_SYSV: + stp x29, x30, [sp, #-16]! + cfi_adjust_cfa_offset (16) + cfi_rel_offset (x29, 0) + cfi_rel_offset (x30, 8) + + mov x29, sp + + sub sp, sp, #ffi_closure_SYSV_FS + cfi_adjust_cfa_offset (ffi_closure_SYSV_FS) + + stp x21, x22, [x29, #-16] + cfi_rel_offset (x21, 0) + cfi_rel_offset (x22, 8) + + /* Load x21 with &call_context. */ + mov x21, sp + /* Preserve our struct trampoline_data * */ + mov x22, x17 + + /* Save the rest of the argument passing registers. */ + stp x0, x1, [x21, #0] + stp x2, x3, [x21, #16] + stp x4, x5, [x21, #32] + stp x6, x7, [x21, #48] + /* Don't forget we may have been given a result scratch pad address.
+ */ + str x8, [x21, #64] + + /* Figure out if we should touch the vector registers. */ + ldr x0, [x22, #8] + tbz x0, #AARCH64_FFI_WITH_V_BIT, 1f + + /* Save the argument passing vector registers. */ + stp q0, q1, [x21, #8*32 + 0] + stp q2, q3, [x21, #8*32 + 32] + stp q4, q5, [x21, #8*32 + 64] + stp q6, q7, [x21, #8*32 + 96] +1: + /* Load &ffi_closure. */ + ldr x0, [x22, #0] + mov x1, x21 + /* Compute the location of the stack at the point that the + trampoline was called. */ + add x2, x29, #16 + + bl ffi_closure_SYSV_inner + + /* Figure out if we should touch the vector registers. */ + ldr x0, [x22, #8] + tbz x0, #AARCH64_FFI_WITH_V_BIT, 1f + + /* Load the result passing vector registers. */ + ldp q0, q1, [x21, #8*32 + 0] + ldp q2, q3, [x21, #8*32 + 32] + ldp q4, q5, [x21, #8*32 + 64] + ldp q6, q7, [x21, #8*32 + 96] +1: + /* Load the result passing core registers. */ + ldp x0, x1, [x21, #0] + ldp x2, x3, [x21, #16] + ldp x4, x5, [x21, #32] + ldp x6, x7, [x21, #48] + /* Note nothing useful is returned in x8. */ + + /* We are done, unwind our frame. */ + ldp x21, x22, [x29, #-16] + cfi_restore (x21) + cfi_restore (x22) + + mov sp, x29 + cfi_adjust_cfa_offset (-ffi_closure_SYSV_FS) + + ldp x29, x30, [sp], #16 + cfi_adjust_cfa_offset (-16) + cfi_restore (x29) + cfi_restore (x30) + + ret + .cfi_endproc + .size ffi_closure_SYSV, .-ffi_closure_SYSV diff --git a/Modules/_ctypes/libffi/src/arm/gentramp.sh b/Modules/_ctypes/libffi/src/arm/gentramp.sh old mode 100644 new mode 100755 diff --git a/Modules/_ctypes/libffi/src/bfin/ffi.c b/Modules/_ctypes/libffi/src/bfin/ffi.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/bfin/ffi.c @@ -0,0 +1,195 @@ +/* ----------------------------------------------------------------------- + ffi.c - Copyright (c) 2012 Alexandre K. I.
de Mendonca + + Blackfin Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ +#include +#include + +#include +#include + +/* Maximum number of GPRs available for argument passing. 
*/ +#define MAX_GPRARGS 3 + +/* + * Return types + */ +#define FFIBFIN_RET_VOID 0 +#define FFIBFIN_RET_BYTE 1 +#define FFIBFIN_RET_HALFWORD 2 +#define FFIBFIN_RET_INT64 3 +#define FFIBFIN_RET_INT32 4 + +/*====================================================================*/ +/* PROTOTYPE * + /*====================================================================*/ +void ffi_prep_args(unsigned char *, extended_cif *); + +/*====================================================================*/ +/* Externals */ +/* (Assembly) */ +/*====================================================================*/ + +extern void ffi_call_SYSV(unsigned, extended_cif *, void(*)(unsigned char *, extended_cif *), unsigned, void *, void(*fn)(void)); + +/*====================================================================*/ +/* Implementation */ +/* */ +/*====================================================================*/ + + +/* + * This function calculates the return type (size) based on type. + */ + +ffi_status ffi_prep_cif_machdep(ffi_cif *cif) +{ + /* --------------------------------------* + * Return handling * + * --------------------------------------*/ + switch (cif->rtype->type) { + case FFI_TYPE_VOID: + cif->flags = FFIBFIN_RET_VOID; + break; + case FFI_TYPE_UINT16: + case FFI_TYPE_SINT16: + cif->flags = FFIBFIN_RET_HALFWORD; + break; + case FFI_TYPE_UINT8: + cif->flags = FFIBFIN_RET_BYTE; + break; + case FFI_TYPE_INT: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_FLOAT: + case FFI_TYPE_POINTER: + case FFI_TYPE_SINT8: + cif->flags = FFIBFIN_RET_INT32; + break; + case FFI_TYPE_SINT64: + case FFI_TYPE_UINT64: + case FFI_TYPE_DOUBLE: + cif->flags = FFIBFIN_RET_INT64; + break; + case FFI_TYPE_STRUCT: + if (cif->rtype->size <= 4){ + cif->flags = FFIBFIN_RET_INT32; + }else if (cif->rtype->size == 8){ + cif->flags = FFIBFIN_RET_INT64; + }else{ + //it will return via a hidden pointer in P0 + cif->flags = FFIBFIN_RET_VOID; + } + break; + default: + FFI_ASSERT(0); 
+ break; + } + return FFI_OK; +} + +/* + * This will prepare the arguments and will call the assembly routine + * cif = the call interface + * fn = the function to be called + * rvalue = the return value + * avalue = the arguments + */ +void ffi_call(ffi_cif *cif, void(*fn)(void), void *rvalue, void **avalue) +{ + int ret_type = cif->flags; + extended_cif ecif; + ecif.cif = cif; + ecif.avalue = avalue; + ecif.rvalue = rvalue; + + switch (cif->abi) { + case FFI_SYSV: + ffi_call_SYSV(cif->bytes, &ecif, ffi_prep_args, ret_type, ecif.rvalue, fn); + break; + default: + FFI_ASSERT(0); + break; + } +} + + +/* +* This function prepares the parameters (copies them from the ecif to the stack) +* to call the function (ffi_prep_args is called by the assembly routine in file +* sysv.S, which also calls the actual function) +*/ +void ffi_prep_args(unsigned char *stack, extended_cif *ecif) +{ + register unsigned int i = 0; + void **p_argv; + unsigned char *argp; + ffi_type **p_arg; + argp = stack; + p_argv = ecif->avalue; + for (i = ecif->cif->nargs, p_arg = ecif->cif->arg_types; + (i != 0); + i--, p_arg++) { + size_t z; + z = (*p_arg)->size; + if (z < sizeof(int)) { + z = sizeof(int); + switch ((*p_arg)->type) { + case FFI_TYPE_SINT8: { + signed char v = *(SINT8 *)(* p_argv); + signed int t = v; + *(signed int *) argp = t; + } + break; + case FFI_TYPE_UINT8: { + unsigned char v = *(UINT8 *)(* p_argv); + unsigned int t = v; + *(unsigned int *) argp = t; + } + break; + case FFI_TYPE_SINT16: + *(signed int *) argp = (signed int) * (SINT16 *)(* p_argv); + break; + case FFI_TYPE_UINT16: + *(unsigned int *) argp = (unsigned int) * (UINT16 *)(* p_argv); + break; + case FFI_TYPE_STRUCT: + memcpy(argp, *p_argv, (*p_arg)->size); + break; + default: + FFI_ASSERT(0); + break; + } + } else if (z == sizeof(int)) { + *(unsigned int *) argp = (unsigned int) * (UINT32 *)(* p_argv); + } else { + memcpy(argp, *p_argv, z); + } + p_argv++; + argp += z; + } +} + + + diff --git 
a/Modules/_ctypes/libffi/src/bfin/ffitarget.h b/Modules/_ctypes/libffi/src/bfin/ffitarget.h new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/bfin/ffitarget.h @@ -0,0 +1,43 @@ +/* ----------------------------------------------------------------------- + ffitarget.h - Copyright (c) 2012 Alexandre K. I. de Mendonca + + Blackfin Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+ ----------------------------------------------------------------------- */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +#ifndef LIBFFI_ASM +typedef unsigned long ffi_arg; +typedef signed long ffi_sarg; + +typedef enum ffi_abi { + FFI_FIRST_ABI = 0, + FFI_SYSV, + FFI_LAST_ABI, + FFI_DEFAULT_ABI = FFI_SYSV +} ffi_abi; +#endif + +#endif + diff --git a/Modules/_ctypes/libffi/src/bfin/sysv.S b/Modules/_ctypes/libffi/src/bfin/sysv.S new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/bfin/sysv.S @@ -0,0 +1,177 @@ +/* ----------------------------------------------------------------------- + sysv.S - Copyright (c) 2012 Alexandre K. I. de Mendonca + + Blackfin Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+   ----------------------------------------------------------------------- */
+
+#define LIBFFI_ASM
+#include <fficonfig.h>
+#include <ffi.h>
+
+.text
+.align 4
+
+	/*
+	   There is a "feature" in the bfin toolchain that it puts a _ before function names,
+	   which is why the function here is called _ffi_call_SYSV and not ffi_call_SYSV.
+	 */
+	.global _ffi_call_SYSV;
+	.type _ffi_call_SYSV, STT_FUNC;
+	.func ffi_call_SYSV
+
+	/*
+	   cif->bytes = R0 (fp+8)
+	   &ecif = R1 (fp+12)
+	   ffi_prep_args = R2 (fp+16)
+	   ret_type = stack (fp+20)
+	   ecif.rvalue = stack (fp+24)
+	   fn = stack (fp+28)
+	   got (fp+32)
+	   There is room for improvement here (we can use temporary registers
+	   instead of saving the values in the memory)
+	   REGS:
+	   P5 => Stack pointer (function arguments)
+	   R5 => cif->bytes
+	   R4 => ret->type
+
+	   FP-20 = P3
+	   FP-16 = SP (parameters area)
+	   FP-12 = SP (temp)
+	   FP-08 = function return part 1 [R0]
+	   FP-04 = function return part 2 [R1]
+	 */
+
+_ffi_call_SYSV:
+.prologue:
+	LINK 20;
+	[FP-20] = P3;
+	[FP+8] = R0;
+	[FP+12] = R1;
+	[FP+16] = R2;
+
+.allocate_stack:
+	// allocate cif->bytes on the stack
+	R1 = [FP+8];
+	R0 = SP;
+	R0 = R0 - R1;
+	R1 = 4;
+	R0 = R0 - R1;
+	[FP-12] = SP;
+	SP = R0;
+	[FP-16] = SP;
+
+.call_prep_args:
+	// get the address of prep_args
+	P0 = [P3 + _ffi_prep_args@FUNCDESC_GOT17M4];
+	P1 = [P0];
+	P3 = [P0+4];
+	R0 = [FP-16]; // SP (parameter area)
+	R1 = [FP+12]; // ecif
+	call (P1);
+
+.call_user_function:
+	// adjust SP so that the user function can access the parameters on the stack
+	SP = [FP-16]; // point to function parameters
+	R0 = [SP];
+	R1 = [SP+4];
+	R2 = [SP+8];
+	// load user function address
+	P0 = FP;
+	P0 += 28;
+	P1 = [P0];
+	P1 = [P1];
+	P3 = [P0+4];
+	/*
+	   For functions returning aggregate values (structs) occupying more than 8 bytes,
+	   the caller allocates the return value object on the stack and the address
+	   of this object is passed to the callee as a hidden argument in register P0.
+	 */
+	P0 = [FP+24];
+
+	call (P1);
+	SP = [FP-12];
+.compute_return:
+	P2 = [FP-20];
+	[FP-8] = R0;
+	[FP-4] = R1;
+
+	R0 = [FP+20];
+	R1 = R0 << 2;
+
+	R0 = [P2+.rettable@GOT17M4];
+	R0 = R1 + R0;
+	P2 = R0;
+	R1 = [P2];
+
+	P2 = [FP+-20];
+	R0 = [P2+.rettable@GOT17M4];
+	R0 = R1 + R0;
+	P2 = R0;
+	R0 = [FP-8];
+	R1 = [FP-4];
+	jump (P2);
+
+/*
+#define FFIBFIN_RET_VOID 0
+#define FFIBFIN_RET_BYTE 1
+#define FFIBFIN_RET_HALFWORD 2
+#define FFIBFIN_RET_INT64 3
+#define FFIBFIN_RET_INT32 4
+*/
+.align 4
+.align 4
+.rettable:
+	.dd .epilogue - .rettable
+	.dd .rbyte - .rettable;
+	.dd .rhalfword - .rettable;
+	.dd .rint64 - .rettable;
+	.dd .rint32 - .rettable;
+
+.rbyte:
+	P0 = [FP+24];
+	R0 = R0.B (Z);
+	[P0] = R0;
+	JUMP .epilogue
+.rhalfword:
+	P0 = [FP+24];
+	R0 = R0.L;
+	[P0] = R0;
+	JUMP .epilogue
+.rint64:
+	P0 = [FP+24]; // &rvalue
+	[P0] = R0;
+	[P0+4] = R1;
+	JUMP .epilogue
+.rint32:
+	P0 = [FP+24];
+	[P0] = R0;
+.epilogue:
+	R0 = [FP+8];
+	R1 = [FP+12];
+	R2 = [FP+16];
+	P3 = [FP-20];
+	UNLINK;
+	RTS;
+
+.size _ffi_call_SYSV,.-_ffi_call_SYSV;
+.endfunc
diff --git a/Modules/_ctypes/libffi/src/closures.c b/Modules/_ctypes/libffi/src/closures.c
--- a/Modules/_ctypes/libffi/src/closures.c
+++ b/Modules/_ctypes/libffi/src/closures.c
@@ -172,6 +172,27 @@
 #endif /* !FFI_MMAP_EXEC_SELINUX */
 
+/* On PaX-enabled kernels that have MPROTECT enabled we can't use PROT_EXEC. */
+#ifdef FFI_MMAP_EXEC_EMUTRAMP_PAX
+#include <stdlib.h>
+
+static int emutramp_enabled = -1;
+
+static int
+emutramp_enabled_check (void)
+{
+  if (getenv ("FFI_DISABLE_EMUTRAMP") == NULL)
+    return 1;
+  else
+    return 0;
+}
+
+#define is_emutramp_enabled() (emutramp_enabled >= 0 ?
emutramp_enabled \ + : (emutramp_enabled = emutramp_enabled_check ())) +#else +#define is_emutramp_enabled() 0 +#endif /* FFI_MMAP_EXEC_EMUTRAMP_PAX */ + #elif defined (__CYGWIN__) || defined(__INTERIX) #include @@ -458,6 +479,12 @@ printf ("mapping in %zi\n", length); #endif + if (execfd == -1 && is_emutramp_enabled ()) + { + ptr = mmap (start, length, prot & ~PROT_EXEC, flags, fd, offset); + return ptr; + } + if (execfd == -1 && !is_selinux_enabled ()) { ptr = mmap (start, length, prot | PROT_EXEC, flags, fd, offset); diff --git a/Modules/_ctypes/libffi/src/m68k/ffi.c b/Modules/_ctypes/libffi/src/m68k/ffi.c --- a/Modules/_ctypes/libffi/src/m68k/ffi.c +++ b/Modules/_ctypes/libffi/src/m68k/ffi.c @@ -123,6 +123,8 @@ #define CIF_FLAGS_POINTER 32 #define CIF_FLAGS_STRUCT1 64 #define CIF_FLAGS_STRUCT2 128 +#define CIF_FLAGS_SINT8 256 +#define CIF_FLAGS_SINT16 512 /* Perform machine dependent cif processing */ ffi_status @@ -200,6 +202,14 @@ cif->flags = CIF_FLAGS_DINT; break; + case FFI_TYPE_SINT16: + cif->flags = CIF_FLAGS_SINT16; + break; + + case FFI_TYPE_SINT8: + cif->flags = CIF_FLAGS_SINT8; + break; + default: cif->flags = CIF_FLAGS_INT; break; diff --git a/Modules/_ctypes/libffi/src/m68k/sysv.S b/Modules/_ctypes/libffi/src/m68k/sysv.S --- a/Modules/_ctypes/libffi/src/m68k/sysv.S +++ b/Modules/_ctypes/libffi/src/m68k/sysv.S @@ -2,9 +2,10 @@ sysv.S - Copyright (c) 2012 Alan Hourihane Copyright (c) 1998, 2012 Andreas Schwab - Copyright (c) 2008 Red Hat, Inc. - - m68k Foreign Function Interface + Copyright (c) 2008 Red Hat, Inc. 
+ Copyright (c) 2012 Thorsten Glaser + + m68k Foreign Function Interface Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -168,8 +169,22 @@ retstruct2: btst #7,%d2 + jbeq retsint8 + move.w %d0,(%a1) + jbra epilogue + +retsint8: + btst #8,%d2 + jbeq retsint16 + extb.l %d0 + move.l %d0,(%a1) + jbra epilogue + +retsint16: + btst #9,%d2 jbeq noretval - move.w %d0,(%a1) + ext.l %d0 + move.l %d0,(%a1) noretval: epilogue: @@ -201,8 +216,10 @@ lsr.l #1,%d0 jne 1f jcc .Lcls_epilogue + | CIF_FLAGS_INT move.l -12(%fp),%d0 .Lcls_epilogue: + | no CIF_FLAGS_* unlk %fp rts 1: @@ -210,6 +227,7 @@ lsr.l #2,%d0 jne 1f jcs .Lcls_ret_float + | CIF_FLAGS_DINT move.l (%a0)+,%d0 move.l (%a0),%d1 jra .Lcls_epilogue @@ -224,6 +242,7 @@ lsr.l #2,%d0 jne 1f jcs .Lcls_ret_ldouble + | CIF_FLAGS_DOUBLE #if defined(__MC68881__) || defined(__HAVE_68881__) fmove.d (%a0),%fp0 #else @@ -242,17 +261,31 @@ jra .Lcls_epilogue 1: lsr.l #2,%d0 - jne .Lcls_ret_struct2 + jne 1f jcs .Lcls_ret_struct1 + | CIF_FLAGS_POINTER move.l (%a0),%a0 move.l %a0,%d0 jra .Lcls_epilogue .Lcls_ret_struct1: move.b (%a0),%d0 jra .Lcls_epilogue -.Lcls_ret_struct2: +1: + lsr.l #2,%d0 + jne 1f + jcs .Lcls_ret_sint8 + | CIF_FLAGS_STRUCT2 move.w (%a0),%d0 jra .Lcls_epilogue +.Lcls_ret_sint8: + move.l (%a0),%d0 + extb.l %d0 + jra .Lcls_epilogue +1: + | CIF_FLAGS_SINT16 + move.l (%a0),%d0 + ext.l %d0 + jra .Lcls_epilogue CFI_ENDPROC() .size CALLFUNC(ffi_closure_SYSV),.-CALLFUNC(ffi_closure_SYSV) diff --git a/Modules/_ctypes/libffi/src/microblaze/ffi.c b/Modules/_ctypes/libffi/src/microblaze/ffi.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/microblaze/ffi.c @@ -0,0 +1,321 @@ +/* ----------------------------------------------------------------------- + ffi.c - Copyright (c) 2012, 2013 Xilinx, Inc + + MicroBlaze Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of 
this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#include +#include + +extern void ffi_call_SYSV(void (*)(void*, extended_cif*), extended_cif*, + unsigned int, unsigned int, unsigned int*, void (*fn)(void), + unsigned int, unsigned int); + +extern void ffi_closure_SYSV(void); + +#define WORD_SIZE sizeof(unsigned int) +#define ARGS_REGISTER_SIZE (WORD_SIZE * 6) +#define WORD_ALIGN(x) ALIGN(x, WORD_SIZE) + +/* ffi_prep_args is called by the assembly routine once stack space + has been allocated for the function's arguments */ +void ffi_prep_args(void* stack, extended_cif* ecif) +{ + unsigned int i; + ffi_type** p_arg; + void** p_argv; + void* stack_args_p = stack; + + p_argv = ecif->avalue; + + if (ecif == NULL || ecif->cif == NULL) { + return; /* no description to prepare */ + } + + if ((ecif->cif->rtype != NULL) && + (ecif->cif->rtype->type == FFI_TYPE_STRUCT)) + { + /* if return type is a struct which is referenced on the stack/reg5, + * by a pointer. 
Stored the return value pointer in r5. + */ + char* addr = stack_args_p; + memcpy(addr, &(ecif->rvalue), WORD_SIZE); + stack_args_p += WORD_SIZE; + } + + if (ecif->avalue == NULL) { + return; /* no arguments to prepare */ + } + + for (i = 0, p_arg = ecif->cif->arg_types; i < ecif->cif->nargs; + i++, p_arg++) + { + size_t size = (*p_arg)->size; + int type = (*p_arg)->type; + void* value = p_argv[i]; + char* addr = stack_args_p; + int aligned_size = WORD_ALIGN(size); + + /* force word alignment on the stack */ + stack_args_p += aligned_size; + + switch (type) + { + case FFI_TYPE_UINT8: + *(unsigned int *)addr = (unsigned int)*(UINT8*)(value); + break; + case FFI_TYPE_SINT8: + *(signed int *)addr = (signed int)*(SINT8*)(value); + break; + case FFI_TYPE_UINT16: + *(unsigned int *)addr = (unsigned int)*(UINT16*)(value); + break; + case FFI_TYPE_SINT16: + *(signed int *)addr = (signed int)*(SINT16*)(value); + break; + case FFI_TYPE_STRUCT: +#if __BIG_ENDIAN__ + /* + * MicroBlaze toolchain appears to emit: + * bsrli r5, r5, 8 (caller) + * ... + * + * ... + * bslli r5, r5, 8 (callee) + * + * For structs like "struct a { uint8_t a[3]; };", when passed + * by value. + * + * Structs like "struct b { uint16_t a; };" are also expected + * to be packed strangely in registers. + * + * This appears to be because the microblaze toolchain expects + * "struct b == uint16_t", which is only any issue for big + * endian. + * + * The following is a work around for big-endian only, for the + * above mentioned case, it will re-align the contents of a + * <= 3-byte struct value. 
+ */ + if (size < WORD_SIZE) + { + memcpy (addr + (WORD_SIZE - size), value, size); + break; + } +#endif + case FFI_TYPE_SINT32: + case FFI_TYPE_UINT32: + case FFI_TYPE_FLOAT: + case FFI_TYPE_SINT64: + case FFI_TYPE_UINT64: + case FFI_TYPE_DOUBLE: + default: + memcpy(addr, value, aligned_size); + } + } +} + +ffi_status ffi_prep_cif_machdep(ffi_cif* cif) +{ + /* check ABI */ + switch (cif->abi) + { + case FFI_SYSV: + break; + default: + return FFI_BAD_ABI; + } + return FFI_OK; +} + +void ffi_call(ffi_cif* cif, void (*fn)(void), void* rvalue, void** avalue) +{ + extended_cif ecif; + ecif.cif = cif; + ecif.avalue = avalue; + + /* If the return value is a struct and we don't have a return */ + /* value address then we need to make one */ + if ((rvalue == NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { + ecif.rvalue = alloca(cif->rtype->size); + } else { + ecif.rvalue = rvalue; + } + + switch (cif->abi) + { + case FFI_SYSV: + ffi_call_SYSV(ffi_prep_args, &ecif, cif->bytes, cif->flags, + ecif.rvalue, fn, cif->rtype->type, cif->rtype->size); + break; + default: + FFI_ASSERT(0); + break; + } +} + +void ffi_closure_call_SYSV(void* register_args, void* stack_args, + ffi_closure* closure, void* rvalue, + unsigned int* rtype, unsigned int* rsize) +{ + /* prepare arguments for closure call */ + ffi_cif* cif = closure->cif; + ffi_type** arg_types = cif->arg_types; + + /* re-allocate data for the args. This needs to be done in order to keep + * multi-word objects (e.g. structs) in contigious memory. Callers are not + * required to store the value of args in the lower 6 words in the stack + * (although they are allocated in the stack). 
+ */ + char* stackclone = alloca(cif->bytes); + void** avalue = alloca(cif->nargs * sizeof(void*)); + void* struct_rvalue = NULL; + char* ptr = stackclone; + int i; + + /* copy registers into stack clone */ + int registers_used = cif->bytes; + if (registers_used > ARGS_REGISTER_SIZE) { + registers_used = ARGS_REGISTER_SIZE; + } + memcpy(stackclone, register_args, registers_used); + + /* copy stack allocated args into stack clone */ + if (cif->bytes > ARGS_REGISTER_SIZE) { + int stack_used = cif->bytes - ARGS_REGISTER_SIZE; + memcpy(stackclone + ARGS_REGISTER_SIZE, stack_args, stack_used); + } + + /* preserve struct type return pointer passing */ + if ((cif->rtype != NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { + struct_rvalue = *((void**)ptr); + ptr += WORD_SIZE; + } + + /* populate arg pointer list */ + for (i = 0; i < cif->nargs; i++) + { + switch (arg_types[i]->type) + { + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT8: +#ifdef __BIG_ENDIAN__ + avalue[i] = ptr + 3; +#else + avalue[i] = ptr; +#endif + break; + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT16: +#ifdef __BIG_ENDIAN__ + avalue[i] = ptr + 2; +#else + avalue[i] = ptr; +#endif + break; + case FFI_TYPE_STRUCT: +#if __BIG_ENDIAN__ + /* + * Work around strange ABI behaviour. 
+ * (see info in ffi_prep_args) + */ + if (arg_types[i]->size < WORD_SIZE) + { + memcpy (ptr, ptr + (WORD_SIZE - arg_types[i]->size), arg_types[i]->size); + } +#endif + avalue[i] = (void*)ptr; + break; + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + case FFI_TYPE_DOUBLE: + avalue[i] = ptr; + break; + case FFI_TYPE_SINT32: + case FFI_TYPE_UINT32: + case FFI_TYPE_FLOAT: + default: + /* default 4-byte argument */ + avalue[i] = ptr; + break; + } + ptr += WORD_ALIGN(arg_types[i]->size); + } + + /* set the return type info passed back to the wrapper */ + *rsize = cif->rtype->size; + *rtype = cif->rtype->type; + if (struct_rvalue != NULL) { + closure->fun(cif, struct_rvalue, avalue, closure->user_data); + /* copy struct return pointer value into function return value */ + *((void**)rvalue) = struct_rvalue; + } else { + closure->fun(cif, rvalue, avalue, closure->user_data); + } +} + +ffi_status ffi_prep_closure_loc( + ffi_closure* closure, ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void* user_data, void* codeloc) +{ + unsigned long* tramp = (unsigned long*)&(closure->tramp[0]); + unsigned long cls = (unsigned long)codeloc; + unsigned long fn = 0; + unsigned long fn_closure_call_sysv = (unsigned long)ffi_closure_call_SYSV; + + closure->cif = cif; + closure->fun = fun; + closure->user_data = user_data; + + switch (cif->abi) + { + case FFI_SYSV: + fn = (unsigned long)ffi_closure_SYSV; + + /* load r11 (temp) with fn */ + /* imm fn(upper) */ + tramp[0] = 0xb0000000 | ((fn >> 16) & 0xffff); + /* addik r11, r0, fn(lower) */ + tramp[1] = 0x31600000 | (fn & 0xffff); + + /* load r12 (temp) with cls */ + /* imm cls(upper) */ + tramp[2] = 0xb0000000 | ((cls >> 16) & 0xffff); + /* addik r12, r0, cls(lower) */ + tramp[3] = 0x31800000 | (cls & 0xffff); + + /* load r3 (temp) with ffi_closure_call_SYSV */ + /* imm fn_closure_call_sysv(upper) */ + tramp[4] = 0xb0000000 | ((fn_closure_call_sysv >> 16) & 0xffff); + /* addik r3, r0, fn_closure_call_sysv(lower) */ + 
tramp[5] = 0x30600000 | (fn_closure_call_sysv & 0xffff); + /* branch/jump to address stored in r11 (fn) */ + tramp[6] = 0x98085800; /* bra r11 */ + + break; + default: + return FFI_BAD_ABI; + } + return FFI_OK; +} diff --git a/Modules/_ctypes/libffi/src/microblaze/ffitarget.h b/Modules/_ctypes/libffi/src/microblaze/ffitarget.h new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/microblaze/ffitarget.h @@ -0,0 +1,53 @@ +/* ----------------------------------------------------------------------- + ffitarget.h - Copyright (c) 2012, 2013 Xilinx, Inc + + Target configuration macros for MicroBlaze. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +#ifndef LIBFFI_H +#error "Please do not include ffitarget.h directly into your source. Use ffi.h instead." 
+#endif + +#ifndef LIBFFI_ASM +typedef unsigned long ffi_arg; +typedef signed long ffi_sarg; + +typedef enum ffi_abi { + FFI_FIRST_ABI = 0, + FFI_SYSV, + FFI_LAST_ABI, + FFI_DEFAULT_ABI = FFI_SYSV +} ffi_abi; +#endif + +/* Definitions for closures */ + +#define FFI_CLOSURES 1 +#define FFI_NATIVE_RAW_API 0 + +#define FFI_TRAMPOLINE_SIZE (4*8) + +#endif diff --git a/Modules/_ctypes/libffi/src/microblaze/sysv.S b/Modules/_ctypes/libffi/src/microblaze/sysv.S new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/microblaze/sysv.S @@ -0,0 +1,302 @@ +/* ----------------------------------------------------------------------- + sysv.S - Copyright (c) 2012, 2013 Xilinx, Inc + + MicroBlaze Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+ ----------------------------------------------------------------------- */ + +#define LIBFFI_ASM +#include +#include + + /* + * arg[0] (r5) = ffi_prep_args, + * arg[1] (r6) = &ecif, + * arg[2] (r7) = cif->bytes, + * arg[3] (r8) = cif->flags, + * arg[4] (r9) = ecif.rvalue, + * arg[5] (r10) = fn + * arg[6] (sp[0]) = cif->rtype->type + * arg[7] (sp[4]) = cif->rtype->size + */ + .text + .globl ffi_call_SYSV + .type ffi_call_SYSV, @function +ffi_call_SYSV: + /* push callee saves */ + addik r1, r1, -20 + swi r19, r1, 0 /* Frame Pointer */ + swi r20, r1, 4 /* PIC register */ + swi r21, r1, 8 /* PIC register */ + swi r22, r1, 12 /* save for locals */ + swi r23, r1, 16 /* save for locals */ + + /* save the r5-r10 registers in the stack */ + addik r1, r1, -24 /* increment sp to store 6x 32-bit words */ + swi r5, r1, 0 + swi r6, r1, 4 + swi r7, r1, 8 + swi r8, r1, 12 + swi r9, r1, 16 + swi r10, r1, 20 + + /* save function pointer */ + addik r3, r5, 0 /* copy ffi_prep_args into r3 */ + addik r22, r1, 0 /* save sp for unallocated args into r22 (callee-saved) */ + addik r23, r10, 0 /* save function address into r23 (callee-saved) */ + + /* prepare stack with allocation for n (bytes = r7) args */ + rsub r1, r7, r1 /* subtract bytes from sp */ + + /* prep args for ffi_prep_args call */ + addik r5, r1, 0 /* store stack pointer into arg[0] */ + /* r6 still holds ecif for arg[1] */ + + /* Call ffi_prep_args(stack, &ecif). */ + addik r1, r1, -4 + swi r15, r1, 0 /* store the link register in the frame */ + brald r15, r3 + nop /* branch has delay slot */ + lwi r15, r1, 0 + addik r1, r1, 4 /* restore the link register from the frame */ + /* returns calling stack pointer location */ + + /* prepare args for fn call, prep_args populates them onto the stack */ + lwi r5, r1, 0 /* arg[0] */ + lwi r6, r1, 4 /* arg[1] */ + lwi r7, r1, 8 /* arg[2] */ + lwi r8, r1, 12 /* arg[3] */ + lwi r9, r1, 16 /* arg[4] */ + lwi r10, r1, 20 /* arg[5] */ + + /* call (fn) (...). 
*/ + addik r1, r1, -4 + swi r15, r1, 0 /* store the link register in the frame */ + brald r15, r23 + nop /* branch has delay slot */ + lwi r15, r1, 0 + addik r1, r1, 4 /* restore the link register from the frame */ + + /* Remove the space we pushed for the args. */ + addik r1, r22, 0 /* restore old SP */ + + /* restore this functions parameters */ + lwi r5, r1, 0 /* arg[0] */ + lwi r6, r1, 4 /* arg[1] */ + lwi r7, r1, 8 /* arg[2] */ + lwi r8, r1, 12 /* arg[3] */ + lwi r9, r1, 16 /* arg[4] */ + lwi r10, r1, 20 /* arg[5] */ + addik r1, r1, 24 /* decrement sp to de-allocate 6x 32-bit words */ + + /* If the return value pointer is NULL, assume no return value. */ + beqi r9, ffi_call_SYSV_end + + lwi r22, r1, 48 /* get return type (20 for locals + 28 for arg[6]) */ + lwi r23, r1, 52 /* get return size (20 for locals + 32 for arg[7]) */ + + /* Check if return type is actually a struct, do nothing */ + rsubi r11, r22, FFI_TYPE_STRUCT + beqi r11, ffi_call_SYSV_end + + /* Return 8bit */ + rsubi r11, r23, 1 + beqi r11, ffi_call_SYSV_store8 + + /* Return 16bit */ + rsubi r11, r23, 2 + beqi r11, ffi_call_SYSV_store16 + + /* Return 32bit */ + rsubi r11, r23, 4 + beqi r11, ffi_call_SYSV_store32 + + /* Return 64bit */ + rsubi r11, r23, 8 + beqi r11, ffi_call_SYSV_store64 + + /* Didnt match anything */ + bri ffi_call_SYSV_end + +ffi_call_SYSV_store64: + swi r3, r9, 0 /* store word r3 into return value */ + swi r4, r9, 4 /* store word r4 into return value */ + bri ffi_call_SYSV_end + +ffi_call_SYSV_store32: + swi r3, r9, 0 /* store word r3 into return value */ + bri ffi_call_SYSV_end + +ffi_call_SYSV_store16: +#ifdef __BIG_ENDIAN__ + shi r3, r9, 2 /* store half-word r3 into return value */ +#else + shi r3, r9, 0 /* store half-word r3 into return value */ +#endif + bri ffi_call_SYSV_end + +ffi_call_SYSV_store8: +#ifdef __BIG_ENDIAN__ + sbi r3, r9, 3 /* store byte r3 into return value */ +#else + sbi r3, r9, 0 /* store byte r3 into return value */ +#endif + bri ffi_call_SYSV_end + 
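The size dispatch above (`ffi_call_SYSV_store8/16/32/64`) stores sub-word results at a byte offset on big-endian MicroBlaze, so the meaningful low-order bytes of the 32-bit register word land at the start of the caller's return slot. A hedged C model of that offset computation follows; `WORD_SIZE` mirrors the patch's define, but `ret_store_offset` is a hypothetical helper for illustration, not libffi code.

```c
#include <assert.h>
#include <stddef.h>

#define WORD_SIZE 4u  /* MicroBlaze word size, as in the patch */

/* A 1-, 2- or 4-byte return value comes back in register r3 as a full
   32-bit word.  On a big-endian target the value occupies the *last*
   bytes of that word, which is why the patch's sbi/shi instructions
   store at offset 3 (bytes) and 2 (halfwords) under __BIG_ENDIAN__,
   and at offset 0 otherwise. */
static size_t ret_store_offset(size_t size, int big_endian)
{
    if (size >= WORD_SIZE)
        return 0;                       /* whole word or word pair */
    return big_endian ? WORD_SIZE - size : 0;
}
```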
+ffi_call_SYSV_end: + /* callee restores */ + lwi r19, r1, 0 /* frame pointer */ + lwi r20, r1, 4 /* PIC register */ + lwi r21, r1, 8 /* PIC register */ + lwi r22, r1, 12 + lwi r23, r1, 16 + addik r1, r1, 20 + + /* return from sub-routine (with delay slot) */ + rtsd r15, 8 + nop + + .size ffi_call_SYSV, . - ffi_call_SYSV + +/* ------------------------------------------------------------------------- */ + + /* + * args passed into this function, are passed down to the callee. + * this function is the target of the closure trampoline, as such r12 is + * a pointer to the closure object. + */ + .text + .globl ffi_closure_SYSV + .type ffi_closure_SYSV, @function +ffi_closure_SYSV: + /* push callee saves */ + addik r11, r1, 28 /* save stack args start location (excluding regs/link) */ + addik r1, r1, -12 + swi r19, r1, 0 /* Frame Pointer */ + swi r20, r1, 4 /* PIC register */ + swi r21, r1, 8 /* PIC register */ + + /* store register args on stack */ + addik r1, r1, -24 + swi r5, r1, 0 + swi r6, r1, 4 + swi r7, r1, 8 + swi r8, r1, 12 + swi r9, r1, 16 + swi r10, r1, 20 + + /* setup args */ + addik r5, r1, 0 /* register_args */ + addik r6, r11, 0 /* stack_args */ + addik r7, r12, 0 /* closure object */ + addik r1, r1, -8 /* allocate return value */ + addik r8, r1, 0 /* void* rvalue */ + addik r1, r1, -8 /* allocate for reutrn type/size values */ + addik r9, r1, 0 /* void* rtype */ + addik r10, r1, 4 /* void* rsize */ + + /* call the wrap_call function */ + addik r1, r1, -28 /* allocate args + link reg */ + swi r15, r1, 0 /* store the link register in the frame */ + brald r15, r3 + nop /* branch has delay slot */ + lwi r15, r1, 0 + addik r1, r1, 28 /* restore the link register from the frame */ + +ffi_closure_SYSV_prepare_return: + lwi r9, r1, 0 /* rtype */ + lwi r10, r1, 4 /* rsize */ + addik r1, r1, 8 /* de-allocate return info values */ + + /* Check if return type is actually a struct, store 4 bytes */ + rsubi r11, r9, FFI_TYPE_STRUCT + beqi r11, ffi_closure_SYSV_store32 
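Several of these ports widen signed sub-word return values to a full register before handing them back (the m68k hunks above add `extb.l`/`ext.l` paths for `CIF_FLAGS_SINT8`/`SINT16`). The equivalent widening can be expressed in portable C; `widen_ret` is an illustrative helper, not part of libffi.

```c
#include <assert.h>
#include <stdint.h>

/* Widen a sub-word return value held in a 32-bit register image:
   sign-extend signed types (as extb.l/ext.l and sext8/sext16 do),
   zero-extend unsigned ones. */
static uint32_t widen_ret(uint32_t raw, unsigned size, int is_signed)
{
    if (size == 1)
        return is_signed ? (uint32_t)(int32_t)(int8_t)(raw & 0xffu)
                         : (raw & 0xffu);
    if (size == 2)
        return is_signed ? (uint32_t)(int32_t)(int16_t)(raw & 0xffffu)
                         : (raw & 0xffffu);
    return raw;  /* 32-bit value: nothing to do */
}
```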
+ + /* Return 8bit */ + rsubi r11, r10, 1 + beqi r11, ffi_closure_SYSV_store8 + + /* Return 16bit */ + rsubi r11, r10, 2 + beqi r11, ffi_closure_SYSV_store16 + + /* Return 32bit */ + rsubi r11, r10, 4 + beqi r11, ffi_closure_SYSV_store32 + + /* Return 64bit */ + rsubi r11, r10, 8 + beqi r11, ffi_closure_SYSV_store64 + + /* Didnt match anything */ + bri ffi_closure_SYSV_end + +ffi_closure_SYSV_store64: + lwi r3, r1, 0 /* store word r3 into return value */ + lwi r4, r1, 4 /* store word r4 into return value */ + /* 64 bits == 2 words, no sign extend occurs */ + bri ffi_closure_SYSV_end + +ffi_closure_SYSV_store32: + lwi r3, r1, 0 /* store word r3 into return value */ + /* 32 bits == 1 word, no sign extend occurs */ + bri ffi_closure_SYSV_end + +ffi_closure_SYSV_store16: +#ifdef __BIG_ENDIAN__ + lhui r3, r1, 2 /* store half-word r3 into return value */ +#else + lhui r3, r1, 0 /* store half-word r3 into return value */ +#endif + rsubi r11, r9, FFI_TYPE_SINT16 + bnei r11, ffi_closure_SYSV_end + sext16 r3, r3 /* fix sign extend of sint8 */ + bri ffi_closure_SYSV_end + +ffi_closure_SYSV_store8: +#ifdef __BIG_ENDIAN__ + lbui r3, r1, 3 /* store byte r3 into return value */ +#else + lbui r3, r1, 0 /* store byte r3 into return value */ +#endif + rsubi r11, r9, FFI_TYPE_SINT8 + bnei r11, ffi_closure_SYSV_end + sext8 r3, r3 /* fix sign extend of sint8 */ + bri ffi_closure_SYSV_end + +ffi_closure_SYSV_end: + addik r1, r1, 8 /* de-allocate return value */ + + /* de-allocate stored args */ + addik r1, r1, 24 + + /* callee restores */ + lwi r19, r1, 0 /* frame pointer */ + lwi r20, r1, 4 /* PIC register */ + lwi r21, r1, 8 /* PIC register */ + addik r1, r1, 12 + + /* return from sub-routine (with delay slot) */ + rtsd r15, 8 + nop + + .size ffi_closure_SYSV, . 
- ffi_closure_SYSV diff --git a/Modules/_ctypes/libffi/src/mips/ffi.c b/Modules/_ctypes/libffi/src/mips/ffi.c --- a/Modules/_ctypes/libffi/src/mips/ffi.c +++ b/Modules/_ctypes/libffi/src/mips/ffi.c @@ -670,9 +670,16 @@ if (cif->abi != FFI_O32 && cif->abi != FFI_O32_SOFT_FLOAT) return FFI_BAD_ABI; fn = ffi_closure_O32; -#else /* FFI_MIPS_N32 */ - if (cif->abi != FFI_N32 && cif->abi != FFI_N64) +#else +#if _MIPS_SIM ==_ABIN32 + if (cif->abi != FFI_N32 + && cif->abi != FFI_N32_SOFT_FLOAT) return FFI_BAD_ABI; +#else + if (cif->abi != FFI_N64 + && cif->abi != FFI_N64_SOFT_FLOAT) + return FFI_BAD_ABI; +#endif fn = ffi_closure_N32; #endif /* FFI_MIPS_O32 */ diff --git a/Modules/_ctypes/libffi/src/moxie/eabi.S b/Modules/_ctypes/libffi/src/moxie/eabi.S --- a/Modules/_ctypes/libffi/src/moxie/eabi.S +++ b/Modules/_ctypes/libffi/src/moxie/eabi.S @@ -1,7 +1,7 @@ /* ----------------------------------------------------------------------- - eabi.S - Copyright (c) 2004 Anthony Green + eabi.S - Copyright (c) 2012, 2013 Anthony Green - FR-V Assembly glue. + Moxie Assembly glue. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the @@ -34,95 +34,68 @@ .globl ffi_call_EABI .type ffi_call_EABI, @function - # gr8 : ffi_prep_args - # gr9 : &ecif - # gr10: cif->bytes - # gr11: fig->flags - # gr12: ecif.rvalue - # gr13: fn + # $r0 : ffi_prep_args + # $r1 : &ecif + # $r2 : cif->bytes + # $r3 : fig->flags + # $r4 : ecif.rvalue + # $r5 : fn -ffi_call_EABI: - addi sp, #-80, sp - sti fp, @(sp, #24) - addi sp, #24, fp - movsg lr, gr5 +ffi_call_EABI: + push $sp, $r6 + push $sp, $r7 + push $sp, $r8 + dec $sp, 24 - /* Make room for the new arguments. */ - /* subi sp, fp, gr10 */ - - /* Store return address and incoming args on stack. 
*/ - sti gr5, @(fp, #8) - sti gr8, @(fp, #-4) - sti gr9, @(fp, #-8) - sti gr10, @(fp, #-12) - sti gr11, @(fp, #-16) - sti gr12, @(fp, #-20) - sti gr13, @(fp, #-24) - - sub sp, gr10, sp + /* Store incoming args on stack. */ + sto.l 0($sp), $r0 /* ffi_prep_args */ + sto.l 4($sp), $r1 /* ecif */ + sto.l 8($sp), $r2 /* bytes */ + sto.l 12($sp), $r3 /* flags */ + sto.l 16($sp), $r4 /* &rvalue */ + sto.l 20($sp), $r5 /* fn */ /* Call ffi_prep_args. */ - ldi @(fp, #-4), gr4 - addi sp, #0, gr8 - ldi @(fp, #-8), gr9 -#ifdef __FRV_FDPIC__ - ldd @(gr4, gr0), gr14 - calll @(gr14, gr0) -#else - calll @(gr4, gr0) -#endif + mov $r6, $r4 /* Save result buffer */ + mov $r7, $r5 /* Save the target fn */ + mov $r8, $r3 /* Save the flags */ + sub.l $sp, $r2 /* Allocate stack space */ + mov $r0, $sp /* We can stomp over $r0 */ + /* $r1 is already set up */ + jsra ffi_prep_args - /* ffi_prep_args returns the new stack pointer. */ - mov gr8, gr4 - - ldi @(sp, #0), gr8 - ldi @(sp, #4), gr9 - ldi @(sp, #8), gr10 - ldi @(sp, #12), gr11 - ldi @(sp, #16), gr12 - ldi @(sp, #20), gr13 - - /* Always copy the return value pointer into the hidden - parameter register. This is only strictly necessary - when we're returning an aggregate type, but it doesn't - hurt to do this all the time, and it saves a branch. */ - ldi @(fp, #-20), gr3 - - /* Use the ffi_prep_args return value for the new sp. */ - mov gr4, sp + /* Load register arguments. */ + ldo.l $r0, 0($sp) + ldo.l $r1, 4($sp) + ldo.l $r2, 8($sp) + ldo.l $r3, 12($sp) + ldo.l $r4, 16($sp) + ldo.l $r5, 20($sp) /* Call the target function. */ - ldi @(fp, -24), gr4 -#ifdef __FRV_FDPIC__ - ldd @(gr4, gr0), gr14 - calll @(gr14, gr0) -#else - calll @(gr4, gr0) -#endif + jsr $r7 - /* Store the result. */ - ldi @(fp, #-16), gr10 /* fig->flags */ - ldi @(fp, #-20), gr4 /* ecif.rvalue */ + ldi.l $r7, 0xffffffff + cmp $r8, $r7 + beq retstruct - /* Is the return value stored in two registers? */ - cmpi gr10, #8, icc0 - bne icc0, 0, .L2 - /* Yes, save them. 
*/ - sti gr8, @(gr4, #0) - sti gr9, @(gr4, #4) - bra .L3 -.L2: - /* Is the return value a structure? */ - cmpi gr10, #-1, icc0 - beq icc0, 0, .L3 - /* No, save a 4 byte return value. */ - sti gr8, @(gr4, #0) -.L3: + ldi.l $r7, 4 + cmp $r8, $r7 + bgt ret2reg - /* Restore the stack, and return. */ - ldi @(fp, 8), gr5 - ld @(fp, gr0), fp - addi sp,#80,sp - jmpl @(gr5,gr0) + st.l ($r6), $r0 + jmpa retdone + +ret2reg: + st.l ($r6), $r0 + sto.l 4($r6), $r1 + +retstruct: +retdone: + /* Return. */ + ldo.l $r6, -4($fp) + ldo.l $r7, -8($fp) + ldo.l $r8, -12($fp) + ret .size ffi_call_EABI, .-ffi_call_EABI diff --git a/Modules/_ctypes/libffi/src/moxie/ffi.c b/Modules/_ctypes/libffi/src/moxie/ffi.c --- a/Modules/_ctypes/libffi/src/moxie/ffi.c +++ b/Modules/_ctypes/libffi/src/moxie/ffi.c @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (C) 2009 Anthony Green + ffi.c - Copyright (C) 2012, 2013 Anthony Green Moxie Foreign Function Interface @@ -43,6 +43,12 @@ p_argv = ecif->avalue; argp = stack; + if (ecif->cif->rtype->type == FFI_TYPE_STRUCT) + { + *(void **) argp = ecif->rvalue; + argp += 4; + } + for (i = ecif->cif->nargs, p_arg = ecif->cif->arg_types; (i != 0); i--, p_arg++) @@ -56,17 +62,6 @@ z = sizeof(void*); *(void **) argp = *p_argv; } - /* if ((*p_arg)->type == FFI_TYPE_FLOAT) - { - if (count > 24) - { - // This is going on the stack. Turn it into a double. - *(double *) argp = (double) *(float*)(* p_argv); - z = sizeof(double); - } - else - *(void **) argp = *(void **)(* p_argv); - } */ else if (z < sizeof(int)) { z = sizeof(int); @@ -147,8 +142,7 @@ } else ecif.rvalue = rvalue; - - + switch (cif->abi) { case FFI_EABI: @@ -165,19 +159,25 @@ unsigned arg4, unsigned arg5, unsigned arg6) { /* This function is called by a trampoline. The trampoline stows a - pointer to the ffi_closure object in gr7. We must save this + pointer to the ffi_closure object in $r7. 
We must save this pointer in a place that will persist while we do our work. */ - register ffi_closure *creg __asm__ ("gr7"); + register ffi_closure *creg __asm__ ("$r12"); ffi_closure *closure = creg; /* Arguments that don't fit in registers are found on the stack at a fixed offset above the current frame pointer. */ - register char *frame_pointer __asm__ ("fp"); - char *stack_args = frame_pointer + 16; + register char *frame_pointer __asm__ ("$fp"); + + /* Pointer to a struct return value. */ + void *struct_rvalue = (void *) arg1; + + /* 6 words reserved for register args + 3 words from jsr */ + char *stack_args = frame_pointer + 9*4; /* Lay the register arguments down in a continuous chunk of memory. */ unsigned register_args[6] = { arg1, arg2, arg3, arg4, arg5, arg6 }; + char *register_args_ptr = (char *) register_args; ffi_cif *cif = closure->cif; ffi_type **arg_types = cif->arg_types; @@ -185,6 +185,12 @@ char *ptr = (char *) register_args; int i; + /* preserve struct type return pointer passing */ + if ((cif->rtype != NULL) && (cif->rtype->type == FFI_TYPE_STRUCT)) { + ptr += 4; + register_args_ptr = (char *)®ister_args[1]; + } + /* Find the address of each argument. */ for (i = 0; i < cif->nargs; i++) { @@ -201,6 +207,7 @@ case FFI_TYPE_SINT32: case FFI_TYPE_UINT32: case FFI_TYPE_FLOAT: + case FFI_TYPE_POINTER: avalue[i] = ptr; break; case FFI_TYPE_STRUCT: @@ -216,30 +223,21 @@ /* If we've handled more arguments than fit in registers, start looking at the those passed on the stack. */ - if (ptr == ((char *)register_args + (6*4))) + if (ptr == ®ister_args[6]) ptr = stack_args; } /* Invoke the closure. */ - if (cif->rtype->type == FFI_TYPE_STRUCT) + if (cif->rtype && (cif->rtype->type == FFI_TYPE_STRUCT)) { - /* The caller allocates space for the return structure, and - passes a pointer to this space in gr3. Use this value directly - as the return value. 
*/ - register void *return_struct_ptr __asm__("gr3"); - (closure->fun) (cif, return_struct_ptr, avalue, closure->user_data); + (closure->fun) (cif, struct_rvalue, avalue, closure->user_data); } else { /* Allocate space for the return value and call the function. */ long long rvalue; (closure->fun) (cif, &rvalue, avalue, closure->user_data); - - /* Functions return 4-byte or smaller results in gr8. 8-byte - values also use gr9. We fill the both, even for small return - values, just to avoid a branch. */ - asm ("ldi @(%0, #0), gr8" : : "r" (&rvalue)); - asm ("ldi @(%0, #0), gr9" : : "r" (&((int *) &rvalue)[1])); + asm ("mov $r12, %0\n ld.l $r0, ($r12)\n ldo.l $r1, 4($r12)" : : "r" (&rvalue)); } } @@ -250,27 +248,25 @@ void *user_data, void *codeloc) { - unsigned int *tramp = (unsigned int *) &closure->tramp[0]; + unsigned short *tramp = (unsigned short *) &closure->tramp[0]; unsigned long fn = (long) ffi_closure_eabi; unsigned long cls = (long) codeloc; - int i; + + if (cif->abi != FFI_EABI) + return FFI_BAD_ABI; fn = (unsigned long) ffi_closure_eabi; - tramp[0] = 0x8cfc0000 + (fn & 0xffff); /* setlos lo(fn), gr6 */ - tramp[1] = 0x8efc0000 + (cls & 0xffff); /* setlos lo(cls), gr7 */ - tramp[2] = 0x8cf80000 + (fn >> 16); /* sethi hi(fn), gr6 */ - tramp[3] = 0x8ef80000 + (cls >> 16); /* sethi hi(cls), gr7 */ - tramp[4] = 0x80300006; /* jmpl @(gr0, gr6) */ + tramp[0] = 0x01e0; /* ldi.l $r7, .... */ + tramp[1] = cls >> 16; + tramp[2] = cls & 0xffff; + tramp[3] = 0x1a00; /* jmpa .... */ + tramp[4] = fn >> 16; + tramp[5] = fn & 0xffff; closure->cif = cif; closure->fun = fun; closure->user_data = user_data; - /* Cache flushing. 
*/ - for (i = 0; i < FFI_TRAMPOLINE_SIZE; i++) - __asm__ volatile ("dcf @(%0,%1)\n\tici @(%2,%1)" :: "r" (tramp), "r" (i), - "r" (codeloc)); - return FFI_OK; } diff --git a/Modules/_ctypes/libffi/src/moxie/ffitarget.h b/Modules/_ctypes/libffi/src/moxie/ffitarget.h new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/moxie/ffitarget.h @@ -0,0 +1,52 @@ +/* -----------------------------------------------------------------*-C-*- + ffitarget.h - Copyright (c) 2012, 2013 Anthony Green + Target configuration macros for Moxie + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+ + ----------------------------------------------------------------------- */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +/* ---- System specific configurations ----------------------------------- */ + +#ifndef LIBFFI_ASM +typedef unsigned long ffi_arg; +typedef signed long ffi_sarg; + +typedef enum ffi_abi { + FFI_FIRST_ABI = 0, + FFI_EABI, + FFI_DEFAULT_ABI = FFI_EABI, + FFI_LAST_ABI = FFI_DEFAULT_ABI + 1 +} ffi_abi; +#endif + +/* ---- Definitions for closures ----------------------------------------- */ + +#define FFI_CLOSURES 1 +#define FFI_NATIVE_RAW_API 0 + +/* Trampolines are 12-bytes long. See ffi_prep_closure_loc. */ +#define FFI_TRAMPOLINE_SIZE (12) + +#endif diff --git a/Modules/_ctypes/libffi/src/powerpc/aix.S b/Modules/_ctypes/libffi/src/powerpc/aix.S --- a/Modules/_ctypes/libffi/src/powerpc/aix.S +++ b/Modules/_ctypes/libffi/src/powerpc/aix.S @@ -137,7 +137,7 @@ mtcrf 0x40, r31 mtctr r0 /* Load all those argument registers. */ - // We have set up a nice stack frame, just load it into registers. + /* We have set up a nice stack frame, just load it into registers. */ ld r3, 40+(1*8)(r1) ld r4, 40+(2*8)(r1) ld r5, 40+(3*8)(r1) @@ -150,7 +150,7 @@ L1: /* Load all the FP registers. */ - bf 6,L2 // 2f + 0x18 + bf 6,L2 /* 2f + 0x18 */ lfd f1,-32-(13*8)(r28) lfd f2,-32-(12*8)(r28) lfd f3,-32-(11*8)(r28) @@ -239,7 +239,7 @@ mtcrf 0x40, r31 mtctr r0 /* Load all those argument registers. */ - // We have set up a nice stack frame, just load it into registers. + /* We have set up a nice stack frame, just load it into registers. */ lwz r3, 20+(1*4)(r1) lwz r4, 20+(2*4)(r1) lwz r5, 20+(3*4)(r1) @@ -252,7 +252,7 @@ L1: /* Load all the FP registers. 
*/ - bf 6,L2 // 2f + 0x18 + bf 6,L2 /* 2f + 0x18 */ lfd f1,-16-(13*8)(r28) lfd f2,-16-(12*8)(r28) lfd f3,-16-(11*8)(r28) @@ -307,7 +307,7 @@ #endif .long 0 .byte 0,0,0,1,128,4,0,0 -//END(ffi_call_AIX) +/* END(ffi_call_AIX) */ .csect .text[PR] .align 2 @@ -325,4 +325,4 @@ blr .long 0 .byte 0,0,0,0,0,0,0,0 -//END(ffi_call_DARWIN) +/* END(ffi_call_DARWIN) */ diff --git a/Modules/_ctypes/libffi/src/powerpc/ffi.c b/Modules/_ctypes/libffi/src/powerpc/ffi.c --- a/Modules/_ctypes/libffi/src/powerpc/ffi.c +++ b/Modules/_ctypes/libffi/src/powerpc/ffi.c @@ -48,6 +48,11 @@ FLAG_RETURNS_128BITS = 1 << (31-27), /* cr6 */ + FLAG_SYSV_SMST_R4 = 1 << (31-26), /* use r4 for FFI_SYSV 8 byte + structs. */ + FLAG_SYSV_SMST_R3 = 1 << (31-25), /* use r3 for FFI_SYSV 4 byte + structs. */ + FLAG_ARG_NEEDS_COPY = 1 << (31- 7), #ifndef __NO_FPRS__ FLAG_FP_ARGUMENTS = 1 << (31- 6), /* cr1.eq; specified by ABI */ @@ -367,6 +372,12 @@ /* Check that we didn't overrun the stack... */ FFI_ASSERT (copy_space.c >= next_arg.c); FFI_ASSERT (gpr_base.u <= stacktop.u - ASM_NEEDS_REGISTERS); + /* The assert below is testing that the number of integer arguments agrees + with the number found in ffi_prep_cif_machdep(). However, intarg_count + is incremented whenever we place an FP arg on the stack, so account for + that before our assert test. */ + if (fparg_count > NUM_FPR_ARG_REGISTERS) + intarg_count -= fparg_count - NUM_FPR_ARG_REGISTERS; #ifndef __NO_FPRS__ FFI_ASSERT (fpr_base.u <= stacktop.u - ASM_NEEDS_REGISTERS - NUM_GPR_ARG_REGISTERS); @@ -664,9 +675,11 @@ switch (type) { #ifndef __NO_FPRS__ +#if FFI_TYPE_LONGDOUBLE != FFI_TYPE_DOUBLE case FFI_TYPE_LONGDOUBLE: flags |= FLAG_RETURNS_128BITS; /* Fall through. */ +#endif case FFI_TYPE_DOUBLE: flags |= FLAG_RETURNS_64BITS; /* Fall through. */ @@ -684,18 +697,35 @@ break; case FFI_TYPE_STRUCT: - /* - * The final SYSV ABI says that structures smaller or equal 8 bytes - * are returned in r3/r4. The FFI_GCC_SYSV ABI instead returns them - * in memory.
- * - * NOTE: The assembly code can safely assume that it just needs to - * store both r3 and r4 into a 8-byte word-aligned buffer, as - * we allocate a temporary buffer in ffi_call() if this flag is - * set. - */ - if (cif->abi == FFI_SYSV && size <= 8) - flags |= FLAG_RETURNS_SMST; + if (cif->abi == FFI_SYSV) + { + /* The final SYSV ABI says that structures smaller or equal 8 bytes + are returned in r3/r4. The FFI_GCC_SYSV ABI instead returns them + in memory. */ + + /* Treat structs with size <= 8 bytes. */ + if (size <= 8) + { + flags |= FLAG_RETURNS_SMST; + /* These structs are returned in r3. We pack the type and the + precalculated shift value (needed in the sysv.S) into flags. + The same applies for the structs returned in r3/r4. */ + if (size <= 4) + { + flags |= FLAG_SYSV_SMST_R3; + flags |= 8 * (4 - size) << 8; + break; + } + /* These structs are returned in r3 and r4. See above. */ + if (size <= 8) + { + flags |= FLAG_SYSV_SMST_R3 | FLAG_SYSV_SMST_R4; + flags |= 8 * (8 - size) << 8; + break; + } + } + } + intarg_count++; flags |= FLAG_RETVAL_REFERENCE; /* Fall through. */ diff --git a/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c b/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c --- a/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c +++ b/Modules/_ctypes/libffi/src/powerpc/ffi_darwin.c @@ -302,10 +302,10 @@ } /* Check that we didn't overrun the stack... 
*/ - //FFI_ASSERT(gpr_base <= stacktop - ASM_NEEDS_REGISTERS); - //FFI_ASSERT((unsigned *)fpr_base - // <= stacktop - ASM_NEEDS_REGISTERS - NUM_GPR_ARG_REGISTERS); - //FFI_ASSERT(flags & FLAG_4_GPR_ARGUMENTS || intarg_count <= 4); + /* FFI_ASSERT(gpr_base <= stacktop - ASM_NEEDS_REGISTERS); + FFI_ASSERT((unsigned *)fpr_base + <= stacktop - ASM_NEEDS_REGISTERS - NUM_GPR_ARG_REGISTERS); + FFI_ASSERT(flags & FLAG_4_GPR_ARGUMENTS || intarg_count <= 4); */ } #if defined(POWERPC_DARWIN64) diff --git a/Modules/_ctypes/libffi/src/powerpc/linux64.S b/Modules/_ctypes/libffi/src/powerpc/linux64.S --- a/Modules/_ctypes/libffi/src/powerpc/linux64.S +++ b/Modules/_ctypes/libffi/src/powerpc/linux64.S @@ -30,16 +30,25 @@ #include #ifdef __powerpc64__ - .hidden ffi_call_LINUX64, .ffi_call_LINUX64 - .globl ffi_call_LINUX64, .ffi_call_LINUX64 + .hidden ffi_call_LINUX64 + .globl ffi_call_LINUX64 .section ".opd","aw" .align 3 ffi_call_LINUX64: +#ifdef _CALL_LINUX + .quad .L.ffi_call_LINUX64,.TOC.@tocbase,0 + .type ffi_call_LINUX64,@function + .text +.L.ffi_call_LINUX64: +#else + .hidden .ffi_call_LINUX64 + .globl .ffi_call_LINUX64 .quad .ffi_call_LINUX64,.TOC.@tocbase,0 .size ffi_call_LINUX64,24 .type .ffi_call_LINUX64,@function .text .ffi_call_LINUX64: +#endif .LFB1: mflr %r0 std %r28, -32(%r1) @@ -58,7 +67,11 @@ /* Call ffi_prep_args64.
*/ mr %r4, %r1 +#ifdef _CALL_LINUX + bl ffi_prep_args64 +#else bl .ffi_prep_args64 +#endif ld %r0, 0(%r29) ld %r2, 8(%r29) @@ -137,7 +150,11 @@ .LFE1: .long 0 .byte 0,12,0,1,128,4,0,0 +#ifdef _CALL_LINUX + .size ffi_call_LINUX64,.-.L.ffi_call_LINUX64 +#else .size .ffi_call_LINUX64,.-.ffi_call_LINUX64 +#endif .section .eh_frame,EH_FRAME_FLAGS,@progbits .Lframe1: diff --git a/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S b/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S --- a/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S +++ b/Modules/_ctypes/libffi/src/powerpc/linux64_closure.S @@ -32,16 +32,24 @@ #ifdef __powerpc64__ FFI_HIDDEN (ffi_closure_LINUX64) - FFI_HIDDEN (.ffi_closure_LINUX64) - .globl ffi_closure_LINUX64, .ffi_closure_LINUX64 + .globl ffi_closure_LINUX64 .section ".opd","aw" .align 3 ffi_closure_LINUX64: +#ifdef _CALL_LINUX + .quad .L.ffi_closure_LINUX64,.TOC.@tocbase,0 + .type ffi_closure_LINUX64,@function + .text +.L.ffi_closure_LINUX64: +#else + FFI_HIDDEN (.ffi_closure_LINUX64) + .globl .ffi_closure_LINUX64 .quad .ffi_closure_LINUX64,.TOC.@tocbase,0 .size ffi_closure_LINUX64,24 .type .ffi_closure_LINUX64,@function .text .ffi_closure_LINUX64: +#endif .LFB1: # save general regs into parm save area std %r3, 48(%r1) @@ -91,7 +99,11 @@ addi %r6, %r1, 128 # make the call +#ifdef _CALL_LINUX + bl ffi_closure_helper_LINUX64 +#else bl .ffi_closure_helper_LINUX64 +#endif .Lret: # now r3 contains the return type @@ -194,7 +206,11 @@ .LFE1: .long 0 .byte 0,12,0,1,128,0,0,0 +#ifdef _CALL_LINUX + .size ffi_closure_LINUX64,.-.L.ffi_closure_LINUX64 +#else .size .ffi_closure_LINUX64,.-.ffi_closure_LINUX64 +#endif .section .eh_frame,EH_FRAME_FLAGS,@progbits .Lframe1: diff --git a/Modules/_ctypes/libffi/src/powerpc/sysv.S b/Modules/_ctypes/libffi/src/powerpc/sysv.S --- a/Modules/_ctypes/libffi/src/powerpc/sysv.S +++ b/Modules/_ctypes/libffi/src/powerpc/sysv.S @@ -142,14 +142,19 @@ #endif L(small_struct_return_value): - /* - * The C code always allocates a properly-aligned 8-byte bounce - * buffer to make this assembly code very simple. Just write out - * r3 and r4 to the buffer to allow the C code to handle the rest. - */ - stw %r3, 0(%r30) - stw %r4, 4(%r30) - b L(done_return_value) + extrwi %r6,%r31,2,19 /* number of bytes padding = shift/8 */ + mtcrf 0x02,%r31 /* copy flags to cr[24:27] (cr6) */ + extrwi %r5,%r31,5,19 /* r5 <- number of bits of padding */ + subfic %r6,%r6,4 /* r6 <- number of useful bytes in r3 */ + bf- 25,L(done_return_value) /* struct in r3 ? if not, done. */ +/* smst_one_register: */ + slw %r3,%r3,%r5 /* Left-justify value in r3 */ + mtxer %r6 /* move byte count to XER ... */ + stswx %r3,0,%r30 /* ... and store that many bytes */ + bf+ 26,L(done_return_value) /* struct in r3:r4 ?
*/ + add %r6,%r6,%r30 /* adjust pointer */ + stswi %r4,%r6,4 /* store last four bytes */ + b L(done_return_value) .LFE1: END(ffi_call_SYSV) diff --git a/Modules/_ctypes/libffi/src/prep_cif.c b/Modules/_ctypes/libffi/src/prep_cif.c --- a/Modules/_ctypes/libffi/src/prep_cif.c +++ b/Modules/_ctypes/libffi/src/prep_cif.c @@ -140,6 +140,13 @@ #ifdef SPARC && (cif->abi != FFI_V9 || cif->rtype->size > 32) #endif +#ifdef TILE + && (cif->rtype->size > 10 * FFI_SIZEOF_ARG) +#endif +#ifdef XTENSA + && (cif->rtype->size > 16) +#endif + ) bytes = STACK_ARG_SIZE(sizeof(void*)); #endif @@ -169,6 +176,20 @@ if (((*ptr)->alignment - 1) & bytes) bytes = ALIGN(bytes, (*ptr)->alignment); +#ifdef TILE + if (bytes < 10 * FFI_SIZEOF_ARG && + bytes + STACK_ARG_SIZE((*ptr)->size) > 10 * FFI_SIZEOF_ARG) + { + /* An argument is never split between the 10 parameter + registers and the stack. */ + bytes = 10 * FFI_SIZEOF_ARG; + } +#endif +#ifdef XTENSA + if (bytes <= 6*4 && bytes + STACK_ARG_SIZE((*ptr)->size) > 6*4) + bytes = 6*4; +#endif + bytes += STACK_ARG_SIZE((*ptr)->size); } #endif diff --git a/Modules/_ctypes/libffi/src/s390/ffi.c b/Modules/_ctypes/libffi/src/s390/ffi.c --- a/Modules/_ctypes/libffi/src/s390/ffi.c +++ b/Modules/_ctypes/libffi/src/s390/ffi.c @@ -750,7 +750,8 @@ void *user_data, void *codeloc) { - FFI_ASSERT (cif->abi == FFI_SYSV); + if (cif->abi != FFI_SYSV) + return FFI_BAD_ABI; #ifndef __s390x__ *(short *)&closure->tramp [0] = 0x0d10; /* basr %r1,0 */ diff --git a/Modules/_ctypes/libffi/src/sparc/ffi.c b/Modules/_ctypes/libffi/src/sparc/ffi.c --- a/Modules/_ctypes/libffi/src/sparc/ffi.c +++ b/Modules/_ctypes/libffi/src/sparc/ffi.c @@ -1,5 +1,5 @@ /* ----------------------------------------------------------------------- - ffi.c - Copyright (c) 2011 Anthony Green + ffi.c - Copyright (c) 2011, 2013 Anthony Green Copyright (c) 1996, 2003-2004, 2007-2008 Red Hat, Inc. 
SPARC Foreign Function Interface @@ -376,6 +376,10 @@ unsigned, unsigned *, void (*fn)(void)); #endif +#ifndef __GNUC__ +void ffi_flush_icache (void *, size_t); +#endif + void ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) { extended_cif ecif; @@ -417,7 +421,7 @@ /* behind "call", so we alloc some executable space for it. */ /* l7 is used, we need to make sure v8.S doesn't use %l7. */ unsigned int *call_struct = NULL; - ffi_closure_alloc(32, &call_struct); + ffi_closure_alloc(32, (void **)&call_struct); if (call_struct) { unsigned long f = (unsigned long)fn; @@ -432,10 +436,14 @@ call_struct[5] = 0x01000000; /* nop */ call_struct[6] = 0x81c7e008; /* ret */ call_struct[7] = 0xbe100017; /* mov %l7, %i7 */ +#ifdef __GNUC__ asm volatile ("iflush %0; iflush %0+8; iflush %0+16; iflush %0+24" : : "r" (call_struct) : "memory"); /* SPARC v8 requires 5 instructions for flush to be visible */ asm volatile ("nop; nop; nop; nop; nop"); +#else + ffi_flush_icache (call_struct, 32); +#endif ffi_call_v8(ffi_prep_args_v8, &ecif, cif->bytes, cif->flags, rvalue, call_struct); ffi_closure_free(call_struct); @@ -513,6 +521,7 @@ closure->user_data = user_data; /* Flush the Icache. closure is 8 bytes aligned. */ +#ifdef __GNUC__ #ifdef SPARC64 asm volatile ("flush %0; flush %0+8" : : "r" (closure) : "memory"); #else @@ -520,6 +529,9 @@ /* SPARC v8 requires 5 instructions for flush to be visible */ asm volatile ("nop; nop; nop; nop; nop"); #endif +#else + ffi_flush_icache (closure, 16); +#endif return FFI_OK; } diff --git a/Modules/_ctypes/libffi/src/sparc/v8.S b/Modules/_ctypes/libffi/src/sparc/v8.S --- a/Modules/_ctypes/libffi/src/sparc/v8.S +++ b/Modules/_ctypes/libffi/src/sparc/v8.S @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - v8.S - Copyright (c) 1996, 1997, 2003, 2004, 2008 Red Hat, Inc. + v8.S - Copyright (c) 2013 The Written Word, Inc. + Copyright (c) 1996, 1997, 2003, 2004, 2008 Red Hat, Inc. 
SPARC Foreign Function Interface @@ -31,11 +32,39 @@ #define STACKFRAME 96 /* Minimum stack framesize for SPARC */ #define ARGS (64+4) /* Offset of register area in frame */ -.text +#ifndef __GNUC__ + .text + .align 8 +.globl ffi_flush_icache +.globl _ffi_flush_icache + +ffi_flush_icache: +_ffi_flush_icache: + add %o0, %o1, %o2 +#ifdef SPARC64 +1: flush %o0 +#else +1: iflush %o0 +#endif + add %o0, 8, %o0 + cmp %o0, %o2 + blt 1b + nop + nop + nop + nop + nop + retl + nop +.ffi_flush_icache_end: + .size ffi_flush_icache,.ffi_flush_icache_end-ffi_flush_icache +#endif + + .text .align 8 .globl ffi_call_v8 .globl _ffi_call_v8 - + ffi_call_v8: _ffi_call_v8: .LLFB1: diff --git a/Modules/_ctypes/libffi/src/tile/ffi.c b/Modules/_ctypes/libffi/src/tile/ffi.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/tile/ffi.c @@ -0,0 +1,355 @@ +/* ----------------------------------------------------------------------- + ffi.c - Copyright (c) 2012 Tilera Corp. + + TILE Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#include +#include +#include +#include +#include +#include +#include +#include + + +/* The first 10 registers are used to pass arguments and return values. */ +#define NUM_ARG_REGS 10 + +/* Performs a raw function call with the given NUM_ARG_REGS register arguments + and the specified additional stack arguments (if any). */ +extern void ffi_call_tile(ffi_sarg reg_args[NUM_ARG_REGS], + const ffi_sarg *stack_args, + size_t stack_args_bytes, + void (*fnaddr)(void)) + FFI_HIDDEN; + +/* This handles the raw call from the closure stub, cleaning up the + parameters and delegating to ffi_closure_tile_inner. */ +extern void ffi_closure_tile(void) FFI_HIDDEN; + + +ffi_status +ffi_prep_cif_machdep(ffi_cif *cif) +{ + /* We always allocate room for all registers. Even if we don't + use them as parameters, they get returned in the same array + as struct return values so we need to make room. */ + if (cif->bytes < NUM_ARG_REGS * FFI_SIZEOF_ARG) + cif->bytes = NUM_ARG_REGS * FFI_SIZEOF_ARG; + + if (cif->rtype->size > NUM_ARG_REGS * FFI_SIZEOF_ARG) + cif->flags = FFI_TYPE_STRUCT; + else + cif->flags = FFI_TYPE_INT; + + /* Nothing to do. 
*/ + return FFI_OK; +} + + +static long +assign_to_ffi_arg(ffi_sarg *out, void *in, const ffi_type *type, + int write_to_reg) +{ + switch (type->type) + { + case FFI_TYPE_SINT8: + *out = *(SINT8 *)in; + return 1; + + case FFI_TYPE_UINT8: + *out = *(UINT8 *)in; + return 1; + + case FFI_TYPE_SINT16: + *out = *(SINT16 *)in; + return 1; + + case FFI_TYPE_UINT16: + *out = *(UINT16 *)in; + return 1; + + case FFI_TYPE_SINT32: + case FFI_TYPE_UINT32: +#ifndef __LP64__ + case FFI_TYPE_POINTER: +#endif + /* Note that even unsigned 32-bit quantities are sign extended + on tilegx when stored in a register. */ + *out = *(SINT32 *)in; + return 1; + + case FFI_TYPE_FLOAT: +#ifdef __tilegx__ + if (write_to_reg) + { + /* Properly sign extend the value. */ + union { float f; SINT32 s32; } val; + val.f = *(float *)in; + *out = val.s32; + } + else +#endif + { + *(float *)out = *(float *)in; + } + return 1; + + case FFI_TYPE_SINT64: + case FFI_TYPE_UINT64: + case FFI_TYPE_DOUBLE: +#ifdef __LP64__ + case FFI_TYPE_POINTER: +#endif + *(UINT64 *)out = *(UINT64 *)in; + return sizeof(UINT64) / FFI_SIZEOF_ARG; + + case FFI_TYPE_STRUCT: + memcpy(out, in, type->size); + return (type->size + FFI_SIZEOF_ARG - 1) / FFI_SIZEOF_ARG; + + case FFI_TYPE_VOID: + /* Must be a return type. Nothing to do. */ + return 0; + + default: + FFI_ASSERT(0); + return -1; + } +} + + +void +ffi_call(ffi_cif *cif, void (*fn)(void), void *rvalue, void **avalue) +{ + ffi_sarg * const arg_mem = alloca(cif->bytes); + ffi_sarg * const reg_args = arg_mem; + ffi_sarg * const stack_args = ®_args[NUM_ARG_REGS]; + ffi_sarg *argp = arg_mem; + ffi_type ** const arg_types = cif->arg_types; + const long num_args = cif->nargs; + long i; + + if (cif->flags == FFI_TYPE_STRUCT) + { + /* Pass a hidden pointer to the return value. We make sure there + is scratch space for the callee to store the return value even if + our caller doesn't care about it. */ + *argp++ = (intptr_t)(rvalue ? 
rvalue : alloca(cif->rtype->size)); + + /* No more work needed to return anything. */ + rvalue = NULL; + } + + for (i = 0; i < num_args; i++) + { + ffi_type *type = arg_types[i]; + void * const arg_in = avalue[i]; + ptrdiff_t arg_word = argp - arg_mem; + +#ifndef __tilegx__ + /* Doubleword-aligned values are always in an even-number register + pair, or doubleword-aligned stack slot if out of registers. */ + long align = arg_word & (type->alignment > FFI_SIZEOF_ARG); + argp += align; + arg_word += align; +#endif + + if (type->type == FFI_TYPE_STRUCT) + { + const size_t arg_size_in_words = + (type->size + FFI_SIZEOF_ARG - 1) / FFI_SIZEOF_ARG; + + if (arg_word < NUM_ARG_REGS && + arg_word + arg_size_in_words > NUM_ARG_REGS) + { + /* Args are not allowed to span registers and the stack. */ + argp = stack_args; + } + + memcpy(argp, arg_in, type->size); + argp += arg_size_in_words; + } + else + { + argp += assign_to_ffi_arg(argp, arg_in, arg_types[i], 1); + } + } + + /* Actually do the call. */ + ffi_call_tile(reg_args, stack_args, + cif->bytes - (NUM_ARG_REGS * FFI_SIZEOF_ARG), fn); + + if (rvalue != NULL) + assign_to_ffi_arg(rvalue, reg_args, cif->rtype, 0); +} + + +/* Template code for closure. */ +extern const UINT64 ffi_template_tramp_tile[] FFI_HIDDEN; + + +ffi_status +ffi_prep_closure_loc (ffi_closure *closure, + ffi_cif *cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) +{ +#ifdef __tilegx__ + /* TILE-Gx */ + SINT64 c; + SINT64 h; + int s; + UINT64 *out; + + if (cif->abi != FFI_UNIX) + return FFI_BAD_ABI; + + out = (UINT64 *)closure->tramp; + + c = (intptr_t)closure; + h = (intptr_t)ffi_closure_tile; + s = 0; + + /* Find the smallest shift count that doesn't lose information + (i.e. no need to explicitly insert high bits of the address that + are just the sign extension of the low bits). 
*/ + while ((c >> s) != (SINT16)(c >> s) || (h >> s) != (SINT16)(h >> s)) + s += 16; + +#define OPS(a, b, shift) \ + (create_Imm16_X0((a) >> (shift)) | create_Imm16_X1((b) >> (shift))) + + /* Emit the moveli. */ + *out++ = ffi_template_tramp_tile[0] | OPS(c, h, s); + for (s -= 16; s >= 0; s -= 16) + *out++ = ffi_template_tramp_tile[1] | OPS(c, h, s); + +#undef OPS + + *out++ = ffi_template_tramp_tile[2]; + +#else + /* TILEPro */ + UINT64 *out; + intptr_t delta; + + if (cif->abi != FFI_UNIX) + return FFI_BAD_ABI; + + out = (UINT64 *)closure->tramp; + delta = (intptr_t)ffi_closure_tile - (intptr_t)codeloc; + + *out++ = ffi_template_tramp_tile[0] | create_JOffLong_X1(delta >> 3); +#endif + + closure->cif = cif; + closure->fun = fun; + closure->user_data = user_data; + + invalidate_icache(closure->tramp, (char *)out - closure->tramp, + getpagesize()); + + return FFI_OK; +} + + +/* This is called by the assembly wrapper for closures. This does + all of the work. On entry reg_args[0] holds the values the registers + had when the closure was invoked. On return reg_args[1] holds the register + values to be returned to the caller (many of which may be garbage). */ +void FFI_HIDDEN +ffi_closure_tile_inner(ffi_closure *closure, + ffi_sarg reg_args[2][NUM_ARG_REGS], + ffi_sarg *stack_args) +{ + ffi_cif * const cif = closure->cif; + void ** const avalue = alloca(cif->nargs * sizeof(void *)); + void *rvalue; + ffi_type ** const arg_types = cif->arg_types; + ffi_sarg * const reg_args_in = reg_args[0]; + ffi_sarg * const reg_args_out = reg_args[1]; + ffi_sarg * argp; + long i, arg_word, nargs = cif->nargs; + /* Use a union to guarantee proper alignment for double. */ + union { ffi_sarg arg[NUM_ARG_REGS]; double d; UINT64 u64; } closure_ret; + + /* Start out reading register arguments. */ + argp = reg_args_in; + + /* Copy the caller's structure return address so that the closure + returns the data directly to the caller.
*/ + if (cif->flags == FFI_TYPE_STRUCT) + { + /* Return by reference via hidden pointer. */ + rvalue = (void *)(intptr_t)*argp++; + arg_word = 1; + } + else + { + /* Return the value in registers. */ + rvalue = &closure_ret; + arg_word = 0; + } + + /* Grab the addresses of the arguments. */ + for (i = 0; i < nargs; i++) + { + ffi_type * const type = arg_types[i]; + const size_t arg_size_in_words = + (type->size + FFI_SIZEOF_ARG - 1) / FFI_SIZEOF_ARG; + +#ifndef __tilegx__ + /* Doubleword-aligned values are always in an even-number register + pair, or doubleword-aligned stack slot if out of registers. */ + long align = arg_word & (type->alignment > FFI_SIZEOF_ARG); + argp += align; + arg_word += align; +#endif + + if (arg_word == NUM_ARG_REGS || + (arg_word < NUM_ARG_REGS && + arg_word + arg_size_in_words > NUM_ARG_REGS)) + { + /* Switch to reading arguments from the stack. */ + argp = stack_args; + arg_word = NUM_ARG_REGS; + } + + avalue[i] = argp; + argp += arg_size_in_words; + arg_word += arg_size_in_words; + } + + /* Invoke the closure. */ + closure->fun(cif, rvalue, avalue, closure->user_data); + + if (cif->flags != FFI_TYPE_STRUCT) + { + /* Canonicalize for register representation. */ + assign_to_ffi_arg(reg_args_out, &closure_ret, cif->rtype, 1); + } +} diff --git a/Modules/_ctypes/libffi/src/tile/ffitarget.h b/Modules/_ctypes/libffi/src/tile/ffitarget.h new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/tile/ffitarget.h @@ -0,0 +1,65 @@ +/* -----------------------------------------------------------------*-C-*- + ffitarget.h - Copyright (c) 2012 Tilera Corp. + Target configuration macros for TILE. 
+ + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +#ifndef LIBFFI_H +#error "Please do not include ffitarget.h directly into your source. Use ffi.h instead." +#endif + +#ifndef LIBFFI_ASM + +#include + +typedef uint_reg_t ffi_arg; +typedef int_reg_t ffi_sarg; + +typedef enum ffi_abi { + FFI_FIRST_ABI = 0, + FFI_UNIX, + FFI_LAST_ABI, + FFI_DEFAULT_ABI = FFI_UNIX +} ffi_abi; +#endif + +/* ---- Definitions for closures ----------------------------------------- */ +#define FFI_CLOSURES 1 + +#ifdef __tilegx__ +/* We always pass 8-byte values, even in -m32 mode. 
*/ +# define FFI_SIZEOF_ARG 8 +# ifdef __LP64__ +# define FFI_TRAMPOLINE_SIZE (8 * 5) /* 5 bundles */ +# else +# define FFI_TRAMPOLINE_SIZE (8 * 3) /* 3 bundles */ +# endif +#else +# define FFI_SIZEOF_ARG 4 +# define FFI_TRAMPOLINE_SIZE 8 /* 1 bundle */ +#endif +#define FFI_NATIVE_RAW_API 0 + +#endif diff --git a/Modules/_ctypes/libffi/src/tile/tile.S b/Modules/_ctypes/libffi/src/tile/tile.S new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/tile/tile.S @@ -0,0 +1,360 @@ +/* ----------------------------------------------------------------------- + tile.S - Copyright (c) 2011 Tilera Corp. + + Tilera TILEPro and TILE-Gx Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#define LIBFFI_ASM +#include +#include + +/* Number of bytes in a register. */ +#define REG_SIZE FFI_SIZEOF_ARG + +/* Number of bytes in stack linkage area for backtracing. 
+ + A note about the ABI: on entry to a procedure, sp points to a stack + slot where it must spill the return address if it's not a leaf. + REG_SIZE bytes beyond that is a slot owned by the caller which + contains the sp value that the caller had when it was originally + entered (i.e. the caller's frame pointer). */ +#define LINKAGE_SIZE (2 * REG_SIZE) + +/* The first 10 registers are used to pass arguments and return values. */ +#define NUM_ARG_REGS 10 + +#ifdef __tilegx__ +#define SW st +#define LW ld +#define BGZT bgtzt +#else +#define SW sw +#define LW lw +#define BGZT bgzt +#endif + + +/* void ffi_call_tile (int_reg_t reg_args[NUM_ARG_REGS], + const int_reg_t *stack_args, + unsigned long stack_args_bytes, + void (*fnaddr)(void)); + + On entry, REG_ARGS contain the outgoing register values, + and STACK_ARGS contains STACK_ARG_BYTES of additional values + to be passed on the stack. If STACK_ARG_BYTES is zero, then + STACK_ARGS is ignored. + + When the invoked function returns, the values of r0-r9 are + blindly stored back into REG_ARGS for the caller to examine. */ + + .section .text.ffi_call_tile, "ax", @progbits + .align 8 + .globl ffi_call_tile + FFI_HIDDEN(ffi_call_tile) +ffi_call_tile: + +/* Incoming arguments. */ +#define REG_ARGS r0 +#define INCOMING_STACK_ARGS r1 +#define STACK_ARG_BYTES r2 +#define ORIG_FNADDR r3 + +/* Temporary values. */ +#define FRAME_SIZE r10 +#define TMP r11 +#define TMP2 r12 +#define OUTGOING_STACK_ARGS r13 +#define REG_ADDR_PTR r14 +#define RETURN_REG_ADDR r15 +#define FNADDR r16 + + .cfi_startproc + { + /* Save return address. */ + SW sp, lr + .cfi_offset lr, 0 + /* Prepare to spill incoming r52. */ + addi TMP, sp, -REG_SIZE + /* Increase frame size to have room to spill r52 and REG_ARGS. + The +7 is to round up mod 8. */ + addi FRAME_SIZE, STACK_ARG_BYTES, \ + REG_SIZE + REG_SIZE + LINKAGE_SIZE + 7 + } + { + /* Round stack frame size to a multiple of 8 to satisfy ABI.
*/ + andi FRAME_SIZE, FRAME_SIZE, -8 + /* Compute where to spill REG_ARGS value. */ + addi TMP2, sp, -(REG_SIZE * 2) + } + { + /* Spill incoming r52. */ + SW TMP, r52 + .cfi_offset r52, -REG_SIZE + /* Set up our frame pointer. */ + move r52, sp + .cfi_def_cfa_register r52 + /* Push stack frame. */ + sub sp, sp, FRAME_SIZE + } + { + /* Prepare to set up stack linkage. */ + addi TMP, sp, REG_SIZE + /* Prepare to memcpy stack args. */ + addi OUTGOING_STACK_ARGS, sp, LINKAGE_SIZE + /* Save REG_ARGS which we will need after we call the subroutine. */ + SW TMP2, REG_ARGS + } + { + /* Set up linkage info to hold incoming stack pointer. */ + SW TMP, r52 + } + { + /* Skip stack args memcpy if we don't have any stack args (common). */ + blezt STACK_ARG_BYTES, .Ldone_stack_args_memcpy + } + +.Lmemcpy_stack_args: + { + /* Load incoming argument from stack_args. */ + LW TMP, INCOMING_STACK_ARGS + addi INCOMING_STACK_ARGS, INCOMING_STACK_ARGS, REG_SIZE + } + { + /* Store stack argument into outgoing stack argument area. */ + SW OUTGOING_STACK_ARGS, TMP + addi OUTGOING_STACK_ARGS, OUTGOING_STACK_ARGS, REG_SIZE + addi STACK_ARG_BYTES, STACK_ARG_BYTES, -REG_SIZE + } + { + BGZT STACK_ARG_BYTES, .Lmemcpy_stack_args + } +.Ldone_stack_args_memcpy: + + { + /* Copy aside ORIG_FNADDR so we can overwrite its register. */ + move FNADDR, ORIG_FNADDR + /* Prepare to load argument registers. */ + addi REG_ADDR_PTR, r0, REG_SIZE + /* Load outgoing r0. */ + LW r0, r0 + } + + /* Load up argument registers from the REG_ARGS array. */ +#define LOAD_REG(REG, PTR) \ + { \ + LW REG, PTR ; \ + addi PTR, PTR, REG_SIZE \ + } + + LOAD_REG(r1, REG_ADDR_PTR) + LOAD_REG(r2, REG_ADDR_PTR) + LOAD_REG(r3, REG_ADDR_PTR) + LOAD_REG(r4, REG_ADDR_PTR) + LOAD_REG(r5, REG_ADDR_PTR) + LOAD_REG(r6, REG_ADDR_PTR) + LOAD_REG(r7, REG_ADDR_PTR) + LOAD_REG(r8, REG_ADDR_PTR) + LOAD_REG(r9, REG_ADDR_PTR) + + { + /* Call the subroutine. */ + jalr FNADDR + } + + { + /* Restore original lr. 
*/ + LW lr, r52 + /* Prepare to recover ARGS, which we spilled earlier. */ + addi TMP, r52, -(2 * REG_SIZE) + } + { + /* Restore ARGS, so we can fill it in with the return regs r0-r9. */ + LW RETURN_REG_ADDR, TMP + /* Prepare to restore original r52. */ + addi TMP, r52, -REG_SIZE + } + + { + /* Pop stack frame. */ + move sp, r52 + /* Restore original r52. */ + LW r52, TMP + } + +#define STORE_REG(REG, PTR) \ + { \ + SW PTR, REG ; \ + addi PTR, PTR, REG_SIZE \ + } + + /* Return all register values by reference. */ + STORE_REG(r0, RETURN_REG_ADDR) + STORE_REG(r1, RETURN_REG_ADDR) + STORE_REG(r2, RETURN_REG_ADDR) + STORE_REG(r3, RETURN_REG_ADDR) + STORE_REG(r4, RETURN_REG_ADDR) + STORE_REG(r5, RETURN_REG_ADDR) + STORE_REG(r6, RETURN_REG_ADDR) + STORE_REG(r7, RETURN_REG_ADDR) + STORE_REG(r8, RETURN_REG_ADDR) + STORE_REG(r9, RETURN_REG_ADDR) + + { + jrp lr + } + + .cfi_endproc + .size ffi_call_tile, .-ffi_call_tile + +/* ffi_closure_tile(...) + + On entry, lr points to the closure plus 8 bytes, and r10 + contains the actual return address. + + This function simply dumps all register parameters into a stack array + and passes the closure, the registers array, and the stack arguments + to C code that does all of the actual closure processing. */ + + .section .text.ffi_closure_tile, "ax", @progbits + .align 8 + .globl ffi_closure_tile + FFI_HIDDEN(ffi_closure_tile) + + .cfi_startproc +/* Room to spill all NUM_ARG_REGS incoming registers, plus frame linkage. */ +#define CLOSURE_FRAME_SIZE (((NUM_ARG_REGS * REG_SIZE * 2 + LINKAGE_SIZE) + 7) & -8) +ffi_closure_tile: + { +#ifdef __tilegx__ + st sp, lr + .cfi_offset lr, 0 +#else + /* Save return address (in r10 due to closure stub wrapper). */ + SW sp, r10 + .cfi_return_column r10 + .cfi_offset r10, 0 +#endif + /* Compute address for stack frame linkage. */ + addli r10, sp, -(CLOSURE_FRAME_SIZE - REG_SIZE) + } + { + /* Save incoming stack pointer in linkage area. 
*/ + SW r10, sp + .cfi_offset sp, -(CLOSURE_FRAME_SIZE - REG_SIZE) + /* Push a new stack frame. */ + addli sp, sp, -CLOSURE_FRAME_SIZE + .cfi_adjust_cfa_offset CLOSURE_FRAME_SIZE + } + + { + /* Create pointer to where to start spilling registers. */ + addi r10, sp, LINKAGE_SIZE + } + + /* Spill all the incoming registers. */ + STORE_REG(r0, r10) + STORE_REG(r1, r10) + STORE_REG(r2, r10) + STORE_REG(r3, r10) + STORE_REG(r4, r10) + STORE_REG(r5, r10) + STORE_REG(r6, r10) + STORE_REG(r7, r10) + STORE_REG(r8, r10) + { + /* Save r9. */ + SW r10, r9 +#ifdef __tilegx__ + /* Pointer to closure is passed in r11. */ + move r0, r11 +#else + /* Compute pointer to the closure object. Because the closure + starts with a "jal ffi_closure_tile", we can just take the + value of lr (a phony return address pointing into the closure) + and subtract 8. */ + addi r0, lr, -8 +#endif + /* Compute a pointer to the register arguments we just spilled. */ + addi r1, sp, LINKAGE_SIZE + } + { + /* Compute a pointer to the extra stack arguments (if any). */ + addli r2, sp, CLOSURE_FRAME_SIZE + LINKAGE_SIZE + /* Call C code to deal with all of the grotty details. */ + jal ffi_closure_tile_inner + } + { + addli r10, sp, CLOSURE_FRAME_SIZE + } + { + /* Restore the return address. */ + LW lr, r10 + /* Compute pointer to registers array. */ + addli r10, sp, LINKAGE_SIZE + (NUM_ARG_REGS * REG_SIZE) + } + /* Return all the register values, which C code may have set. */ + LOAD_REG(r0, r10) + LOAD_REG(r1, r10) + LOAD_REG(r2, r10) + LOAD_REG(r3, r10) + LOAD_REG(r4, r10) + LOAD_REG(r5, r10) + LOAD_REG(r6, r10) + LOAD_REG(r7, r10) + LOAD_REG(r8, r10) + LOAD_REG(r9, r10) + { + /* Pop the frame. */ + addli sp, sp, CLOSURE_FRAME_SIZE + jrp lr + } + + .cfi_endproc + .size ffi_closure_tile, . - ffi_closure_tile + + +/* What follows are code template instructions that get copied to the + closure trampoline by ffi_prep_closure_loc. The zeroed operands + get replaced by their proper values at runtime. 
*/ + + .section .text.ffi_template_tramp_tile, "ax", @progbits + .align 8 + .globl ffi_template_tramp_tile + FFI_HIDDEN(ffi_template_tramp_tile) +ffi_template_tramp_tile: +#ifdef __tilegx__ + { + moveli r11, 0 /* backpatched to address of containing closure. */ + moveli r10, 0 /* backpatched to ffi_closure_tile. */ + } + /* Note: the following bundle gets generated multiple times + depending on the pointer value (esp. useful for -m32 mode). */ + { shl16insli r11, r11, 0 ; shl16insli r10, r10, 0 } + { info 2+8 /* for backtracer: -> pc in lr, frame size 0 */ ; jr r10 } +#else + /* 'jal .' yields a PC-relative offset of zero so we can OR in the + right offset at runtime. */ + { move r10, lr ; jal . /* ffi_closure_tile */ } +#endif + + .size ffi_template_tramp_tile, . - ffi_template_tramp_tile diff --git a/Modules/_ctypes/libffi/src/x86/ffi.c b/Modules/_ctypes/libffi/src/x86/ffi.c --- a/Modules/_ctypes/libffi/src/x86/ffi.c +++ b/Modules/_ctypes/libffi/src/x86/ffi.c @@ -424,7 +424,7 @@ /** private members **/ /* The following __attribute__((regparm(1))) decorations will have no effect - on MSVC - standard cdecl convention applies. */ + on MSVC or SUNPRO_C -- standard conventions apply. */ static void ffi_prep_incoming_args_SYSV (char *stack, void **ret, void** args, ffi_cif* cif); void FFI_HIDDEN ffi_closure_SYSV (ffi_closure *) diff --git a/Modules/_ctypes/libffi/src/x86/ffi64.c b/Modules/_ctypes/libffi/src/x86/ffi64.c --- a/Modules/_ctypes/libffi/src/x86/ffi64.c +++ b/Modules/_ctypes/libffi/src/x86/ffi64.c @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - ffi64.c - Copyright (c) 20011 Anthony Green + ffi64.c - Copyright (c) 2013 The Written Word, Inc. + Copyright (c) 2011 Anthony Green Copyright (c) 2008, 2010 Red Hat, Inc. 
Copyright (c) 2002, 2007 Bo Thorsen @@ -37,17 +38,29 @@ #define MAX_GPR_REGS 6 #define MAX_SSE_REGS 8 -#ifdef __INTEL_COMPILER +#if defined(__INTEL_COMPILER) #define UINT128 __m128 #else +#if defined(__SUNPRO_C) +#include +#define UINT128 __m128i +#else #define UINT128 __int128_t #endif +#endif + +union big_int_union +{ + UINT32 i32; + UINT64 i64; + UINT128 i128; +}; struct register_args { /* Registers for argument passing. */ UINT64 gpr[MAX_GPR_REGS]; - UINT128 sse[MAX_SSE_REGS]; + union big_int_union sse[MAX_SSE_REGS]; }; extern void ffi_call_unix64 (void *args, unsigned long bytes, unsigned flags, @@ -471,16 +484,33 @@ { case X86_64_INTEGER_CLASS: case X86_64_INTEGERSI_CLASS: - reg_args->gpr[gprcount] = 0; - memcpy (®_args->gpr[gprcount], a, size < 8 ? size : 8); + /* Sign-extend integer arguments passed in general + purpose registers, to cope with the fact that + LLVM incorrectly assumes that this will be done + (the x86-64 PS ABI does not specify this). */ + switch (arg_types[i]->type) + { + case FFI_TYPE_SINT8: + *(SINT64 *)®_args->gpr[gprcount] = (SINT64) *((SINT8 *) a); + break; + case FFI_TYPE_SINT16: + *(SINT64 *)®_args->gpr[gprcount] = (SINT64) *((SINT16 *) a); + break; + case FFI_TYPE_SINT32: + *(SINT64 *)®_args->gpr[gprcount] = (SINT64) *((SINT32 *) a); + break; + default: + reg_args->gpr[gprcount] = 0; + memcpy (®_args->gpr[gprcount], a, size < 8 ? 
size : 8); + } gprcount++; break; case X86_64_SSE_CLASS: case X86_64_SSEDF_CLASS: - reg_args->sse[ssecount++] = *(UINT64 *) a; + reg_args->sse[ssecount++].i64 = *(UINT64 *) a; break; case X86_64_SSESF_CLASS: - reg_args->sse[ssecount++] = *(UINT32 *) a; + reg_args->sse[ssecount++].i32 = *(UINT32 *) a; break; default: abort(); diff --git a/Modules/_ctypes/libffi/src/x86/ffitarget.h b/Modules/_ctypes/libffi/src/x86/ffitarget.h --- a/Modules/_ctypes/libffi/src/x86/ffitarget.h +++ b/Modules/_ctypes/libffi/src/x86/ffitarget.h @@ -61,8 +61,9 @@ typedef long long ffi_sarg; #endif #else -#if defined __x86_64__ && !defined __LP64__ +#if defined __x86_64__ && defined __ILP32__ #define FFI_SIZEOF_ARG 8 +#define FFI_SIZEOF_JAVA_RAW 4 typedef unsigned long long ffi_arg; typedef long long ffi_sarg; #else diff --git a/Modules/_ctypes/libffi/src/x86/sysv.S b/Modules/_ctypes/libffi/src/x86/sysv.S --- a/Modules/_ctypes/libffi/src/x86/sysv.S +++ b/Modules/_ctypes/libffi/src/x86/sysv.S @@ -1,5 +1,6 @@ /* ----------------------------------------------------------------------- - sysv.S - Copyright (c) 1996, 1998, 2001-2003, 2005, 2008, 2010 Red Hat, Inc. + sysv.S - Copyright (c) 2013 The Written Word, Inc. + - Copyright (c) 1996,1998,2001-2003,2005,2008,2010 Red Hat, Inc. X86 Foreign Function Interface @@ -181,9 +182,19 @@ leal -24(%ebp), %edx movl %edx, -12(%ebp) /* resp */ leal 8(%ebp), %edx +#ifdef __SUNPRO_C + /* The SUNPRO compiler doesn't support GCC's regparm function + attribute, so we have to pass all three arguments to + ffi_closure_SYSV_inner on the stack. 
*/ + movl %edx, 8(%esp) /* args = __builtin_dwarf_cfa () */ + leal -12(%ebp), %edx + movl %edx, 4(%esp) /* &resp */ + movl %eax, (%esp) /* closure */ +#else movl %edx, 4(%esp) /* args = __builtin_dwarf_cfa () */ leal -12(%ebp), %edx movl %edx, (%esp) /* &resp */ +#endif #if defined HAVE_HIDDEN_VISIBILITY_ATTRIBUTE || !defined __PIC__ call ffi_closure_SYSV_inner #else @@ -328,6 +339,9 @@ .size ffi_closure_raw_SYSV, .-ffi_closure_raw_SYSV #endif +#if defined __GNUC__ +/* Only emit dwarf unwind info when building with GNU toolchain. */ + #if defined __PIC__ # if defined __sun__ && defined __svr4__ /* 32-bit Solaris 2/x86 uses datarel encoding for PIC. GNU ld before 2.22 @@ -460,6 +474,7 @@ .LEFDE3: #endif +#endif #endif /* ifndef __x86_64__ */ diff --git a/Modules/_ctypes/libffi/src/x86/unix64.S b/Modules/_ctypes/libffi/src/x86/unix64.S --- a/Modules/_ctypes/libffi/src/x86/unix64.S +++ b/Modules/_ctypes/libffi/src/x86/unix64.S @@ -1,6 +1,7 @@ /* ----------------------------------------------------------------------- - unix64.S - Copyright (c) 2002 Bo Thorsen - Copyright (c) 2008 Red Hat, Inc + unix64.S - Copyright (c) 2013 The Written Word, Inc. + - Copyright (c) 2008 Red Hat, Inc + - Copyright (c) 2002 Bo Thorsen x86-64 Foreign Function Interface @@ -324,6 +325,9 @@ .LUW9: .size ffi_closure_unix64,.-ffi_closure_unix64 +#ifdef __GNUC__ +/* Only emit DWARF unwind info when building with the GNU toolchain. */ + #ifdef HAVE_AS_X86_64_UNWIND_SECTION_TYPE .section .eh_frame,"a", at unwind #else @@ -419,6 +423,8 @@ .align 8 .LEFDE3: +#endif /* __GNUC__ */ + #endif /* __x86_64__ */ #if defined __ELF__ && defined __linux__ diff --git a/Modules/_ctypes/libffi/src/xtensa/ffi.c b/Modules/_ctypes/libffi/src/xtensa/ffi.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/xtensa/ffi.c @@ -0,0 +1,298 @@ +/* ----------------------------------------------------------------------- + ffi.c - Copyright (c) 2013 Tensilica, Inc. 
+ + XTENSA Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#include +#include + +/* + |----------------------------------------| + | | + on entry to ffi_call ----> |----------------------------------------| + | caller stack frame for registers a0-a3 | + |----------------------------------------| + | | + | additional arguments | + entry of the function ---> |----------------------------------------| + | copy of function arguments a2-a7 | + | - - - - - - - - - - - - - | + | | + + The area below the entry line becomes the new stack frame for the function. 
+ +*/ + + +#define FFI_TYPE_STRUCT_REGS FFI_TYPE_LAST + + +extern void ffi_call_SYSV(void *rvalue, unsigned rsize, unsigned flags, + void(*fn)(void), unsigned nbytes, extended_cif*); +extern void ffi_closure_SYSV(void) FFI_HIDDEN; + +ffi_status ffi_prep_cif_machdep(ffi_cif *cif) +{ + switch(cif->rtype->type) { + case FFI_TYPE_SINT8: + case FFI_TYPE_UINT8: + case FFI_TYPE_SINT16: + case FFI_TYPE_UINT16: + cif->flags = cif->rtype->type; + break; + case FFI_TYPE_VOID: + case FFI_TYPE_FLOAT: + cif->flags = FFI_TYPE_UINT32; + break; + case FFI_TYPE_DOUBLE: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + cif->flags = FFI_TYPE_UINT64; // cif->rtype->type; + break; + case FFI_TYPE_STRUCT: + cif->flags = FFI_TYPE_STRUCT; //_REGS; + /* Up to 16 bytes are returned in registers */ + if (cif->rtype->size > 4 * 4) { + /* returned structure is referenced by a register; use 8 bytes + (including 4 bytes for potential additional alignment) */ + cif->flags = FFI_TYPE_STRUCT; + cif->bytes += 8; + } + break; + + default: + cif->flags = FFI_TYPE_UINT32; + break; + } + + /* Round the stack up to a full 4 register frame, just in case + (we use this size in movsp). This way, it's also a multiple of + 8 bytes for 64-bit arguments. 
*/ + cif->bytes = ALIGN(cif->bytes, 16); + + return FFI_OK; +} + +void ffi_prep_args(extended_cif *ecif, unsigned char* stack) +{ + unsigned int i; + unsigned long *addr; + ffi_type **ptr; + + union { + void **v; + char **c; + signed char **sc; + unsigned char **uc; + signed short **ss; + unsigned short **us; + unsigned int **i; + long long **ll; + float **f; + double **d; + } p_argv; + + /* Verify that everything is aligned up properly */ + FFI_ASSERT (((unsigned long) stack & 0x7) == 0); + + p_argv.v = ecif->avalue; + addr = (unsigned long*)stack; + + /* structures with a size greater than 16 bytes are passed in memory */ + if (ecif->cif->rtype->type == FFI_TYPE_STRUCT && ecif->cif->rtype->size > 16) + { + *addr++ = (unsigned long)ecif->rvalue; + } + + for (i = ecif->cif->nargs, ptr = ecif->cif->arg_types; + i > 0; + i--, ptr++, p_argv.v++) + { + switch ((*ptr)->type) + { + case FFI_TYPE_SINT8: + *addr++ = **p_argv.sc; + break; + case FFI_TYPE_UINT8: + *addr++ = **p_argv.uc; + break; + case FFI_TYPE_SINT16: + *addr++ = **p_argv.ss; + break; + case FFI_TYPE_UINT16: + *addr++ = **p_argv.us; + break; + case FFI_TYPE_FLOAT: + case FFI_TYPE_INT: + case FFI_TYPE_UINT32: + case FFI_TYPE_SINT32: + case FFI_TYPE_POINTER: + *addr++ = **p_argv.i; + break; + case FFI_TYPE_DOUBLE: + case FFI_TYPE_UINT64: + case FFI_TYPE_SINT64: + if (((unsigned long)addr & 4) != 0) + addr++; + *(unsigned long long*)addr = **p_argv.ll; + addr += sizeof(unsigned long long) / sizeof (addr); + break; + + case FFI_TYPE_STRUCT: + { + unsigned long offs; + unsigned long size; + + if (((unsigned long)addr & 4) != 0 && (*ptr)->alignment > 4) + addr++; + + offs = (unsigned long) addr - (unsigned long) stack; + size = (*ptr)->size; + + /* Entire structure must fit the argument registers or referenced */ + if (offs < FFI_REGISTER_NARGS * 4 + && offs + size > FFI_REGISTER_NARGS * 4) + addr = (unsigned long*) (stack + FFI_REGISTER_NARGS * 4); + + memcpy((char*) addr, *p_argv.c, size); + addr += (size + 3) 
/ 4; + break; + } + + default: + FFI_ASSERT(0); + } + } +} + + +void ffi_call(ffi_cif* cif, void(*fn)(void), void *rvalue, void **avalue) +{ + extended_cif ecif; + unsigned long rsize = cif->rtype->size; + int flags = cif->flags; + void *alloc = NULL; + + ecif.cif = cif; + ecif.avalue = avalue; + + /* Note that for structures that are returned in registers (size <= 16 bytes) + we allocate a temporary buffer and use memcpy to copy it to the final + destination. The reason is that the target address might be misaligned or + the length not a multiple of 4 bytes. Handling all those cases would be + very complex. */ + + if (flags == FFI_TYPE_STRUCT && (rsize <= 16 || rvalue == NULL)) + { + alloc = alloca(ALIGN(rsize, 4)); + ecif.rvalue = alloc; + } + else + { + ecif.rvalue = rvalue; + } + + if (cif->abi != FFI_SYSV) + FFI_ASSERT(0); + + ffi_call_SYSV (ecif.rvalue, rsize, cif->flags, fn, cif->bytes, &ecif); + + if (alloc != NULL && rvalue != NULL) + memcpy(rvalue, alloc, rsize); +} + +extern void ffi_trampoline(); +extern void ffi_cacheflush(void* start, void* end); + +ffi_status +ffi_prep_closure_loc (ffi_closure* closure, + ffi_cif* cif, + void (*fun)(ffi_cif*, void*, void**, void*), + void *user_data, + void *codeloc) +{ + /* copy trampoline to stack and patch 'ffi_closure_SYSV' pointer */ + memcpy(closure->tramp, ffi_trampoline, FFI_TRAMPOLINE_SIZE); + *(unsigned int*)(&closure->tramp[8]) = (unsigned int)ffi_closure_SYSV; + + // Do we have this function?
+ // __builtin___clear_cache(closure->tramp, closure->tramp + FFI_TRAMPOLINE_SIZE) + ffi_cacheflush(closure->tramp, closure->tramp + FFI_TRAMPOLINE_SIZE); + + closure->cif = cif; + closure->fun = fun; + closure->user_data = user_data; + return FFI_OK; +} + + +long FFI_HIDDEN +ffi_closure_SYSV_inner(ffi_closure *closure, void **values, void *rvalue) +{ + ffi_cif *cif; + ffi_type **arg_types; + void **avalue; + int i, areg; + + cif = closure->cif; + if (cif->abi != FFI_SYSV) + return FFI_BAD_ABI; + + areg = 0; + + int rtype = cif->rtype->type; + if (rtype == FFI_TYPE_STRUCT && cif->rtype->size > 4 * 4) + { + rvalue = *values; + areg++; + } + + cif = closure->cif; + arg_types = cif->arg_types; + avalue = alloca(cif->nargs * sizeof(void *)); + + for (i = 0; i < cif->nargs; i++) + { + if (arg_types[i]->alignment == 8 && (areg & 1) != 0) + areg++; + + // skip the entry 16,a1 framework, add 16 bytes (4 registers) + if (areg == FFI_REGISTER_NARGS) + areg += 4; + + if (arg_types[i]->type == FFI_TYPE_STRUCT) + { + int numregs = ((arg_types[i]->size + 3) & ~3) / 4; + if (areg < FFI_REGISTER_NARGS && areg + numregs > FFI_REGISTER_NARGS) + areg = FFI_REGISTER_NARGS + 4; + } + + avalue[i] = &values[areg]; + areg += (arg_types[i]->size + 3) / 4; + } + + (closure->fun)(cif, rvalue, avalue, closure->user_data); + + return rtype; +} diff --git a/Modules/_ctypes/libffi/src/xtensa/ffitarget.h b/Modules/_ctypes/libffi/src/xtensa/ffitarget.h new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/xtensa/ffitarget.h @@ -0,0 +1,53 @@ +/* -----------------------------------------------------------------*-C-*- + ffitarget.h - Copyright (c) 2013 Tensilica, Inc. + Target configuration macros for XTENSA.
+ + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. + ----------------------------------------------------------------------- */ + +#ifndef LIBFFI_TARGET_H +#define LIBFFI_TARGET_H + +#ifndef LIBFFI_H +#error "Please do not include ffitarget.h directly into your source. Use ffi.h instead." 
+#endif + +#ifndef LIBFFI_ASM +typedef unsigned long ffi_arg; +typedef signed long ffi_sarg; + +typedef enum ffi_abi { + FFI_FIRST_ABI = 0, + FFI_SYSV, + FFI_LAST_ABI, + FFI_DEFAULT_ABI = FFI_SYSV +} ffi_abi; +#endif + +#define FFI_REGISTER_NARGS 6 + +/* ---- Definitions for closures ----------------------------------------- */ + +#define FFI_CLOSURES 1 +#define FFI_NATIVE_RAW_API 0 +#define FFI_TRAMPOLINE_SIZE 24 + +#endif diff --git a/Modules/_ctypes/libffi/src/xtensa/sysv.S b/Modules/_ctypes/libffi/src/xtensa/sysv.S new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/src/xtensa/sysv.S @@ -0,0 +1,253 @@ +/* ----------------------------------------------------------------------- + sysv.S - Copyright (c) 2013 Tensilica, Inc. + + XTENSA Foreign Function Interface + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + ``Software''), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT + HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, + WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+ ----------------------------------------------------------------------- */ + +#define LIBFFI_ASM +#include +#include + +#define ENTRY(name) .text; .globl name; .type name, at function; .align 4; name: +#define END(name) .size name , . - name + +/* Assert that the table below is in sync with ffi.h. */ + +#if FFI_TYPE_UINT8 != 5 \ + || FFI_TYPE_SINT8 != 6 \ + || FFI_TYPE_UINT16 != 7 \ + || FFI_TYPE_SINT16 != 8 \ + || FFI_TYPE_UINT32 != 9 \ + || FFI_TYPE_SINT32 != 10 \ + || FFI_TYPE_UINT64 != 11 +#error "xtensa/sysv.S out of sync with ffi.h" +#endif + + +/* ffi_call_SYSV (rvalue, rbytes, flags, (*fnaddr)(), bytes, ecif) + void *rvalue; a2 + unsigned long rbytes; a3 + unsigned flags; a4 + void (*fnaddr)(); a5 + unsigned long bytes; a6 + extended_cif* ecif) a7 +*/ + +ENTRY(ffi_call_SYSV) + + entry a1, 32 # 32 byte frame for using call8 below + + mov a10, a7 # a10(->arg0): ecif + sub a11, a1, a6 # a11(->arg1): stack pointer + mov a7, a1 # fp + movsp a1, a11 # set new sp = old_sp - bytes + + movi a8, ffi_prep_args + callx8 a8 # ffi_prep_args(ecif, stack) + + # prepare to move stack pointer back up to 6 arguments + # note that 'bytes' is already aligned + + movi a10, 6*4 + sub a11, a6, a10 + movgez a6, a10, a11 + add a6, a1, a6 + + + # we can pass up to 6 arguments in registers + # for simplicity, just load 6 arguments + # (the stack size is at least 32 bytes, so no risk to cross boundaries) + + l32i a10, a1, 0 + l32i a11, a1, 4 + l32i a12, a1, 8 + l32i a13, a1, 12 + l32i a14, a1, 16 + l32i a15, a1, 20 + + # move stack pointer + + movsp a1, a6 + + callx8 a5 # (*fn)(args...) 
+ + # Handle return value(s) + + beqz a2, .Lexit + + movi a5, FFI_TYPE_STRUCT + bne a4, a5, .Lstore + movi a5, 16 + blt a5, a3, .Lexit + + s32i a10, a2, 0 + blti a3, 5, .Lexit + addi a3, a3, -1 + s32i a11, a2, 4 + blti a3, 8, .Lexit + s32i a12, a2, 8 + blti a3, 12, .Lexit + s32i a13, a2, 12 + +.Lexit: retw + +.Lstore: + addi a4, a4, -FFI_TYPE_UINT8 + bgei a4, 7, .Lexit # should never happen + movi a6, store_calls + add a4, a4, a4 + addx4 a6, a4, a6 # store_table + idx * 8 + jx a6 + + .align 8 +store_calls: + # UINT8 + s8i a10, a2, 0 + retw + + # SINT8 + .align 8 + s8i a10, a2, 0 + retw + + # UINT16 + .align 8 + s16i a10, a2, 0 + retw + + # SINT16 + .align 8 + s16i a10, a2, 0 + retw + + # UINT32 + .align 8 + s32i a10, a2, 0 + retw + + # SINT32 + .align 8 + s32i a10, a2, 0 + retw + + # UINT64 + .align 8 + s32i a10, a2, 0 + s32i a11, a2, 4 + retw + +END(ffi_call_SYSV) + + +/* + * void ffi_cacheflush (unsigned long start, unsigned long end) + */ + +#define EXTRA_ARGS_SIZE 24 + +ENTRY(ffi_cacheflush) + + entry a1, 16 + +1: dhwbi a2, 0 + ihi a2, 0 + addi a2, a2, 4 + blt a2, a3, 1b + + retw + +END(ffi_cacheflush) + +/* ffi_trampoline is copied to the stack */ + +ENTRY(ffi_trampoline) + + entry a1, 16 + (FFI_REGISTER_NARGS * 4) + (4 * 4) # [ 0] + j 2f # [ 3] + .align 4 # [ 6] +1: .long 0 # [ 8] +2: l32r a15, 1b # [12] + _mov a14, a0 # [15] + callx0 a15 # [18] + # [21] +END(ffi_trampoline) + +/* + * ffi_closure() + * + * a0: closure + 21 + * a14: return address (a0) + */ + +ENTRY(ffi_closure_SYSV) + + /* intentionally omitting entry here */ + + # restore return address (a0) and move pointer to closure to a10 + addi a10, a0, -21 + mov a0, a14 + + # allow up to 4 arguments as return values + addi a11, a1, 4 * 4 + + # save up to 6 arguments to stack (allocated by entry below) + s32i a2, a11, 0 + s32i a3, a11, 4 + s32i a4, a11, 8 + s32i a5, a11, 12 + s32i a6, a11, 16 + s32i a7, a11, 20 + + movi a8, ffi_closure_SYSV_inner + mov a12, a1 + callx8 a8 # .._inner(*closure, **avalue, 
*rvalue) + + # load up to four return arguments + l32i a2, a1, 0 + l32i a3, a1, 4 + l32i a4, a1, 8 + l32i a5, a1, 12 + + # (sign-)extend return value + movi a11, FFI_TYPE_UINT8 + bne a10, a11, 1f + extui a2, a2, 0, 8 + retw + +1: movi a11, FFI_TYPE_SINT8 + bne a10, a11, 1f + sext a2, a2, 7 + retw + +1: movi a11, FFI_TYPE_UINT16 + bne a10, a11, 1f + extui a2, a2, 0, 16 + retw + +1: movi a11, FFI_TYPE_SINT16 + bne a10, a11, 1f + sext a2, a2, 15 + +1: retw + +END(ffi_closure_SYSV) diff --git a/Modules/_ctypes/libffi/testsuite/Makefile.am b/Modules/_ctypes/libffi/testsuite/Makefile.am --- a/Modules/_ctypes/libffi/testsuite/Makefile.am +++ b/Modules/_ctypes/libffi/testsuite/Makefile.am @@ -13,73 +13,82 @@ AM_RUNTESTFLAGS = +EXTRA_DEJAGNU_SITE_CONFIG=../local.exp + CLEANFILES = *.exe core* *.log *.sum -EXTRA_DIST = config/default.exp libffi.call/cls_19byte.c \ -libffi.call/cls_align_longdouble_split.c libffi.call/closure_loc_fn0.c \ -libffi.call/cls_schar.c libffi.call/closure_fn1.c \ -libffi.call/many2_win32.c libffi.call/return_ul.c \ -libffi.call/cls_align_double.c libffi.call/return_fl2.c \ -libffi.call/cls_1_1byte.c libffi.call/cls_64byte.c \ -libffi.call/nested_struct7.c libffi.call/cls_align_sint32.c \ -libffi.call/nested_struct2.c libffi.call/ffitest.h \ -libffi.call/nested_struct4.c libffi.call/cls_multi_ushort.c \ -libffi.call/struct3.c libffi.call/cls_3byte1.c \ -libffi.call/cls_16byte.c libffi.call/struct8.c \ -libffi.call/nested_struct8.c libffi.call/cls_multi_sshort.c \ -libffi.call/cls_3byte2.c libffi.call/fastthis2_win32.c \ -libffi.call/cls_pointer.c libffi.call/err_bad_typedef.c \ -libffi.call/cls_4_1byte.c libffi.call/cls_9byte2.c \ -libffi.call/cls_multi_schar.c libffi.call/stret_medium2.c \ -libffi.call/cls_5_1_byte.c libffi.call/call.exp \ -libffi.call/cls_double.c libffi.call/cls_align_sint16.c \ -libffi.call/cls_uint.c libffi.call/return_ll1.c \ -libffi.call/nested_struct3.c libffi.call/cls_20byte1.c \ -libffi.call/closure_fn4.c 
libffi.call/cls_uchar.c \ -libffi.call/struct2.c libffi.call/cls_7byte.c libffi.call/strlen.c \ -libffi.call/many.c libffi.call/testclosure.c libffi.call/return_fl.c \ -libffi.call/struct5.c libffi.call/cls_12byte.c \ -libffi.call/cls_multi_sshortchar.c \ -libffi.call/cls_align_longdouble_split2.c libffi.call/return_dbl2.c \ -libffi.call/return_fl3.c libffi.call/stret_medium.c \ -libffi.call/nested_struct6.c libffi.call/a.out \ -libffi.call/closure_fn3.c libffi.call/float3.c libffi.call/many2.c \ -libffi.call/closure_stdcall.c libffi.call/cls_align_uint16.c \ -libffi.call/cls_9byte1.c libffi.call/closure_fn6.c \ -libffi.call/cls_double_va.c libffi.call/cls_align_pointer.c \ -libffi.call/cls_align_longdouble.c libffi.call/closure_fn2.c \ -libffi.call/cls_sshort.c libffi.call/many_win32.c \ -libffi.call/nested_struct.c libffi.call/cls_20byte.c \ -libffi.call/cls_longdouble.c libffi.call/cls_multi_uchar.c \ -libffi.call/return_uc.c libffi.call/closure_thiscall.c \ -libffi.call/cls_18byte.c libffi.call/cls_8byte.c \ -libffi.call/promotion.c libffi.call/struct1_win32.c \ -libffi.call/return_dbl.c libffi.call/cls_24byte.c \ -libffi.call/struct4.c libffi.call/cls_6byte.c \ -libffi.call/cls_align_uint32.c libffi.call/float.c \ -libffi.call/float1.c libffi.call/float_va.c libffi.call/negint.c \ -libffi.call/return_dbl1.c libffi.call/cls_3_1byte.c \ -libffi.call/cls_align_float.c libffi.call/return_fl1.c \ -libffi.call/nested_struct10.c libffi.call/nested_struct5.c \ -libffi.call/fastthis1_win32.c libffi.call/cls_align_sint64.c \ -libffi.call/stret_large2.c libffi.call/return_sl.c \ -libffi.call/closure_fn0.c libffi.call/cls_5byte.c \ -libffi.call/cls_2byte.c libffi.call/float2.c \ -libffi.call/cls_dbls_struct.c libffi.call/cls_sint.c \ -libffi.call/stret_large.c libffi.call/cls_ulonglong.c \ -libffi.call/cls_ushort.c libffi.call/nested_struct1.c \ -libffi.call/err_bad_abi.c libffi.call/cls_longdouble_va.c \ -libffi.call/cls_float.c libffi.call/cls_pointer_stack.c \ 
-libffi.call/pyobjc-tc.c libffi.call/cls_multi_ushortchar.c \ -libffi.call/struct1.c libffi.call/nested_struct9.c \ -libffi.call/huge_struct.c libffi.call/problem1.c libffi.call/float4.c \ -libffi.call/fastthis3_win32.c libffi.call/return_ldl.c \ -libffi.call/strlen2_win32.c libffi.call/closure_fn5.c \ -libffi.call/struct2_win32.c libffi.call/struct6.c \ -libffi.call/return_ll.c libffi.call/struct9.c libffi.call/return_sc.c \ -libffi.call/struct7.c libffi.call/cls_align_uint64.c \ -libffi.call/cls_4byte.c libffi.call/strlen_win32.c \ -libffi.call/cls_6_1_byte.c libffi.call/cls_7_1_byte.c \ -libffi.special/unwindtest.cc libffi.special/special.exp \ -libffi.special/unwindtest_ffi_call.cc libffi.special/ffitestcxx.h \ -lib/wrapper.exp lib/target-libpath.exp lib/libffi.exp +EXTRA_DIST = config/default.exp libffi.call/cls_19byte.c \ +libffi.call/cls_align_longdouble_split.c \ +libffi.call/closure_loc_fn0.c libffi.call/cls_schar.c \ +libffi.call/closure_fn1.c libffi.call/many2_win32.c \ +libffi.call/return_ul.c libffi.call/cls_align_double.c \ +libffi.call/return_fl2.c libffi.call/cls_1_1byte.c \ +libffi.call/cls_64byte.c libffi.call/nested_struct7.c \ +libffi.call/cls_align_sint32.c libffi.call/nested_struct2.c \ +libffi.call/ffitest.h libffi.call/nested_struct4.c \ +libffi.call/cls_multi_ushort.c libffi.call/struct3.c \ +libffi.call/cls_3byte1.c libffi.call/cls_16byte.c \ +libffi.call/struct8.c libffi.call/nested_struct8.c \ +libffi.call/cls_multi_sshort.c libffi.call/cls_3byte2.c \ +libffi.call/fastthis2_win32.c libffi.call/cls_pointer.c \ +libffi.call/err_bad_typedef.c libffi.call/cls_4_1byte.c \ +libffi.call/cls_9byte2.c libffi.call/cls_multi_schar.c \ +libffi.call/stret_medium2.c libffi.call/cls_5_1_byte.c \ +libffi.call/call.exp libffi.call/cls_double.c \ +libffi.call/cls_align_sint16.c libffi.call/cls_uint.c \ +libffi.call/return_ll1.c libffi.call/nested_struct3.c \ +libffi.call/cls_20byte1.c libffi.call/closure_fn4.c \ +libffi.call/cls_uchar.c 
libffi.call/struct2.c libffi.call/cls_7byte.c \ +libffi.call/strlen.c libffi.call/many.c libffi.call/testclosure.c \ +libffi.call/return_fl.c libffi.call/struct5.c \ +libffi.call/cls_12byte.c libffi.call/cls_multi_sshortchar.c \ +libffi.call/cls_align_longdouble_split2.c libffi.call/return_dbl2.c \ +libffi.call/return_fl3.c libffi.call/stret_medium.c \ +libffi.call/nested_struct6.c libffi.call/closure_fn3.c \ +libffi.call/float3.c libffi.call/many2.c \ +libffi.call/closure_stdcall.c libffi.call/cls_align_uint16.c \ +libffi.call/cls_9byte1.c libffi.call/closure_fn6.c \ +libffi.call/cls_double_va.c libffi.call/cls_align_pointer.c \ +libffi.call/cls_align_longdouble.c libffi.call/closure_fn2.c \ +libffi.call/cls_sshort.c libffi.call/many_win32.c \ +libffi.call/nested_struct.c libffi.call/cls_20byte.c \ +libffi.call/cls_longdouble.c libffi.call/cls_multi_uchar.c \ +libffi.call/return_uc.c libffi.call/closure_thiscall.c \ +libffi.call/cls_18byte.c libffi.call/cls_8byte.c \ +libffi.call/promotion.c libffi.call/struct1_win32.c \ +libffi.call/return_dbl.c libffi.call/cls_24byte.c \ +libffi.call/struct4.c libffi.call/cls_6byte.c \ +libffi.call/cls_align_uint32.c libffi.call/float.c \ +libffi.call/float1.c libffi.call/float_va.c libffi.call/negint.c \ +libffi.call/return_dbl1.c libffi.call/cls_3_1byte.c \ +libffi.call/cls_align_float.c libffi.call/return_fl1.c \ +libffi.call/nested_struct10.c libffi.call/nested_struct5.c \ +libffi.call/fastthis1_win32.c libffi.call/cls_align_sint64.c \ +libffi.call/stret_large2.c libffi.call/return_sl.c \ +libffi.call/closure_fn0.c libffi.call/cls_5byte.c \ +libffi.call/cls_2byte.c libffi.call/float2.c \ +libffi.call/cls_dbls_struct.c libffi.call/cls_sint.c \ +libffi.call/stret_large.c libffi.call/cls_ulonglong.c \ +libffi.call/cls_ushort.c libffi.call/nested_struct1.c \ +libffi.call/err_bad_abi.c libffi.call/cls_longdouble_va.c \ +libffi.call/cls_float.c libffi.call/cls_pointer_stack.c \ +libffi.call/pyobjc-tc.c 
libffi.call/cls_multi_ushortchar.c \ +libffi.call/struct1.c libffi.call/nested_struct9.c \ +libffi.call/huge_struct.c libffi.call/problem1.c \ +libffi.call/float4.c libffi.call/fastthis3_win32.c \ +libffi.call/return_ldl.c libffi.call/strlen2_win32.c \ +libffi.call/closure_fn5.c libffi.call/struct2_win32.c \ +libffi.call/struct6.c libffi.call/return_ll.c libffi.call/struct9.c \ +libffi.call/return_sc.c libffi.call/struct7.c \ +libffi.call/cls_align_uint64.c libffi.call/cls_4byte.c \ +libffi.call/strlen_win32.c libffi.call/cls_6_1_byte.c \ +libffi.call/cls_7_1_byte.c libffi.special/unwindtest.cc \ +libffi.special/special.exp libffi.special/unwindtest_ffi_call.cc \ +libffi.special/ffitestcxx.h lib/wrapper.exp lib/target-libpath.exp \ +lib/libffi.exp libffi.call/cls_struct_va1.c \ +libffi.call/cls_uchar_va.c libffi.call/cls_uint_va.c \ +libffi.call/cls_ulong_va.c libffi.call/cls_ushort_va.c \ +libffi.call/nested_struct11.c libffi.call/uninitialized.c \ +libffi.call/va_1.c libffi.call/va_struct1.c libffi.call/va_struct2.c \ +libffi.call/va_struct3.c + diff --git a/Modules/_ctypes/libffi/testsuite/Makefile.in b/Modules/_ctypes/libffi/testsuite/Makefile.in --- a/Modules/_ctypes/libffi/testsuite/Makefile.in +++ b/Modules/_ctypes/libffi/testsuite/Makefile.in @@ -1,9 +1,8 @@ -# Makefile.in generated by automake 1.11.3 from Makefile.am. +# Makefile.in generated by automake 1.12.2 from Makefile.am. # @configure_input@ -# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, -# 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software -# Foundation, Inc. +# Copyright (C) 1994-2012 Free Software Foundation, Inc. + # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. 
@@ -15,6 +14,23 @@ @SET_MAKE@ VPATH = @srcdir@ +am__make_dryrun = \ + { \ + am__dry=no; \ + case $$MAKEFLAGS in \ + *\\[\ \ ]*) \ + echo 'am--echo: ; @echo "AM" OK' | $(MAKE) -f - 2>/dev/null \ + | grep '^AM OK$$' >/dev/null || am__dry=yes;; \ + *) \ + for am__flg in $$MAKEFLAGS; do \ + case $$am__flg in \ + *=*|--*) ;; \ + *n*) am__dry=yes; break;; \ + esac; \ + done;; \ + esac; \ + test $$am__dry = yes; \ + } pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ @@ -37,7 +53,19 @@ subdir = testsuite DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 -am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \ +am__aclocal_m4_deps = $(top_srcdir)/m4/asmcfi.m4 \ + $(top_srcdir)/m4/ax_append_flag.m4 \ + $(top_srcdir)/m4/ax_cc_maxopt.m4 \ + $(top_srcdir)/m4/ax_cflags_warn_all.m4 \ + $(top_srcdir)/m4/ax_check_compile_flag.m4 \ + $(top_srcdir)/m4/ax_compiler_vendor.m4 \ + $(top_srcdir)/m4/ax_configure_args.m4 \ + $(top_srcdir)/m4/ax_enable_builddir.m4 \ + $(top_srcdir)/m4/ax_gcc_archflag.m4 \ + $(top_srcdir)/m4/ax_gcc_x86_cpuid.m4 \ + $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ + $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ + $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/acinclude.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) @@ -47,6 +75,11 @@ CONFIG_CLEAN_VPATH_FILES = SOURCES = DIST_SOURCES = +am__can_run_installinfo = \ + case $$AM_UPDATE_INFO_DIR in \ + n|no|NO) false;; \ + *) (install-info --version) >/dev/null 2>&1;; \ + esac DEJATOOL = $(PACKAGE) RUNTESTDEFAULTFLAGS = --tool $$tool --srcdir $$srcdir DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) @@ -114,6 +147,7 @@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ +PRTDIAG = @PRTDIAG@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ @@ -134,6 +168,7 @@ 
am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ +ax_enable_builddir_sed = @ax_enable_builddir_sed@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ @@ -169,6 +204,7 @@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ +sys_symbol_underscore = @sys_symbol_underscore@ sysconfdir = @sysconfdir@ target = @target@ target_alias = @target_alias@ @@ -191,75 +227,82 @@ echo $(top_srcdir)/../dejagnu/runtest ; \ else echo runtest; fi` +EXTRA_DEJAGNU_SITE_CONFIG = ../local.exp CLEANFILES = *.exe core* *.log *.sum -EXTRA_DIST = config/default.exp libffi.call/cls_19byte.c \ -libffi.call/cls_align_longdouble_split.c libffi.call/closure_loc_fn0.c \ -libffi.call/cls_schar.c libffi.call/closure_fn1.c \ -libffi.call/many2_win32.c libffi.call/return_ul.c \ -libffi.call/cls_align_double.c libffi.call/return_fl2.c \ -libffi.call/cls_1_1byte.c libffi.call/cls_64byte.c \ -libffi.call/nested_struct7.c libffi.call/cls_align_sint32.c \ -libffi.call/nested_struct2.c libffi.call/ffitest.h \ -libffi.call/nested_struct4.c libffi.call/cls_multi_ushort.c \ -libffi.call/struct3.c libffi.call/cls_3byte1.c \ -libffi.call/cls_16byte.c libffi.call/struct8.c \ -libffi.call/nested_struct8.c libffi.call/cls_multi_sshort.c \ -libffi.call/cls_3byte2.c libffi.call/fastthis2_win32.c \ -libffi.call/cls_pointer.c libffi.call/err_bad_typedef.c \ -libffi.call/cls_4_1byte.c libffi.call/cls_9byte2.c \ -libffi.call/cls_multi_schar.c libffi.call/stret_medium2.c \ -libffi.call/cls_5_1_byte.c libffi.call/call.exp \ -libffi.call/cls_double.c libffi.call/cls_align_sint16.c \ -libffi.call/cls_uint.c libffi.call/return_ll1.c \ -libffi.call/nested_struct3.c libffi.call/cls_20byte1.c \ -libffi.call/closure_fn4.c libffi.call/cls_uchar.c \ -libffi.call/struct2.c libffi.call/cls_7byte.c libffi.call/strlen.c \ -libffi.call/many.c libffi.call/testclosure.c libffi.call/return_fl.c \ -libffi.call/struct5.c libffi.call/cls_12byte.c \ -libffi.call/cls_multi_sshortchar.c \ 
-libffi.call/cls_align_longdouble_split2.c libffi.call/return_dbl2.c \ -libffi.call/return_fl3.c libffi.call/stret_medium.c \ -libffi.call/nested_struct6.c libffi.call/a.out \ -libffi.call/closure_fn3.c libffi.call/float3.c libffi.call/many2.c \ -libffi.call/closure_stdcall.c libffi.call/cls_align_uint16.c \ -libffi.call/cls_9byte1.c libffi.call/closure_fn6.c \ -libffi.call/cls_double_va.c libffi.call/cls_align_pointer.c \ -libffi.call/cls_align_longdouble.c libffi.call/closure_fn2.c \ -libffi.call/cls_sshort.c libffi.call/many_win32.c \ -libffi.call/nested_struct.c libffi.call/cls_20byte.c \ -libffi.call/cls_longdouble.c libffi.call/cls_multi_uchar.c \ -libffi.call/return_uc.c libffi.call/closure_thiscall.c \ -libffi.call/cls_18byte.c libffi.call/cls_8byte.c \ -libffi.call/promotion.c libffi.call/struct1_win32.c \ -libffi.call/return_dbl.c libffi.call/cls_24byte.c \ -libffi.call/struct4.c libffi.call/cls_6byte.c \ -libffi.call/cls_align_uint32.c libffi.call/float.c \ -libffi.call/float1.c libffi.call/float_va.c libffi.call/negint.c \ -libffi.call/return_dbl1.c libffi.call/cls_3_1byte.c \ -libffi.call/cls_align_float.c libffi.call/return_fl1.c \ -libffi.call/nested_struct10.c libffi.call/nested_struct5.c \ -libffi.call/fastthis1_win32.c libffi.call/cls_align_sint64.c \ -libffi.call/stret_large2.c libffi.call/return_sl.c \ -libffi.call/closure_fn0.c libffi.call/cls_5byte.c \ -libffi.call/cls_2byte.c libffi.call/float2.c \ -libffi.call/cls_dbls_struct.c libffi.call/cls_sint.c \ -libffi.call/stret_large.c libffi.call/cls_ulonglong.c \ -libffi.call/cls_ushort.c libffi.call/nested_struct1.c \ -libffi.call/err_bad_abi.c libffi.call/cls_longdouble_va.c \ -libffi.call/cls_float.c libffi.call/cls_pointer_stack.c \ -libffi.call/pyobjc-tc.c libffi.call/cls_multi_ushortchar.c \ -libffi.call/struct1.c libffi.call/nested_struct9.c \ -libffi.call/huge_struct.c libffi.call/problem1.c libffi.call/float4.c \ -libffi.call/fastthis3_win32.c libffi.call/return_ldl.c \ 
-libffi.call/strlen2_win32.c libffi.call/closure_fn5.c \ -libffi.call/struct2_win32.c libffi.call/struct6.c \ -libffi.call/return_ll.c libffi.call/struct9.c libffi.call/return_sc.c \ -libffi.call/struct7.c libffi.call/cls_align_uint64.c \ -libffi.call/cls_4byte.c libffi.call/strlen_win32.c \ -libffi.call/cls_6_1_byte.c libffi.call/cls_7_1_byte.c \ -libffi.special/unwindtest.cc libffi.special/special.exp \ -libffi.special/unwindtest_ffi_call.cc libffi.special/ffitestcxx.h \ -lib/wrapper.exp lib/target-libpath.exp lib/libffi.exp +EXTRA_DIST = config/default.exp libffi.call/cls_19byte.c \ +libffi.call/cls_align_longdouble_split.c \ +libffi.call/closure_loc_fn0.c libffi.call/cls_schar.c \ +libffi.call/closure_fn1.c libffi.call/many2_win32.c \ +libffi.call/return_ul.c libffi.call/cls_align_double.c \ +libffi.call/return_fl2.c libffi.call/cls_1_1byte.c \ +libffi.call/cls_64byte.c libffi.call/nested_struct7.c \ +libffi.call/cls_align_sint32.c libffi.call/nested_struct2.c \ +libffi.call/ffitest.h libffi.call/nested_struct4.c \ +libffi.call/cls_multi_ushort.c libffi.call/struct3.c \ +libffi.call/cls_3byte1.c libffi.call/cls_16byte.c \ +libffi.call/struct8.c libffi.call/nested_struct8.c \ +libffi.call/cls_multi_sshort.c libffi.call/cls_3byte2.c \ +libffi.call/fastthis2_win32.c libffi.call/cls_pointer.c \ +libffi.call/err_bad_typedef.c libffi.call/cls_4_1byte.c \ +libffi.call/cls_9byte2.c libffi.call/cls_multi_schar.c \ +libffi.call/stret_medium2.c libffi.call/cls_5_1_byte.c \ +libffi.call/call.exp libffi.call/cls_double.c \ +libffi.call/cls_align_sint16.c libffi.call/cls_uint.c \ +libffi.call/return_ll1.c libffi.call/nested_struct3.c \ +libffi.call/cls_20byte1.c libffi.call/closure_fn4.c \ +libffi.call/cls_uchar.c libffi.call/struct2.c libffi.call/cls_7byte.c \ +libffi.call/strlen.c libffi.call/many.c libffi.call/testclosure.c \ +libffi.call/return_fl.c libffi.call/struct5.c \ +libffi.call/cls_12byte.c libffi.call/cls_multi_sshortchar.c \ 
+libffi.call/cls_align_longdouble_split2.c libffi.call/return_dbl2.c \ +libffi.call/return_fl3.c libffi.call/stret_medium.c \ +libffi.call/nested_struct6.c libffi.call/closure_fn3.c \ +libffi.call/float3.c libffi.call/many2.c \ +libffi.call/closure_stdcall.c libffi.call/cls_align_uint16.c \ +libffi.call/cls_9byte1.c libffi.call/closure_fn6.c \ +libffi.call/cls_double_va.c libffi.call/cls_align_pointer.c \ +libffi.call/cls_align_longdouble.c libffi.call/closure_fn2.c \ +libffi.call/cls_sshort.c libffi.call/many_win32.c \ +libffi.call/nested_struct.c libffi.call/cls_20byte.c \ +libffi.call/cls_longdouble.c libffi.call/cls_multi_uchar.c \ +libffi.call/return_uc.c libffi.call/closure_thiscall.c \ +libffi.call/cls_18byte.c libffi.call/cls_8byte.c \ +libffi.call/promotion.c libffi.call/struct1_win32.c \ +libffi.call/return_dbl.c libffi.call/cls_24byte.c \ +libffi.call/struct4.c libffi.call/cls_6byte.c \ +libffi.call/cls_align_uint32.c libffi.call/float.c \ +libffi.call/float1.c libffi.call/float_va.c libffi.call/negint.c \ +libffi.call/return_dbl1.c libffi.call/cls_3_1byte.c \ +libffi.call/cls_align_float.c libffi.call/return_fl1.c \ +libffi.call/nested_struct10.c libffi.call/nested_struct5.c \ +libffi.call/fastthis1_win32.c libffi.call/cls_align_sint64.c \ +libffi.call/stret_large2.c libffi.call/return_sl.c \ +libffi.call/closure_fn0.c libffi.call/cls_5byte.c \ +libffi.call/cls_2byte.c libffi.call/float2.c \ +libffi.call/cls_dbls_struct.c libffi.call/cls_sint.c \ +libffi.call/stret_large.c libffi.call/cls_ulonglong.c \ +libffi.call/cls_ushort.c libffi.call/nested_struct1.c \ +libffi.call/err_bad_abi.c libffi.call/cls_longdouble_va.c \ +libffi.call/cls_float.c libffi.call/cls_pointer_stack.c \ +libffi.call/pyobjc-tc.c libffi.call/cls_multi_ushortchar.c \ +libffi.call/struct1.c libffi.call/nested_struct9.c \ +libffi.call/huge_struct.c libffi.call/problem1.c \ +libffi.call/float4.c libffi.call/fastthis3_win32.c \ +libffi.call/return_ldl.c libffi.call/strlen2_win32.c \ 
+libffi.call/closure_fn5.c libffi.call/struct2_win32.c \ +libffi.call/struct6.c libffi.call/return_ll.c libffi.call/struct9.c \ +libffi.call/return_sc.c libffi.call/struct7.c \ +libffi.call/cls_align_uint64.c libffi.call/cls_4byte.c \ +libffi.call/strlen_win32.c libffi.call/cls_6_1_byte.c \ +libffi.call/cls_7_1_byte.c libffi.special/unwindtest.cc \ +libffi.special/special.exp libffi.special/unwindtest_ffi_call.cc \ +libffi.special/ffitestcxx.h lib/wrapper.exp lib/target-libpath.exp \ +lib/libffi.exp libffi.call/cls_struct_va1.c \ +libffi.call/cls_uchar_va.c libffi.call/cls_uint_va.c \ +libffi.call/cls_ulong_va.c libffi.call/cls_ushort_va.c \ +libffi.call/nested_struct11.c libffi.call/uninitialized.c \ +libffi.call/va_1.c libffi.call/va_struct1.c libffi.call/va_struct2.c \ +libffi.call/va_struct3.c all: all-am @@ -306,6 +349,8 @@ ctags: CTAGS CTAGS: +cscope cscopelist: + check-DEJAGNU: site.exp srcdir='$(srcdir)'; export srcdir; \ @@ -316,11 +361,11 @@ if $$runtest $(AM_RUNTESTFLAGS) $(RUNTESTDEFAULTFLAGS) $(RUNTESTFLAGS); \ then :; else exit_status=1; fi; \ done; \ - else echo "WARNING: could not find \`runtest'" 1>&2; :;\ + else echo "WARNING: could not find 'runtest'" 1>&2; :;\ fi; \ exit $$exit_status site.exp: Makefile $(EXTRA_DEJAGNU_SITE_CONFIG) - @echo 'Making a new site.exp file...' + @echo 'Making a new site.exp file ...' @echo '## these variables are automatically generated by make ##' >site.tmp @echo '# Do not edit here. If you wish to override these values' >>site.tmp @echo '# edit the last section' >>site.tmp diff --git a/Modules/_ctypes/libffi/testsuite/lib/libffi.exp b/Modules/_ctypes/libffi/testsuite/lib/libffi.exp --- a/Modules/_ctypes/libffi/testsuite/lib/libffi.exp +++ b/Modules/_ctypes/libffi/testsuite/lib/libffi.exp @@ -101,9 +101,17 @@ global tool_root_dir global ld_library_path + global using_gcc + set blddirffi [pwd]/.. verbose "libffi $blddirffi" + # Are we building with GCC? 
+ set tmp [grep ../config.status "GCC='yes'"] + if { [string match $tmp "GCC='yes'"] } { + + set using_gcc "yes" + set gccdir [lookfor_file $tool_root_dir gcc/libgcc.a] if {$gccdir != ""} { set gccdir [file dirname $gccdir] @@ -127,6 +135,13 @@ } } } + + } else { + + set using_gcc "no" + + } + # add the library path for libffi. append ld_library_path ":${blddirffi}/.libs" @@ -203,6 +218,10 @@ lappend options "libs= -lffi" + if { [string match "aarch64*-*-linux*" $target_triplet] } { + lappend options "libs= -lpthread" + } + verbose "options: $options" return [target_compile $source $dest $type $options] } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/a.out b/Modules/_ctypes/libffi/testsuite/libffi.call/a.out deleted file mode 100755 Binary file Modules/_ctypes/libffi/testsuite/libffi.call/a.out has changed diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/call.exp b/Modules/_ctypes/libffi/testsuite/libffi.call/call.exp --- a/Modules/_ctypes/libffi/testsuite/libffi.call/call.exp +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/call.exp @@ -19,11 +19,20 @@ global srcdir subdir -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O0 -W -Wall" "" -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O2" "" -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O3" "" -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-Os" "" -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O2 -fomit-frame-pointer" "" +if { [string match $using_gcc "yes"] } { + + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O0 -W -Wall" "" + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O2" "" + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O3" "" + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-Os" "" + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "-O2 -fomit-frame-pointer" "" + +} else { + + # Assume we are using the 
vendor compiler. + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] "" "" + +} dg-finish diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_double_va.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_double_va.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_double_va.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_double_va.c @@ -45,9 +45,9 @@ args[2] = NULL; ffi_call(&cif, FFI_FN(printf), &res, args); - // { dg-output "7.0" } + /* { dg-output "7.0" } */ printf("res: %d\n", (int) res); - // { dg-output "\nres: 4" } + /* { dg-output "\nres: 4" } */ /* The call to cls_double_va_fn is static, so have to use a normal prep_cif */ CHECK(ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint, arg_types) == FFI_OK); @@ -55,9 +55,9 @@ CHECK(ffi_prep_closure_loc(pcl, &cif, cls_double_va_fn, NULL, code) == FFI_OK); res = ((int(*)(char*, double))(code))(format, doubleArg); - // { dg-output "\n7.0" } + /* { dg-output "\n7.0" } */ printf("res: %d\n", (int) res); - // { dg-output "\nres: 4" } + /* { dg-output "\nres: 4" } */ exit(0); } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble.c @@ -5,7 +5,9 @@ Originator: Blake Chaffin */ /* { dg-excess-errors "no long double format" { xfail x86_64-*-mingw* x86_64-*-cygwin* } } */ -/* { dg-do run { xfail arm*-*-* strongarm*-*-* xscale*-*-* } } */ +/* This test is known to PASS on armv7l-unknown-linux-gnueabihf, so I have + removed the xfail for arm*-*-* below, until we know more. 
*/ +/* { dg-do run { xfail strongarm*-*-* xscale*-*-* } } */ /* { dg-options -mlong-double-128 { target powerpc64*-*-linux* } } */ /* { dg-output "" { xfail x86_64-*-mingw* x86_64-*-cygwin* } } */ diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble_va.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble_va.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble_va.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_longdouble_va.c @@ -45,9 +45,9 @@ args[2] = NULL; ffi_call(&cif, FFI_FN(printf), &res, args); - // { dg-output "7.0" } + /* { dg-output "7.0" } */ printf("res: %d\n", (int) res); - // { dg-output "\nres: 4" } + /* { dg-output "\nres: 4" } */ /* The call to cls_longdouble_va_fn is static, so have to use a normal prep_cif */ CHECK(ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint, @@ -56,9 +56,9 @@ CHECK(ffi_prep_closure_loc(pcl, &cif, cls_longdouble_va_fn, NULL, code) == FFI_OK); res = ((int(*)(char*, long double))(code))(format, ldArg); - // { dg-output "\n7.0" } + /* { dg-output "\n7.0" } */ printf("res: %d\n", (int) res); - // { dg-output "\nres: 4" } + /* { dg-output "\nres: 4" } */ exit(0); } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer.c @@ -35,7 +35,7 @@ void *code; ffi_closure* pcl = ffi_closure_alloc(sizeof(ffi_closure), &code); void* args[3]; -// ffi_type cls_pointer_type; + /* ffi_type cls_pointer_type; */ ffi_type* arg_types[3]; /* cls_pointer_type.size = sizeof(void*); diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer_stack.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer_stack.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer_stack.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_pointer_stack.c @@ -98,7 +98,7 @@ void 
*code; ffi_closure* pcl = ffi_closure_alloc(sizeof(ffi_closure), &code); void* args[3]; -// ffi_type cls_pointer_type; + /* ffi_type cls_pointer_type; */ ffi_type* arg_types[3]; /* cls_pointer_type.size = sizeof(void*); @@ -125,18 +125,18 @@ ffi_call(&cif, FFI_FN(cls_pointer_fn1), &res, args); printf("res: 0x%08x\n", (unsigned int) res); - // { dg-output "\n0x01234567 0x89abcdef: 0x8acf1356" } - // { dg-output "\n0x8acf1356 0x01234567: 0x8bf258bd" } - // { dg-output "\nres: 0x8bf258bd" } + /* { dg-output "\n0x01234567 0x89abcdef: 0x8acf1356" } */ + /* { dg-output "\n0x8acf1356 0x01234567: 0x8bf258bd" } */ + /* { dg-output "\nres: 0x8bf258bd" } */ CHECK(ffi_prep_closure_loc(pcl, &cif, cls_pointer_gn, NULL, code) == FFI_OK); res = (ffi_arg)(uintptr_t)((void*(*)(void*, void*))(code))(arg1, arg2); printf("res: 0x%08x\n", (unsigned int) res); - // { dg-output "\n0x01234567 0x89abcdef: 0x8acf1356" } - // { dg-output "\n0x8acf1356 0x01234567: 0x8bf258bd" } - // { dg-output "\nres: 0x8bf258bd" } + /* { dg-output "\n0x01234567 0x89abcdef: 0x8acf1356" } */ + /* { dg-output "\n0x8acf1356 0x01234567: 0x8bf258bd" } */ + /* { dg-output "\nres: 0x8bf258bd" } */ exit(0); } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_struct_va1.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_struct_va1.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_struct_va1.c @@ -0,0 +1,114 @@ +/* Area: ffi_call, closure_call + Purpose: Test doubles passed in variable argument lists. + Limitations: none. + PR: none. 
+ Originator: Blake Chaffin 6/6/2007 */ + +/* { dg-do run } */ +/* { dg-output "" { xfail avr32*-*-* } } */ +#include "ffitest.h" + +struct small_tag +{ + unsigned char a; + unsigned char b; +}; + +struct large_tag +{ + unsigned a; + unsigned b; + unsigned c; + unsigned d; + unsigned e; +}; + +static void +test_fn (ffi_cif* cif __UNUSED__, void* resp, + void** args, void* userdata __UNUSED__) +{ + int n = *(int*)args[0]; + struct small_tag s1 = * (struct small_tag *) args[1]; + struct large_tag l1 = * (struct large_tag *) args[2]; + struct small_tag s2 = * (struct small_tag *) args[3]; + + printf ("%d %d %d %d %d %d %d %d %d %d\n", n, s1.a, s1.b, + l1.a, l1.b, l1.c, l1.d, l1.e, + s2.a, s2.b); + * (int*) resp = 42; +} + +int +main (void) +{ + ffi_cif cif; + void *code; + ffi_closure *pcl = ffi_closure_alloc (sizeof (ffi_closure), &code); + ffi_type* arg_types[5]; + + ffi_arg res = 0; + + ffi_type s_type; + ffi_type *s_type_elements[3]; + + ffi_type l_type; + ffi_type *l_type_elements[6]; + + struct small_tag s1; + struct small_tag s2; + struct large_tag l1; + + int si; + + s_type.size = 0; + s_type.alignment = 0; + s_type.type = FFI_TYPE_STRUCT; + s_type.elements = s_type_elements; + + s_type_elements[0] = &ffi_type_uchar; + s_type_elements[1] = &ffi_type_uchar; + s_type_elements[2] = NULL; + + l_type.size = 0; + l_type.alignment = 0; + l_type.type = FFI_TYPE_STRUCT; + l_type.elements = l_type_elements; + + l_type_elements[0] = &ffi_type_uint; + l_type_elements[1] = &ffi_type_uint; + l_type_elements[2] = &ffi_type_uint; + l_type_elements[3] = &ffi_type_uint; + l_type_elements[4] = &ffi_type_uint; + l_type_elements[5] = NULL; + + arg_types[0] = &ffi_type_sint; + arg_types[1] = &s_type; + arg_types[2] = &l_type; + arg_types[3] = &s_type; + arg_types[4] = NULL; + + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 4, &ffi_type_sint, + arg_types) == FFI_OK); + + si = 4; + s1.a = 5; + s1.b = 6; + + s2.a = 20; + s2.b = 21; + + l1.a = 10; + l1.b = 11; + l1.c = 12; + l1.d = 
13; + l1.e = 14; + + CHECK(ffi_prep_closure_loc(pcl, &cif, test_fn, NULL, code) == FFI_OK); + + res = ((int (*)(int, ...))(code))(si, s1, l1, s2); + /* { dg-output "4 5 6 10 11 12 13 14 20 21" } */ + printf("res: %d\n", (int) res); + /* { dg-output "\nres: 42" } */ + + exit(0); +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_uchar_va.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_uchar_va.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_uchar_va.c @@ -0,0 +1,44 @@ +/* Area: closure_call + Purpose: Test anonymous unsigned char argument. + Limitations: none. + PR: none. + Originator: ARM Ltd. */ + +/* { dg-do run } */ +#include "ffitest.h" + +typedef unsigned char T; + +static void cls_ret_T_fn(ffi_cif* cif __UNUSED__, void* resp, void** args, + void* userdata __UNUSED__) + { + *(ffi_arg *)resp = *(T *)args[0]; + + printf("%d: %d %d\n", (int)(*(ffi_arg *)resp), *(T *)args[0], *(T *)args[1]); + } + +typedef T (*cls_ret_T)(T, ...); + +int main (void) +{ + ffi_cif cif; + void *code; + ffi_closure *pcl = ffi_closure_alloc(sizeof(ffi_closure), &code); + ffi_type * cl_arg_types[3]; + T res; + + cl_arg_types[0] = &ffi_type_uchar; + cl_arg_types[1] = &ffi_type_uchar; + cl_arg_types[2] = NULL; + + /* Initialize the cif */ + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 2, + &ffi_type_uchar, cl_arg_types) == FFI_OK); + + CHECK(ffi_prep_closure_loc(pcl, &cif, cls_ret_T_fn, NULL, code) == FFI_OK); + res = ((((cls_ret_T)code)(67, 4))); + /* { dg-output "67: 67 4" } */ + printf("res: %d\n", res); + /* { dg-output "\nres: 67" } */ + exit(0); +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_uint_va.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_uint_va.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_uint_va.c @@ -0,0 +1,45 @@ +/* Area: closure_call + Purpose: Test anonymous unsigned int argument. + Limitations: none. + PR: none. 
+ Originator: ARM Ltd. */ + +/* { dg-do run } */ + +#include "ffitest.h" + +typedef unsigned int T; + +static void cls_ret_T_fn(ffi_cif* cif __UNUSED__, void* resp, void** args, + void* userdata __UNUSED__) + { + *(T *)resp = *(T *)args[0]; + + printf("%d: %d %d\n", *(T *)resp, *(T *)args[0], *(T *)args[1]); + } + +typedef T (*cls_ret_T)(T, ...); + +int main (void) +{ + ffi_cif cif; + void *code; + ffi_closure *pcl = ffi_closure_alloc(sizeof(ffi_closure), &code); + ffi_type * cl_arg_types[3]; + T res; + + cl_arg_types[0] = &ffi_type_uint; + cl_arg_types[1] = &ffi_type_uint; + cl_arg_types[2] = NULL; + + /* Initialize the cif */ + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 2, + &ffi_type_uint, cl_arg_types) == FFI_OK); + + CHECK(ffi_prep_closure_loc(pcl, &cif, cls_ret_T_fn, NULL, code) == FFI_OK); + res = ((((cls_ret_T)code)(67, 4))); + /* { dg-output "67: 67 4" } */ + printf("res: %d\n", res); + /* { dg-output "\nres: 67" } */ + exit(0); +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulong_va.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulong_va.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulong_va.c @@ -0,0 +1,45 @@ +/* Area: closure_call + Purpose: Test anonymous unsigned long argument. + Limitations: none. + PR: none. + Originator: ARM Ltd. 
*/ + +/* { dg-do run } */ + +#include "ffitest.h" + +typedef unsigned long T; + +static void cls_ret_T_fn(ffi_cif* cif __UNUSED__, void* resp, void** args, + void* userdata __UNUSED__) + { + *(T *)resp = *(T *)args[0]; + + printf("%ld: %ld %ld\n", *(T *)resp, *(T *)args[0], *(T *)args[1]); + } + +typedef T (*cls_ret_T)(T, ...); + +int main (void) +{ + ffi_cif cif; + void *code; + ffi_closure *pcl = ffi_closure_alloc(sizeof(ffi_closure), &code); + ffi_type * cl_arg_types[3]; + T res; + + cl_arg_types[0] = &ffi_type_ulong; + cl_arg_types[1] = &ffi_type_ulong; + cl_arg_types[2] = NULL; + + /* Initialize the cif */ + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 2, + &ffi_type_ulong, cl_arg_types) == FFI_OK); + + CHECK(ffi_prep_closure_loc(pcl, &cif, cls_ret_T_fn, NULL, code) == FFI_OK); + res = ((((cls_ret_T)code)(67, 4))); + /* { dg-output "67: 67 4" } */ + printf("res: %ld\n", res); + /* { dg-output "\nres: 67" } */ + exit(0); +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulonglong.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulonglong.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulonglong.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ulonglong.c @@ -11,7 +11,7 @@ static void cls_ret_ulonglong_fn(ffi_cif* cif __UNUSED__, void* resp, void** args, void* userdata __UNUSED__) { - *(unsigned long long *)resp= *(unsigned long long *)args[0]; + *(unsigned long long *)resp= 0xfffffffffffffffLL ^ *(unsigned long long *)args[0]; printf("%" PRIuLL ": %" PRIuLL "\n",*(unsigned long long *)args[0], *(unsigned long long *)(resp)); @@ -34,14 +34,14 @@ &ffi_type_uint64, cl_arg_types) == FFI_OK); CHECK(ffi_prep_closure_loc(pcl, &cif, cls_ret_ulonglong_fn, NULL, code) == FFI_OK); res = (*((cls_ret_ulonglong)code))(214LL); - /* { dg-output "214: 214" } */ + /* { dg-output "214: 1152921504606846761" } */ printf("res: %" PRIdLL "\n", res); - /* { dg-output "\nres: 214" } */ + /* { dg-output "\nres: 1152921504606846761" } */ res = 
(*((cls_ret_ulonglong)code))(9223372035854775808LL); - /* { dg-output "\n9223372035854775808: 9223372035854775808" } */ + /* { dg-output "\n9223372035854775808: 8070450533247928831" } */ printf("res: %" PRIdLL "\n", res); - /* { dg-output "\nres: 9223372035854775808" } */ + /* { dg-output "\nres: 8070450533247928831" } */ exit(0); } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ushort_va.c b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ushort_va.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/cls_ushort_va.c @@ -0,0 +1,44 @@ +/* Area: closure_call + Purpose: Test anonymous unsigned short argument. + Limitations: none. + PR: none. + Originator: ARM Ltd. */ + +/* { dg-do run } */ +#include "ffitest.h" + +typedef unsigned short T; + +static void cls_ret_T_fn(ffi_cif* cif __UNUSED__, void* resp, void** args, + void* userdata __UNUSED__) + { + *(ffi_arg *)resp = *(T *)args[0]; + + printf("%d: %d %d\n", (int)(*(ffi_arg *)resp), *(T *)args[0], *(T *)args[1]); + } + +typedef T (*cls_ret_T)(T, ...); + +int main (void) +{ + ffi_cif cif; + void *code; + ffi_closure *pcl = ffi_closure_alloc(sizeof(ffi_closure), &code); + ffi_type * cl_arg_types[3]; + T res; + + cl_arg_types[0] = &ffi_type_ushort; + cl_arg_types[1] = &ffi_type_ushort; + cl_arg_types[2] = NULL; + + /* Initialize the cif */ + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 2, + &ffi_type_ushort, cl_arg_types) == FFI_OK); + + CHECK(ffi_prep_closure_loc(pcl, &cif, cls_ret_T_fn, NULL, code) == FFI_OK); + res = ((((cls_ret_T)code)(67, 4))); + /* { dg-output "67: 67 4" } */ + printf("res: %d\n", res); + /* { dg-output "\nres: 67" } */ + exit(0); +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/ffitest.h b/Modules/_ctypes/libffi/testsuite/libffi.call/ffitest.h --- a/Modules/_ctypes/libffi/testsuite/libffi.call/ffitest.h +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/ffitest.h @@ -15,7 +15,7 @@ #define MAX_ARGS 256 -#define CHECK(x) !(x) ? 
abort() : 0 +#define CHECK(x) !(x) ? (abort(), 1) : 0 /* Define __UNUSED__ that also other compilers than gcc can run the tests. */ #undef __UNUSED__ @@ -127,44 +127,6 @@ #define PRId64 "I64d" #endif -#ifdef USING_MMAP -static inline void * -allocate_mmap (size_t size) -{ - void *page; -#if defined (HAVE_MMAP_DEV_ZERO) - static int dev_zero_fd = -1; +#ifndef PRIuPTR +#define PRIuPTR "u" #endif - -#ifdef HAVE_MMAP_DEV_ZERO - if (dev_zero_fd == -1) - { - dev_zero_fd = open ("/dev/zero", O_RDONLY); - if (dev_zero_fd == -1) - { - perror ("open /dev/zero: %m"); - exit (1); - } - } -#endif - - -#ifdef HAVE_MMAP_ANON - page = mmap (NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC, - MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); -#endif -#ifdef HAVE_MMAP_DEV_ZERO - page = mmap (NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC, - MAP_PRIVATE, dev_zero_fd, 0); -#endif - - if (page == (void *) MAP_FAILED) - { - perror ("virtual memory exhausted"); - exit (1); - } - - return page; -} - -#endif diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/float_va.c b/Modules/_ctypes/libffi/testsuite/libffi.call/float_va.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/float_va.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/float_va.c @@ -56,9 +56,9 @@ * different. 
*/ /* Call it statically and then via ffi */ resfp=float_va_fn(0,2.0); - // { dg-output "0: 2.0 : total: 2.0" } + /* { dg-output "0: 2.0 : total: 2.0" } */ printf("compiled: %.1f\n", resfp); - // { dg-output "\ncompiled: 2.0" } + /* { dg-output "\ncompiled: 2.0" } */ arg_types[0] = &ffi_type_uint; arg_types[1] = &ffi_type_double; @@ -71,16 +71,16 @@ values[0] = &firstarg; values[1] = &doubles[0]; ffi_call(&cif, FFI_FN(float_va_fn), &resfp, values); - // { dg-output "\n0: 2.0 : total: 2.0" } + /* { dg-output "\n0: 2.0 : total: 2.0" } */ printf("ffi: %.1f\n", resfp); - // { dg-output "\nffi: 2.0" } + /* { dg-output "\nffi: 2.0" } */ /* Second test, float_va_fn(2,2.0,3.0,4.0), now with variadic params */ /* Call it statically and then via ffi */ resfp=float_va_fn(2,2.0,3.0,4.0); - // { dg-output "\n2: 2.0 : 0:3.0 1:4.0 total: 11.0" } + /* { dg-output "\n2: 2.0 : 0:3.0 1:4.0 total: 11.0" } */ printf("compiled: %.1f\n", resfp); - // { dg-output "\ncompiled: 11.0" } + /* { dg-output "\ncompiled: 11.0" } */ arg_types[0] = &ffi_type_uint; arg_types[1] = &ffi_type_double; @@ -99,9 +99,9 @@ values[2] = &doubles[1]; values[3] = &doubles[2]; ffi_call(&cif, FFI_FN(float_va_fn), &resfp, values); - // { dg-output "\n2: 2.0 : 0:3.0 1:4.0 total: 11.0" } + /* { dg-output "\n2: 2.0 : 0:3.0 1:4.0 total: 11.0" } */ printf("ffi: %.1f\n", resfp); - // { dg-output "\nffi: 11.0" } + /* { dg-output "\nffi: 11.0" } */ exit(0); } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/huge_struct.c b/Modules/_ctypes/libffi/testsuite/libffi.call/huge_struct.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/huge_struct.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/huge_struct.c @@ -8,6 +8,7 @@ /* { dg-excess-errors "" { target x86_64-*-mingw* x86_64-*-cygwin* } } */ /* { dg-do run { xfail strongarm*-*-* xscale*-*-* } } */ /* { dg-options -mlong-double-128 { target powerpc64*-*-linux* } } */ +/* { dg-options -Wformat=0 { target moxie*-*-elf } } */ /* { dg-output "" { xfail 
x86_64-*-mingw* x86_64-*-cygwin* } } */ #include "ffitest.h" @@ -295,7 +296,7 @@ CHECK(ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 50, &ret_struct_type, argTypes) == FFI_OK); ffi_call(&cif, FFI_FN(test_large_fn), &retVal, argValues); - // { dg-output "1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } + /* { dg-output "1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } */ printf("res: %" PRIu8 " %" PRId8 " %hu %hd %u %d %" PRIu64 " %" PRId64 " %.0f %.0f %.0Lf %#lx " "%" PRIu8 " %" PRId8 " %hu %hd %u %d %" PRIu64 " %" PRId64 " %.0f %.0f %.0Lf %#lx " "%" PRIu8 " %" PRId8 " %hu %hd %u %d %" PRIu64 " %" PRId64 " %.0f %.0f %.0Lf %#lx " @@ -308,7 +309,7 @@ retVal.ee, retVal.ff, retVal.gg, retVal.hh, retVal.ii, (unsigned long)retVal.jj, retVal.kk, retVal.ll, retVal.mm, retVal.nn, retVal.oo, retVal.pp, retVal.qq, retVal.rr, retVal.ss, retVal.tt, retVal.uu, (unsigned long)retVal.vv, retVal.ww, retVal.xx); - // { dg-output "\nres: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } + /* { dg-output "\nres: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } */ CHECK(ffi_prep_closure_loc(pcl, &cif, cls_large_fn, NULL, code) == FFI_OK); @@ -323,7 +324,7 @@ ui8, si8, ui16, si16, ui32, si32, ui64, si64, f, d, ld, p, ui8, si8, ui16, si16, ui32, si32, ui64, si64, f, d, ld, p, ui8, si8); - 
// { dg-output "\n1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } + /* { dg-output "\n1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2 3 4 5 6 7 8 9 10 11 0x12345678 1 2: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } */ printf("res: %" PRIu8 " %" PRId8 " %hu %hd %u %d %" PRIu64 " %" PRId64 " %.0f %.0f %.0Lf %#lx " "%" PRIu8 " %" PRId8 " %hu %hd %u %d %" PRIu64 " %" PRId64 " %.0f %.0f %.0Lf %#lx " "%" PRIu8 " %" PRId8 " %hu %hd %u %d %" PRIu64 " %" PRId64 " %.0f %.0f %.0Lf %#lx " @@ -336,7 +337,7 @@ retVal.ee, retVal.ff, retVal.gg, retVal.hh, retVal.ii, (unsigned long)retVal.jj, retVal.kk, retVal.ll, retVal.mm, retVal.nn, retVal.oo, retVal.pp, retVal.qq, retVal.rr, retVal.ss, retVal.tt, retVal.uu, (unsigned long)retVal.vv, retVal.ww, retVal.xx); - // { dg-output "\nres: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } + /* { dg-output "\nres: 2 3 4 5 6 7 8 9 10 11 12 0x12345679 3 4 5 6 7 8 9 10 11 12 13 0x1234567a 4 5 6 7 8 9 10 11 12 13 14 0x1234567b 5 6 7 8 9 10 11 12 13 14 15 0x1234567c 6 7" } */ return 0; } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/many2.c b/Modules/_ctypes/libffi/testsuite/libffi.call/many2.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/many2.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/many2.c @@ -12,7 +12,10 @@ typedef unsigned char u8; -__attribute__((noinline)) uint8_t +#ifdef __GNUC__ +__attribute__((noinline)) +#endif +uint8_t foo (uint8_t a, uint8_t b, uint8_t c, uint8_t d, uint8_t e, 
uint8_t f, uint8_t g) { diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/negint.c b/Modules/_ctypes/libffi/testsuite/libffi.call/negint.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/negint.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/negint.c @@ -5,7 +5,6 @@ Originator: From the original ffitest.c */ /* { dg-do run } */ -/* { dg-options -O2 } */ #include "ffitest.h" diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct1.c b/Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct1.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct1.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct1.c @@ -156,6 +156,6 @@ CHECK( res_dbl.e.ii == (e_dbl.c + f_dbl.ii + g_dbl.e.ii)); CHECK( res_dbl.e.dd == (e_dbl.a + f_dbl.dd + g_dbl.e.dd)); CHECK( res_dbl.e.ff == (e_dbl.b + f_dbl.ff + g_dbl.e.ff)); - // CHECK( 1 == 0); + /* CHECK( 1 == 0); */ exit(0); } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct11.c b/Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct11.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/nested_struct11.c @@ -0,0 +1,121 @@ +/* Area: ffi_call, closure_call + Purpose: Check parameter passing with nested structs + of a single type. This tests the special cases + for homogenous floating-point aggregates in the + AArch64 PCS. + Limitations: none. + PR: none. + Originator: ARM Ltd. 
*/ + +/* { dg-do run } */ +#include "ffitest.h" + +typedef struct A { + float a_x; + float a_y; +} A; + +typedef struct B { + float b_x; + float b_y; +} B; + +typedef struct C { + A a; + B b; +} C; + +static C C_fn (int x, int y, int z, C source, int i, int j, int k) +{ + C result; + result.a.a_x = source.a.a_x; + result.a.a_y = source.a.a_y; + result.b.b_x = source.b.b_x; + result.b.b_y = source.b.b_y; + + printf ("%d, %d, %d, %d, %d, %d\n", x, y, z, i, j, k); + + printf ("%.1f, %.1f, %.1f, %.1f, " + "%.1f, %.1f, %.1f, %.1f\n", + source.a.a_x, source.a.a_y, + source.b.b_x, source.b.b_y, + result.a.a_x, result.a.a_y, + result.b.b_x, result.b.b_y); + + return result; +} + +int main (void) +{ + ffi_cif cif; + + ffi_type* struct_fields_source_a[3]; + ffi_type* struct_fields_source_b[3]; + ffi_type* struct_fields_source_c[3]; + ffi_type* arg_types[8]; + + ffi_type struct_type_a, struct_type_b, struct_type_c; + + struct A source_fld_a = {1.0, 2.0}; + struct B source_fld_b = {4.0, 8.0}; + int k = 1; + + struct C result; + struct C source = {source_fld_a, source_fld_b}; + + struct_type_a.size = 0; + struct_type_a.alignment = 0; + struct_type_a.type = FFI_TYPE_STRUCT; + struct_type_a.elements = struct_fields_source_a; + + struct_type_b.size = 0; + struct_type_b.alignment = 0; + struct_type_b.type = FFI_TYPE_STRUCT; + struct_type_b.elements = struct_fields_source_b; + + struct_type_c.size = 0; + struct_type_c.alignment = 0; + struct_type_c.type = FFI_TYPE_STRUCT; + struct_type_c.elements = struct_fields_source_c; + + struct_fields_source_a[0] = &ffi_type_float; + struct_fields_source_a[1] = &ffi_type_float; + struct_fields_source_a[2] = NULL; + + struct_fields_source_b[0] = &ffi_type_float; + struct_fields_source_b[1] = &ffi_type_float; + struct_fields_source_b[2] = NULL; + + struct_fields_source_c[0] = &struct_type_a; + struct_fields_source_c[1] = &struct_type_b; + struct_fields_source_c[2] = NULL; + + arg_types[0] = &ffi_type_sint32; + arg_types[1] = &ffi_type_sint32; + 
arg_types[2] = &ffi_type_sint32; + arg_types[3] = &struct_type_c; + arg_types[4] = &ffi_type_sint32; + arg_types[5] = &ffi_type_sint32; + arg_types[6] = &ffi_type_sint32; + arg_types[7] = NULL; + + void *args[7]; + args[0] = &k; + args[1] = &k; + args[2] = &k; + args[3] = &source; + args[4] = &k; + args[5] = &k; + args[6] = &k; + CHECK (ffi_prep_cif (&cif, FFI_DEFAULT_ABI, 7, &struct_type_c, + arg_types) == FFI_OK); + + ffi_call (&cif, FFI_FN (C_fn), &result, args); + /* { dg-output "1, 1, 1, 1, 1, 1\n" } */ + /* { dg-output "1.0, 2.0, 4.0, 8.0, 1.0, 2.0, 4.0, 8.0" } */ + CHECK (result.a.a_x == source.a.a_x); + CHECK (result.a.a_y == source.a.a_y); + CHECK (result.b.b_x == source.b.b_x); + CHECK (result.b.b_y == source.b.b_y); + exit (0); +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/return_dbl.c b/Modules/_ctypes/libffi/testsuite/libffi.call/return_dbl.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/return_dbl.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/return_dbl.c @@ -9,6 +9,7 @@ static double return_dbl(double dbl) { + printf ("%f\n", dbl); return 2 * dbl; } int main (void) diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/return_uc.c b/Modules/_ctypes/libffi/testsuite/libffi.call/return_uc.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/return_uc.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/return_uc.c @@ -32,7 +32,7 @@ uc < (unsigned char) '\xff'; uc++) { ffi_call(&cif, FFI_FN(return_uc), &rint, values); - CHECK(rint == (signed int) uc); + CHECK((unsigned char)rint == uc); } exit(0); } diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large.c b/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large.c @@ -9,8 +9,8 @@ /* { dg-do run { xfail strongarm*-*-* xscale*-*-* } } */ #include "ffitest.h" -// 13 FPRs: 104 bytes -// 14 FPRs: 112 bytes +/* 13 FPRs: 104 bytes */ +/* 
14 FPRs: 112 bytes */ typedef struct struct_108byte { double a; diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large2.c b/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large2.c --- a/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large2.c +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/stret_large2.c @@ -9,8 +9,8 @@ /* { dg-do run { xfail strongarm*-*-* xscale*-*-* } } */ #include "ffitest.h" -// 13 FPRs: 104 bytes -// 14 FPRs: 112 bytes +/* 13 FPRs: 104 bytes */ +/* 14 FPRs: 112 bytes */ typedef struct struct_116byte { double a; diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/uninitialized.c b/Modules/_ctypes/libffi/testsuite/libffi.call/uninitialized.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/uninitialized.c @@ -0,0 +1,61 @@ +/* { dg-do run } */ +#include "ffitest.h" + +typedef struct +{ + unsigned char uc; + double d; + unsigned int ui; +} test_structure_1; + +static test_structure_1 struct1(test_structure_1 ts) +{ + ts.uc++; + ts.d--; + ts.ui++; + + return ts; +} + +int main (void) +{ + ffi_cif cif; + ffi_type *args[MAX_ARGS]; + void *values[MAX_ARGS]; + ffi_type ts1_type; + ffi_type *ts1_type_elements[4]; + + memset(&cif, 1, sizeof(cif)); + ts1_type.size = 0; + ts1_type.alignment = 0; + ts1_type.type = FFI_TYPE_STRUCT; + ts1_type.elements = ts1_type_elements; + ts1_type_elements[0] = &ffi_type_uchar; + ts1_type_elements[1] = &ffi_type_double; + ts1_type_elements[2] = &ffi_type_uint; + ts1_type_elements[3] = NULL; + + test_structure_1 ts1_arg; + /* This is a hack to get a properly aligned result buffer */ + test_structure_1 *ts1_result = + (test_structure_1 *) malloc (sizeof(test_structure_1)); + + args[0] = &ts1_type; + values[0] = &ts1_arg; + + /* Initialize the cif */ + CHECK(ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, + &ts1_type, args) == FFI_OK); + + ts1_arg.uc = '\x01'; + ts1_arg.d = 3.14159; + ts1_arg.ui = 555; + + ffi_call(&cif, FFI_FN(struct1), ts1_result, values); + + 
CHECK(ts1_result->ui == 556); + CHECK(ts1_result->d == 3.14159 - 1); + + free (ts1_result); + exit(0); +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/va_1.c b/Modules/_ctypes/libffi/testsuite/libffi.call/va_1.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/va_1.c @@ -0,0 +1,196 @@ +/* Area: ffi_call + Purpose: Test passing struct in variable argument lists. + Limitations: none. + PR: none. + Originator: ARM Ltd. */ + +/* { dg-do run } */ +/* { dg-output "" { xfail avr32*-*-* } } */ + +#include "ffitest.h" +#include <stdarg.h> + +struct small_tag +{ + unsigned char a; + unsigned char b; +}; + +struct large_tag +{ + unsigned a; + unsigned b; + unsigned c; + unsigned d; + unsigned e; +}; + +static int +test_fn (int n, ...) +{ + va_list ap; + struct small_tag s1; + struct small_tag s2; + struct large_tag l; + unsigned char uc; + signed char sc; + unsigned short us; + signed short ss; + unsigned int ui; + signed int si; + unsigned long ul; + signed long sl; + float f; + double d; + + va_start (ap, n); + s1 = va_arg (ap, struct small_tag); + l = va_arg (ap, struct large_tag); + s2 = va_arg (ap, struct small_tag); + + uc = va_arg (ap, unsigned); + sc = va_arg (ap, signed); + + us = va_arg (ap, unsigned); + ss = va_arg (ap, signed); + + ui = va_arg (ap, unsigned int); + si = va_arg (ap, signed int); + + ul = va_arg (ap, unsigned long); + sl = va_arg (ap, signed long); + + f = va_arg (ap, double); /* C standard promotes float->double + when anonymous */ + d = va_arg (ap, double); + + printf ("%u %u %u %u %u %u %u %u %u uc=%u sc=%d %u %d %u %d %lu %ld %f %f\n", + s1.a, s1.b, l.a, l.b, l.c, l.d, l.e, + s2.a, s2.b, + uc, sc, + us, ss, + ui, si, + ul, sl, + f, d); + va_end (ap); + return n + 1; +} + +int +main (void) +{ + ffi_cif cif; + void* args[15]; + ffi_type* arg_types[15]; + + ffi_type s_type; + ffi_type *s_type_elements[3]; + + ffi_type l_type; + ffi_type *l_type_elements[6]; + + struct small_tag s1; + struct small_tag s2; +
struct large_tag l1; + + int n; + int res; + + unsigned char uc; + signed char sc; + unsigned short us; + signed short ss; + unsigned int ui; + signed int si; + unsigned long ul; + signed long sl; + double d1; + double f1; + + s_type.size = 0; + s_type.alignment = 0; + s_type.type = FFI_TYPE_STRUCT; + s_type.elements = s_type_elements; + + s_type_elements[0] = &ffi_type_uchar; + s_type_elements[1] = &ffi_type_uchar; + s_type_elements[2] = NULL; + + l_type.size = 0; + l_type.alignment = 0; + l_type.type = FFI_TYPE_STRUCT; + l_type.elements = l_type_elements; + + l_type_elements[0] = &ffi_type_uint; + l_type_elements[1] = &ffi_type_uint; + l_type_elements[2] = &ffi_type_uint; + l_type_elements[3] = &ffi_type_uint; + l_type_elements[4] = &ffi_type_uint; + l_type_elements[5] = NULL; + + arg_types[0] = &ffi_type_sint; + arg_types[1] = &s_type; + arg_types[2] = &l_type; + arg_types[3] = &s_type; + arg_types[4] = &ffi_type_uchar; + arg_types[5] = &ffi_type_schar; + arg_types[6] = &ffi_type_ushort; + arg_types[7] = &ffi_type_sshort; + arg_types[8] = &ffi_type_uint; + arg_types[9] = &ffi_type_sint; + arg_types[10] = &ffi_type_ulong; + arg_types[11] = &ffi_type_slong; + arg_types[12] = &ffi_type_double; + arg_types[13] = &ffi_type_double; + arg_types[14] = NULL; + + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 14, &ffi_type_sint, arg_types) == FFI_OK); + + s1.a = 5; + s1.b = 6; + + l1.a = 10; + l1.b = 11; + l1.c = 12; + l1.d = 13; + l1.e = 14; + + s2.a = 7; + s2.b = 8; + + n = 41; + + uc = 9; + sc = 10; + us = 11; + ss = 12; + ui = 13; + si = 14; + ul = 15; + sl = 16; + f1 = 2.12; + d1 = 3.13; + + args[0] = &n; + args[1] = &s1; + args[2] = &l1; + args[3] = &s2; + args[4] = &uc; + args[5] = ≻ + args[6] = &us; + args[7] = &ss; + args[8] = &ui; + args[9] = &si; + args[10] = &ul; + args[11] = &sl; + args[12] = &f1; + args[13] = &d1; + args[14] = NULL; + + ffi_call(&cif, FFI_FN(test_fn), &res, args); + /* { dg-output "5 6 10 11 12 13 14 7 8 uc=9 sc=10 11 12 13 14 15 16 
2.120000 3.130000" } */ + printf("res: %d\n", (int) res); + /* { dg-output "\nres: 42" } */ + + return 0; +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct1.c b/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct1.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct1.c @@ -0,0 +1,121 @@ +/* Area: ffi_call + Purpose: Test passing struct in variable argument lists. + Limitations: none. + PR: none. + Originator: ARM Ltd. */ + +/* { dg-do run } */ +/* { dg-output "" { xfail avr32*-*-* } } */ + +#include "ffitest.h" +#include <stdarg.h> + +struct small_tag +{ + unsigned char a; + unsigned char b; +}; + +struct large_tag +{ + unsigned a; + unsigned b; + unsigned c; + unsigned d; + unsigned e; +}; + +static int +test_fn (int n, ...) +{ + va_list ap; + struct small_tag s1; + struct small_tag s2; + struct large_tag l; + + va_start (ap, n); + s1 = va_arg (ap, struct small_tag); + l = va_arg (ap, struct large_tag); + s2 = va_arg (ap, struct small_tag); + printf ("%u %u %u %u %u %u %u %u %u\n", s1.a, s1.b, l.a, l.b, l.c, l.d, l.e, + s2.a, s2.b); + va_end (ap); + return n + 1; +} + +int +main (void) +{ + ffi_cif cif; + void* args[5]; + ffi_type* arg_types[5]; + + ffi_type s_type; + ffi_type *s_type_elements[3]; + + ffi_type l_type; + ffi_type *l_type_elements[6]; + + struct small_tag s1; + struct small_tag s2; + struct large_tag l1; + + int n; + int res; + + s_type.size = 0; + s_type.alignment = 0; + s_type.type = FFI_TYPE_STRUCT; + s_type.elements = s_type_elements; + + s_type_elements[0] = &ffi_type_uchar; + s_type_elements[1] = &ffi_type_uchar; + s_type_elements[2] = NULL; + + l_type.size = 0; + l_type.alignment = 0; + l_type.type = FFI_TYPE_STRUCT; + l_type.elements = l_type_elements; + + l_type_elements[0] = &ffi_type_uint; + l_type_elements[1] = &ffi_type_uint; + l_type_elements[2] = &ffi_type_uint; + l_type_elements[3] = &ffi_type_uint; + l_type_elements[4] = &ffi_type_uint; + l_type_elements[5] = NULL; +
arg_types[0] = &ffi_type_sint; + arg_types[1] = &s_type; + arg_types[2] = &l_type; + arg_types[3] = &s_type; + arg_types[4] = NULL; + + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 4, &ffi_type_sint, arg_types) == FFI_OK); + + s1.a = 5; + s1.b = 6; + + l1.a = 10; + l1.b = 11; + l1.c = 12; + l1.d = 13; + l1.e = 14; + + s2.a = 7; + s2.b = 8; + + n = 41; + + args[0] = &n; + args[1] = &s1; + args[2] = &l1; + args[3] = &s2; + args[4] = NULL; + + ffi_call(&cif, FFI_FN(test_fn), &res, args); + /* { dg-output "5 6 10 11 12 13 14 7 8" } */ + printf("res: %d\n", (int) res); + /* { dg-output "\nres: 42" } */ + + return 0; +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct2.c b/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct2.c new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct2.c @@ -0,0 +1,123 @@ +/* Area: ffi_call + Purpose: Test passing struct in variable argument lists. + Limitations: none. + PR: none. + Originator: ARM Ltd. */ + +/* { dg-do run } */ +/* { dg-output "" { xfail avr32*-*-* } } */ + +#include "ffitest.h" +#include <stdarg.h> + +struct small_tag +{ + unsigned char a; + unsigned char b; +}; + +struct large_tag +{ + unsigned a; + unsigned b; + unsigned c; + unsigned d; + unsigned e; +}; + +static struct small_tag +test_fn (int n, ...)
+{ + va_list ap; + struct small_tag s1; + struct small_tag s2; + struct large_tag l; + + va_start (ap, n); + s1 = va_arg (ap, struct small_tag); + l = va_arg (ap, struct large_tag); + s2 = va_arg (ap, struct small_tag); + printf ("%u %u %u %u %u %u %u %u %u\n", s1.a, s1.b, l.a, l.b, l.c, l.d, l.e, + s2.a, s2.b); + va_end (ap); + s1.a += s2.a; + s1.b += s2.b; + return s1; +} + +int +main (void) +{ + ffi_cif cif; + void* args[5]; + ffi_type* arg_types[5]; + + ffi_type s_type; + ffi_type *s_type_elements[3]; + + ffi_type l_type; + ffi_type *l_type_elements[6]; + + struct small_tag s1; + struct small_tag s2; + struct large_tag l1; + + int n; + struct small_tag res; + + s_type.size = 0; + s_type.alignment = 0; + s_type.type = FFI_TYPE_STRUCT; + s_type.elements = s_type_elements; + + s_type_elements[0] = &ffi_type_uchar; + s_type_elements[1] = &ffi_type_uchar; + s_type_elements[2] = NULL; + + l_type.size = 0; + l_type.alignment = 0; + l_type.type = FFI_TYPE_STRUCT; + l_type.elements = l_type_elements; + + l_type_elements[0] = &ffi_type_uint; + l_type_elements[1] = &ffi_type_uint; + l_type_elements[2] = &ffi_type_uint; + l_type_elements[3] = &ffi_type_uint; + l_type_elements[4] = &ffi_type_uint; + l_type_elements[5] = NULL; + + arg_types[0] = &ffi_type_sint; + arg_types[1] = &s_type; + arg_types[2] = &l_type; + arg_types[3] = &s_type; + arg_types[4] = NULL; + + CHECK(ffi_prep_cif_var(&cif, FFI_DEFAULT_ABI, 1, 4, &s_type, arg_types) == FFI_OK); + + s1.a = 5; + s1.b = 6; + + l1.a = 10; + l1.b = 11; + l1.c = 12; + l1.d = 13; + l1.e = 14; + + s2.a = 7; + s2.b = 8; + + n = 41; + + args[0] = &n; + args[1] = &s1; + args[2] = &l1; + args[3] = &s2; + args[4] = NULL; + + ffi_call(&cif, FFI_FN(test_fn), &res, args); + /* { dg-output "5 6 10 11 12 13 14 7 8" } */ + printf("res: %d %d\n", res.a, res.b); + /* { dg-output "\nres: 12 14" } */ + + return 0; +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct3.c b/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct3.c 
new file mode 100644 --- /dev/null +++ b/Modules/_ctypes/libffi/testsuite/libffi.call/va_struct3.c @@ -0,0 +1,125 @@ +/* Area: ffi_call + Purpose: Test passing struct in variable argument lists. + Limitations: none. + PR: none. + Originator: ARM Ltd. */ + +/* { dg-do run } */ +/* { dg-output "" { xfail avr32*-*-* } } */ + +#include "ffitest.h" +#include <stdarg.h> + +struct small_tag +{ + unsigned char a; + unsigned char b; +}; + +struct large_tag +{ + unsigned a; + unsigned b; + unsigned c; + unsigned d; + unsigned e; +}; + +static struct large_tag +test_fn (int n, ...) +{ + va_list ap; + struct small_tag s1; + struct small_tag s2; + struct large_tag l; + + va_start (ap, n); + s1 = va_arg (ap, struct small_tag); + l = va_arg (ap, struct large_tag); + s2 = va_arg (ap, struct small_tag); + printf ("%u %u %u %u %u %u %u %u %u\n", s1.a, s1.b, l.a, l.b, l.c, l.d, l.e, + s2.a, s2.b); + va_end (ap); + l.a += s1.a; + l.b += s1.b; + l.c += s2.a; + l.d += s2.b; + return l; +} + +int +main (void) +{ + ffi_cif cif; + void* args[5]; + ffi_type* arg_types[5]; + + ffi_type s_type; + ffi_type *s_type_elements[3]; + + ffi_type l_type; + ffi_type *l_type_elements[6]; + + struct small_tag s1; + struct small_tag s2; + struct large_tag l1; + + int n; + struct large_tag res; + + s_type.size = 0; + s_type.alignment = 0; + s_type.type = FFI_TYPE_STRUCT; + s_type.elements = s_type_elements; + + s_type_elements[0] = &ffi_type_uchar; + s_type_elements[1] = &ffi_type_uchar; + s_type_elements[2] = NULL; + + l_type.size = 0; + l_type.alignment = 0; + l_type.type = FFI_TYPE_STRUCT; + l_type.elements = l_type_elements; + + l_type_elements[0] = &ffi_type_uint; + l_type_elements[1] = &ffi_type_uint; + l_type_elements[2] = &ffi_type_uint; + l_type_elements[3] = &ffi_type_uint; + l_type_elements[4] = &ffi_type_uint; + l_type_elements[5] = NULL; + + arg_types[0] = &ffi_type_sint; + arg_types[1] = &s_type; + arg_types[2] = &l_type; + arg_types[3] = &s_type; + arg_types[4] = NULL; + + CHECK(ffi_prep_cif_var(&cif,
FFI_DEFAULT_ABI, 1, 4, &l_type, arg_types) == FFI_OK); + + s1.a = 5; + s1.b = 6; + + l1.a = 10; + l1.b = 11; + l1.c = 12; + l1.d = 13; + l1.e = 14; + + s2.a = 7; + s2.b = 8; + + n = 41; + + args[0] = &n; + args[1] = &s1; + args[2] = &l1; + args[3] = &s2; + args[4] = NULL; + + ffi_call(&cif, FFI_FN(test_fn), &res, args); + /* { dg-output "5 6 10 11 12 13 14 7 8" } */ + printf("res: %d %d %d %d %d\n", res.a, res.b, res.c, res.d, res.e); + /* { dg-output "\nres: 15 17 19 21 14" } */ + + return 0; +} diff --git a/Modules/_ctypes/libffi/testsuite/libffi.special/ffitestcxx.h b/Modules/_ctypes/libffi/testsuite/libffi.special/ffitestcxx.h --- a/Modules/_ctypes/libffi/testsuite/libffi.special/ffitestcxx.h +++ b/Modules/_ctypes/libffi/testsuite/libffi.special/ffitestcxx.h @@ -53,44 +53,3 @@ #define PRIuLL "llu" #endif -#ifdef USING_MMAP -static inline void * -allocate_mmap (size_t size) -{ - void *page; -#if defined (HAVE_MMAP_DEV_ZERO) - static int dev_zero_fd = -1; -#endif - -#ifdef HAVE_MMAP_DEV_ZERO - if (dev_zero_fd == -1) - { - dev_zero_fd = open ("/dev/zero", O_RDONLY); - if (dev_zero_fd == -1) - { - perror ("open /dev/zero: %m"); - exit (1); - } - } -#endif - - -#ifdef HAVE_MMAP_ANON - page = mmap (NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC, - MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); -#endif -#ifdef HAVE_MMAP_DEV_ZERO - page = mmap (NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC, - MAP_PRIVATE, dev_zero_fd, 0); -#endif - - if (page == (char *) MAP_FAILED) - { - perror ("virtual memory exhausted"); - exit (1); - } - - return page; -} - -#endif diff --git a/Modules/_ctypes/libffi/testsuite/libffi.special/special.exp b/Modules/_ctypes/libffi/testsuite/libffi.special/special.exp --- a/Modules/_ctypes/libffi/testsuite/libffi.special/special.exp +++ b/Modules/_ctypes/libffi/testsuite/libffi.special/special.exp @@ -23,10 +23,14 @@ set cxx_options " -shared-libgcc -lstdc++" -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-O0 -W -Wall" -dg-runtest 
[lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-O2" -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-O3" -dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-Os" +if { [string match $using_gcc "yes"] } { + + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-O0 -W -Wall" + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-O2" + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-O3" + dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.cc]] $cxx_options "-Os" + +} dg-finish diff --git a/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest.cc b/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest.cc --- a/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest.cc +++ b/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest.cc @@ -5,6 +5,7 @@ Originator: Jeff Sturm */ /* { dg-do run } */ + #include "ffitestcxx.h" #if defined HAVE_STDINT_H diff --git a/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest_ffi_call.cc b/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest_ffi_call.cc --- a/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest_ffi_call.cc +++ b/Modules/_ctypes/libffi/testsuite/libffi.special/unwindtest_ffi_call.cc @@ -5,6 +5,7 @@ Originator: Andreas Tobler 20061213 */ /* { dg-do run } */ + #include "ffitestcxx.h" static int checking(int a __UNUSED__, short b __UNUSED__, diff --git a/Modules/_ctypes/libffi/texinfo.tex b/Modules/_ctypes/libffi/texinfo.tex --- a/Modules/_ctypes/libffi/texinfo.tex +++ b/Modules/_ctypes/libffi/texinfo.tex @@ -1,18 +1,18 @@ % texinfo.tex -- TeX macros to handle Texinfo files. -% +% % Load plain if necessary, i.e., if running under initex. 
\expandafter\ifx\csname fmtname\endcsname\relax\input plain\fi % -\def\texinfoversion{2005-07-05.19} -% -% Copyright (C) 1985, 1986, 1988, 1990, 1991, 1992, 1993, 1994, 1995, -% 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software -% Foundation, Inc. -% -% This texinfo.tex file is free software; you can redistribute it and/or +\def\texinfoversion{2012-06-05.14} +% +% Copyright 1985, 1986, 1988, 1990, 1991, 1992, 1993, 1994, 1995, +% 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, +% 2007, 2008, 2009, 2010, 2011, 2012 Free Software Foundation, Inc. +% +% This texinfo.tex file is free software: you can redistribute it and/or % modify it under the terms of the GNU General Public License as -% published by the Free Software Foundation; either version 2, or (at -% your option) any later version. +% published by the Free Software Foundation, either version 3 of the +% License, or (at your option) any later version. % % This texinfo.tex file is distributed in the hope that it will be % useful, but WITHOUT ANY WARRANTY; without even the implied warranty @@ -20,9 +20,7 @@ % General Public License for more details. % % You should have received a copy of the GNU General Public License -% along with this texinfo.tex file; see the file COPYING. If not, write -% to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, -% Boston, MA 02110-1301, USA. +% along with this program. If not, see <http://www.gnu.org/licenses/>. % % As a special exception, when this file is read by TeX when processing % a Texinfo source document, you may use the result without @@ -30,9 +28,9 @@ % % Please try the latest version of texinfo.tex before submitting bug % reports; you can get the latest version from: -% http://www.gnu.org/software/texinfo/ (the Texinfo home page), or -% ftp://tug.org/tex/texinfo.tex -% (and all CTAN mirrors, see http://www.ctan.org).
+% http://ftp.gnu.org/gnu/texinfo/ (the Texinfo release area), or +% http://ftpmirror.gnu.org/texinfo/ (same, via a mirror), or +% http://www.gnu.org/software/texinfo/ (the Texinfo home page) % The texinfo.tex in any given distribution could well be out % of date, so if that's what you're using, please check. % @@ -67,7 +65,6 @@ \everyjob{\message{[Texinfo version \texinfoversion]}% \catcode`+=\active \catcode`\_=\active} -\message{Basics,} \chardef\other=12 % We never want plain's \outer definition of \+ in Texinfo. @@ -95,10 +92,13 @@ \let\ptexnewwrite\newwrite \let\ptexnoindent=\noindent \let\ptexplus=+ +\let\ptexraggedright=\raggedright \let\ptexrbrace=\} \let\ptexslash=\/ \let\ptexstar=\* \let\ptext=\t +\let\ptextop=\top +{\catcode`\'=\active \global\let\ptexquoteright'}% active in plain's math mode % If this character appears in an error message or help string, it % starts a new line in the output. @@ -116,10 +116,11 @@ % Set up fixed words for English if not already set. \ifx\putwordAppendix\undefined \gdef\putwordAppendix{Appendix}\fi \ifx\putwordChapter\undefined \gdef\putwordChapter{Chapter}\fi +\ifx\putworderror\undefined \gdef\putworderror{error}\fi \ifx\putwordfile\undefined \gdef\putwordfile{file}\fi \ifx\putwordin\undefined \gdef\putwordin{in}\fi -\ifx\putwordIndexIsEmpty\undefined \gdef\putwordIndexIsEmpty{(Index is empty)}\fi -\ifx\putwordIndexNonexistent\undefined \gdef\putwordIndexNonexistent{(Index is nonexistent)}\fi +\ifx\putwordIndexIsEmpty\undefined \gdef\putwordIndexIsEmpty{(Index is empty)}\fi +\ifx\putwordIndexNonexistent\undefined \gdef\putwordIndexNonexistent{(Index is nonexistent)}\fi \ifx\putwordInfo\undefined \gdef\putwordInfo{Info}\fi \ifx\putwordInstanceVariableof\undefined \gdef\putwordInstanceVariableof{Instance Variable of}\fi \ifx\putwordMethodon\undefined \gdef\putwordMethodon{Method on}\fi @@ -153,28 +154,25 @@ \ifx\putwordDefopt\undefined \gdef\putwordDefopt{User Option}\fi \ifx\putwordDeffunc\undefined 
\gdef\putwordDeffunc{Function}\fi -% In some macros, we cannot use the `\? notation---the left quote is -% in some cases the escape char. -\chardef\backChar = `\\ +% Since the category of space is not known, we have to be careful. +\chardef\spacecat = 10 +\def\spaceisspace{\catcode`\ =\spacecat} + +% sometimes characters are active, so we need control sequences. +\chardef\ampChar = `\& \chardef\colonChar = `\: \chardef\commaChar = `\, +\chardef\dashChar = `\- \chardef\dotChar = `\. \chardef\exclamChar= `\! -\chardef\plusChar = `\+ +\chardef\hashChar = `\# +\chardef\lquoteChar= `\` \chardef\questChar = `\? +\chardef\rquoteChar= `\' \chardef\semiChar = `\; +\chardef\slashChar = `\/ \chardef\underChar = `\_ -\chardef\spaceChar = `\ % -\chardef\spacecat = 10 -\def\spaceisspace{\catcode\spaceChar=\spacecat} - -{% for help with debugging. - % example usage: \expandafter\show\activebackslash - \catcode`\! = 0 \catcode`\\ = \active - !global!def!activebackslash{\} -} - % Ignore a token. % \def\gobble#1{} @@ -203,36 +201,7 @@ % that mark overfull boxes (in case you have decided % that the text looks ok even though it passes the margin). % -\def\finalout{\overfullrule=0pt} - -% @| inserts a changebar to the left of the current line. It should -% surround any changed text. This approach does *not* work if the -% change spans more than two lines of output. To handle that, we would -% have adopt a much more difficult approach (putting marks into the main -% vertical list for the beginning and end of each change). -% -\def\|{% - % \vadjust can only be used in horizontal mode. - \leavevmode - % - % Append this vertical mode material after the current line in the output. - \vadjust{% - % We want to insert a rule with the height and depth of the current - % leading; that is exactly what \strutbox is supposed to record. - \vskip-\baselineskip - % - % \vadjust-items are inserted at the left edge of the type. So - % the \llap here moves out into the left-hand margin. 
- \llap{% - % - % For a thicker or thinner bar, change the `1pt'. - \vrule height\baselineskip width1pt - % - % This is the space between the bar and the text. - \hskip 12pt - }% - }% -} +\def\finalout{\overfullrule=0pt } % Sometimes it is convenient to have everything in the transcript file % and nothing on the terminal. We don't just call \tracingall here, @@ -250,7 +219,7 @@ \tracingmacros2 \tracingrestores1 \showboxbreadth\maxdimen \showboxdepth\maxdimen - \ifx\eTeXversion\undefined\else % etex gives us more logging + \ifx\eTeXversion\thisisundefined\else % etex gives us more logging \tracingscantokens1 \tracingifs1 \tracinggroups1 @@ -261,6 +230,13 @@ \errorcontextlines16 }% +% @errormsg{MSG}. Do the index-like expansions on MSG, but if things +% aren't perfect, it's not the end of the world, being an error message, +% after all. +% +\def\errormsg{\begingroup \indexnofonts \doerrormsg} +\def\doerrormsg#1{\errmessage{#1}} + % add check for \lastpenalty to plain's definitions. If the last thing % we did was a \nobreak, we don't want to insert more space. % @@ -271,7 +247,6 @@ \def\bigbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\bigskipamount \removelastskip\penalty-200\bigskip\fi\fi} -% For @cropmarks command. % Do @cropmarks to get crop marks. % \newif\ifcropmarks @@ -285,6 +260,50 @@ \newdimen\cornerthick \cornerthick=.3pt \newdimen\topandbottommargin \topandbottommargin=.75in +% Output a mark which sets \thischapter, \thissection and \thiscolor. +% We dump everything together because we only have one kind of mark. +% This works because we only use \botmark / \topmark, not \firstmark. +% +% A mark contains a subexpression of the \ifcase ... \fi construct. +% \get*marks macros below extract the needed part using \ifcase. +% +% Another complication is to let the user choose whether \thischapter +% (\thissection) refers to the chapter (section) in effect at the top +% of a page, or that at the bottom of a page. 
The solution is +% described on page 260 of The TeXbook. It involves outputting two +% marks for the sectioning macros, one before the section break, and +% one after. I won't pretend I can describe this better than DEK... +\def\domark{% + \toks0=\expandafter{\lastchapterdefs}% + \toks2=\expandafter{\lastsectiondefs}% + \toks4=\expandafter{\prevchapterdefs}% + \toks6=\expandafter{\prevsectiondefs}% + \toks8=\expandafter{\lastcolordefs}% + \mark{% + \the\toks0 \the\toks2 + \noexpand\or \the\toks4 \the\toks6 + \noexpand\else \the\toks8 + }% +} +% \topmark doesn't work for the very first chapter (after the title +% page or the contents), so we use \firstmark there -- this gets us +% the mark with the chapter defs, unless the user sneaks in, e.g., +% @setcolor (or @url, or @link, etc.) between @contents and the very +% first @chapter. +\def\gettopheadingmarks{% + \ifcase0\topmark\fi + \ifx\thischapter\empty \ifcase0\firstmark\fi \fi +} +\def\getbottomheadingmarks{\ifcase1\botmark\fi} +\def\getcolormarks{\ifcase2\topmark\fi} + +% Avoid "undefined control sequence" errors. +\def\lastchapterdefs{} +\def\lastsectiondefs{} +\def\prevchapterdefs{} +\def\prevsectiondefs{} +\def\lastcolordefs{} + % Main output routine. \chardef\PAGE = 255 \output = {\onepageout{\pagecontents\PAGE}} @@ -302,7 +321,9 @@ % % Do this outside of the \shipout so @code etc. will be expanded in % the headline as they should be, not taken literally (outputting ''code). + \ifodd\pageno \getoddheadingmarks \else \getevenheadingmarks \fi \setbox\headlinebox = \vbox{\let\hsize=\pagewidth \makeheadline}% + \ifodd\pageno \getoddfootingmarks \else \getevenfootingmarks \fi \setbox\footlinebox = \vbox{\let\hsize=\pagewidth \makefootline}% % {% @@ -311,6 +332,13 @@ % before the \shipout runs. % \indexdummies % don't expand commands in the output. + \normalturnoffactive % \ in index entries must not stay \, e.g., if + % the page break happens to be in the middle of an example. 
+ % We don't want .vr (or whatever) entries like this: + % \entry{{\tt \indexbackslash }acronym}{32}{\code {\acronym}} + % "\acronym" won't work when it's read back in; + % it needs to be + % {\code {{\tt \backslashcurfont }acronym} \shipout\vbox{% % Do this early so pdf references go to the beginning of the page. \ifpdfmakepagedest \pdfdest name{\the\pageno} xyz\fi @@ -338,9 +366,9 @@ \pagebody{#1}% \ifdim\ht\footlinebox > 0pt % Only leave this space if the footline is nonempty. - % (We lessened \vsize for it in \oddfootingxxx.) + % (We lessened \vsize for it in \oddfootingyyy.) % The \baselineskip=24pt in plain's \makefootline has no effect. - \vskip 2\baselineskip + \vskip 24pt \unvbox\footlinebox \fi % @@ -374,7 +402,7 @@ % marginal hacks, juha@viisa.uucp (Juha Takala) \ifvoid\margin\else % marginal info is present \rlap{\kern\hsize\vbox to\z@{\kern1pt\box\margin \vss}}\fi -\dimen@=\dp#1 \unvbox#1 +\dimen@=\dp#1\relax \unvbox#1\relax \ifvoid\footins\else\vskip\skip\footins\footnoterule \unvbox\footins\fi \ifr@ggedbottom \kern-\dimen@ \vfil \fi} } @@ -396,7 +424,7 @@ % \def\parsearg{\parseargusing{}} \def\parseargusing#1#2{% - \def\next{#2}% + \def\argtorun{#2}% \begingroup \obeylines \spaceisspace @@ -415,7 +443,7 @@ \def\argremovecomment#1\comment#2\ArgTerm{\argremovec #1\c\ArgTerm} \def\argremovec#1\c#2\ArgTerm{\argcheckspaces#1\^^M\ArgTerm} -% Each occurrence of `\^^M' or `\^^M' is replaced by a single space. % % \argremovec might leave us with trailing space, e.g., % @end itemize @c foo @@ -427,8 +455,7 @@ \def\argcheckspacesY#1\^^M#2\^^M#3\ArgTerm{% \def\temp{#3}% \ifx\temp\empty - % We cannot use \next here, as it holds the macro to run; - % thus we reuse \temp.
+ % Do not use \next, perhaps the caller of \parsearg uses it; reuse \temp: \let\temp\finishparsearg \else \let\temp\argcheckspaces @@ -440,14 +467,14 @@ % If a _delimited_ argument is enclosed in braces, they get stripped; so % to get _exactly_ the rest of the line, we had to prevent such situation. % We prepended an \empty token at the very beginning and we expand it now, -% just before passing the control to \next. -% (Similarily, we have to think about #3 of \argcheckspacesY above: it is +% just before passing the control to \argtorun. +% (Similarly, we have to think about #3 of \argcheckspacesY above: it is % either the null string, or it ends with \^^M---thus there is no danger % that a pair of braces would be stripped. % % But first, we have to remove the trailing space token. % -\def\finishparsearg#1 \ArgTerm{\expandafter\next\expandafter{#1}} +\def\finishparsearg#1 \ArgTerm{\expandafter\argtorun\expandafter{#1}} % \parseargdef\foo{...} % is roughly equivalent to @@ -498,12 +525,12 @@ % used to check whether the current environment is the one expected. % % Non-false conditionals (@iftex, @ifset) don't fit into this, so they -% are not treated as enviroments; they don't open a group. (The +% are not treated as environments; they don't open a group. (The % implementation of @end takes care not to call \endgroup in this % special case.) 
-% At runtime, environments start with this: +% At run-time, environments start with this: \def\startenvironment#1{\begingroup\def\thisenv{#1}} % initialize \let\thisenv\empty @@ -521,7 +548,7 @@ \fi } -% Evironment mismatch, #1 expected: +% Environment mismatch, #1 expected: \def\badenverr{% \errhelp = \EMsimple \errmessage{This command can appear only \inenvironment\temp, @@ -529,7 +556,7 @@ } \def\inenvironment#1{% \ifx#1\empty - out of any environment% + outside of any environment% \else in environment \expandafter\string#1% \fi @@ -541,7 +568,7 @@ \parseargdef\end{% \if 1\csname iscond.#1\endcsname \else - % The general wording of \badenverr may not be ideal, but... --kasal, 06nov03 + % The general wording of \badenverr may not be ideal. \expandafter\checkenv\csname#1\endcsname \csname E#1\endcsname \endgroup @@ -551,85 +578,6 @@ \newhelp\EMsimple{Press RETURN to continue.} -%% Simple single-character @ commands - -% @@ prints an @ -% Kludge this until the fonts are right (grr). -\def\@{{\tt\char64}} - -% This is turned off because it was never documented -% and you can use @w{...} around a quote to suppress ligatures. -%% Define @` and @' to be the same as ` and ' -%% but suppressing ligatures. -%\def\`{{`}} -%\def\'{{'}} - -% Used to generate quoted braces. -\def\mylbrace {{\tt\char123}} -\def\myrbrace {{\tt\char125}} -\let\{=\mylbrace -\let\}=\myrbrace -\begingroup - % Definitions to produce \{ and \} commands for indices, - % and @{ and @} for the aux/toc files. - \catcode`\{ = \other \catcode`\} = \other - \catcode`\[ = 1 \catcode`\] = 2 - \catcode`\! = 0 \catcode`\\ = \other - !gdef!lbracecmd[\{]% - !gdef!rbracecmd[\}]% - !gdef!lbraceatcmd[@{]% - !gdef!rbraceatcmd[@}]% -!endgroup - -% @comma{} to avoid , parsing problems. -\let\comma = , - -% Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent -% Others are defined by plain TeX: @` @' @" @^ @~ @= @u @v @H. -\let\, = \c -\let\dotaccent = \. 
-\def\ringaccent#1{{\accent23 #1}} -\let\tieaccent = \t -\let\ubaraccent = \b -\let\udotaccent = \d - -% Other special characters: @questiondown @exclamdown @ordf @ordm -% Plain TeX defines: @AA @AE @O @OE @L (plus lowercase versions) @ss. -\def\questiondown{?`} -\def\exclamdown{!`} -\def\ordf{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{a}}} -\def\ordm{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{o}}} - -% Dotless i and dotless j, used for accents. -\def\imacro{i} -\def\jmacro{j} -\def\dotless#1{% - \def\temp{#1}% - \ifx\temp\imacro \ptexi - \else\ifx\temp\jmacro \j - \else \errmessage{@dotless can be used only with i or j}% - \fi\fi -} - -% The \TeX{} logo, as in plain, but resetting the spacing so that a -% period following counts as ending a sentence. (Idea found in latex.) -% -\edef\TeX{\TeX \spacefactor=1000 } - -% @LaTeX{} logo. Not quite the same results as the definition in -% latex.ltx, since we use a different font for the raised A; it's most -% convenient for us to use an explicitly smaller font, rather than using -% the \scriptstyle font (since we don't reset \scriptstyle and -% \scriptscriptstyle). -% -\def\LaTeX{% - L\kern-.36em - {\setbox0=\hbox{T}% - \vbox to \ht0{\hbox{\selectfonts\lllsize A}\vss}}% - \kern-.15em - \TeX -} - % Be sure we're in horizontal mode when doing a tie, since we make space % equivalent to this in @example-like environments. Otherwise, a space % at the beginning of a line will start with \penalty -- and @@ -661,7 +609,7 @@ \def\?{?\spacefactor=\endofsentencespacefactor\space} % @frenchspacing on|off says whether to put extra space after punctuation. 
-% +% \def\onword{on} \def\offword{off} % @@ -671,7 +619,7 @@ \else\ifx\temp\offword \plainnonfrenchspacing \else \errhelp = \EMsimple - \errmessage{Unknown @frenchspacing option `\temp', must be on/off}% + \errmessage{Unknown @frenchspacing option `\temp', must be on|off}% \fi\fi } @@ -753,15 +701,6 @@ \newdimen\mil \mil=0.001in -% Old definition--didn't work. -%\parseargdef\need{\par % -%% This method tries to make TeX break the page naturally -%% if the depth of the box does not fit. -%{\baselineskip=0pt% -%\vtop to #1\mil{\vfil}\kern -#1\mil\nobreak -%\prevdepth=-1000pt -%}} - \parseargdef\need{% % Ensure vertical mode, so we don't make a big box in the middle of a % paragraph. @@ -825,7 +764,7 @@ % @inmargin{WHICH}{TEXT} puts TEXT in the WHICH margin next to the current % paragraph. For more general purposes, use the \margin insertion -% class. WHICH is `l' or `r'. +% class. WHICH is `l' or `r'. Not documented, written for gawk manual. % \newskip\inmarginspacing \inmarginspacing=1cm \def\strutdepth{\dp\strutbox} @@ -872,15 +811,51 @@ \temp } -% @include file insert text of that file as input. +% @| inserts a changebar to the left of the current line. It should +% surround any changed text. This approach does *not* work if the +% change spans more than two lines of output. To handle that, we would +% have adopt a much more difficult approach (putting marks into the main +% vertical list for the beginning and end of each change). This command +% is not documented, not supported, and doesn't work. +% +\def\|{% + % \vadjust can only be used in horizontal mode. + \leavevmode + % + % Append this vertical mode material after the current line in the output. + \vadjust{% + % We want to insert a rule with the height and depth of the current + % leading; that is exactly what \strutbox is supposed to record. + \vskip-\baselineskip + % + % \vadjust-items are inserted at the left edge of the type. So + % the \llap here moves out into the left-hand margin. 
+ \llap{% + % + % For a thicker or thinner bar, change the `1pt'. + \vrule height\baselineskip width1pt + % + % This is the space between the bar and the text. + \hskip 12pt + }% + }% +} + +% @include FILE -- \input text of FILE. % \def\include{\parseargusing\filenamecatcodes\includezzz} \def\includezzz#1{% \pushthisfilestack \def\thisfile{#1}% {% - \makevalueexpandable - \def\temp{\input #1 }% + \makevalueexpandable % we want to expand any @value in FILE. + \turnoffactive % and allow special characters in the expansion + \indexnofonts % Allow `@@' and other weird things in file names. + \wlog{texinfo.tex: doing @include of #1^^J}% + \edef\temp{\noexpand\input #1 }% + % + % This trickery is to read FILE outside of a group, in case it makes + % definitions, etc. \expandafter }\temp \popthisfilestack @@ -895,6 +870,8 @@ \catcode`>=\other \catcode`+=\other \catcode`-=\other + \catcode`\`=\other + \catcode`\'=\other } \def\pushthisfilestack{% @@ -910,7 +887,7 @@ \def\popthisfilestack{\errthisfilestackempty} \def\errthisfilestackempty{\errmessage{Internal error: the stack of filenames is empty.}} - +% \def\thisfile{} % @center line @@ -918,36 +895,46 @@ % \parseargdef\center{% \ifhmode - \let\next\centerH + \let\centersub\centerH \else - \let\next\centerV + \let\centersub\centerV \fi - \next{\hfil \ignorespaces#1\unskip \hfil}% -} -\def\centerH#1{% - {% - \hfil\break - \advance\hsize by -\leftskip - \advance\hsize by -\rightskip - \line{#1}% - \break - }% -} -\def\centerV#1{\line{\kern\leftskip #1\kern\rightskip}} + \centersub{\hfil \ignorespaces#1\unskip \hfil}% + \let\centersub\relax % don't let the definition persist, just in case +} +\def\centerH#1{{% + \hfil\break + \advance\hsize by -\leftskip + \advance\hsize by -\rightskip + \line{#1}% + \break +}} +% +\newcount\centerpenalty +\def\centerV#1{% + % The idea here is the same as in \startdefun, \cartouche, etc.: if + % @center is the first thing after a section heading, we need to wipe + % out the negative parskip 
inserted by \sectionheading, but still + % prevent a page break here. + \centerpenalty = \lastpenalty + \ifnum\centerpenalty>10000 \vskip\parskip \fi + \ifnum\centerpenalty>9999 \penalty\centerpenalty \fi + \line{\kern\leftskip #1\kern\rightskip}% +} % @sp n outputs n lines of vertical space - +% \parseargdef\sp{\vskip #1\baselineskip} % @comment ...line which is ignored... % @c is the same as @comment % @ignore ... @end ignore is another way to write a comment - +% \def\comment{\begingroup \catcode`\^^M=\other% \catcode`\@=\other \catcode`\{=\other \catcode`\}=\other% \commentxxx} {\catcode`\^^M=\other \gdef\commentxxx#1^^M{\endgroup}} - +% \let\c=\comment % @paragraphindent NCHARS @@ -1040,86 +1027,6 @@ } -% @asis just yields its argument. Used with @table, for example. -% -\def\asis#1{#1} - -% @math outputs its argument in math mode. -% -% One complication: _ usually means subscripts, but it could also mean -% an actual _ character, as in @math{@var{some_variable} + 1}. So make -% _ active, and distinguish by seeing if the current family is \slfam, -% which is what @var uses. -{ - \catcode\underChar = \active - \gdef\mathunderscore{% - \catcode\underChar=\active - \def_{\ifnum\fam=\slfam \_\else\sb\fi}% - } -} -% Another complication: we want \\ (and @\) to output a \ character. -% FYI, plain.tex uses \\ as a temporary control sequence (why?), but -% this is not advertised and we don't care. Texinfo does not -% otherwise define @\. -% -% The \mathchar is class=0=ordinary, family=7=ttfam, position=5C=\. -\def\mathbackslash{\ifnum\fam=\ttfam \mathchar"075C \else\backslash \fi} -% -\def\math{% - \tex - \mathunderscore - \let\\ = \mathbackslash - \mathactive - $\finishmath -} -\def\finishmath#1{#1$\endgroup} % Close the group opened by \tex. - -% Some active characters (such as <) are spaced differently in math. -% We have to reset their definitions in case the @math was an argument -% to a command which sets the catcodes (such as @item or @section). 
-% -{ - \catcode`^ = \active - \catcode`< = \active - \catcode`> = \active - \catcode`+ = \active - \gdef\mathactive{% - \let^ = \ptexhat - \let< = \ptexless - \let> = \ptexgtr - \let+ = \ptexplus - } -} - -% @bullet and @minus need the same treatment as @math, just above. -\def\bullet{$\ptexbullet$} -\def\minus{$-$} - -% @dots{} outputs an ellipsis using the current font. -% We do .5em per period so that it has the same spacing in a typewriter -% font as three actual period characters. -% -\def\dots{% - \leavevmode - \hbox to 1.5em{% - \hskip 0pt plus 0.25fil - .\hfil.\hfil.% - \hskip 0pt plus 0.5fil - }% -} - -% @enddots{} is an end-of-sentence ellipsis. -% -\def\enddots{% - \dots - \spacefactor=\endofsentencespacefactor -} - -% @comma{} is so commas can be inserted into text without messing up -% Texinfo's parsing. -% -\let\comma = , - % @refill is a no-op. \let\refill=\relax @@ -1184,9 +1091,8 @@ \newif\ifpdfmakepagedest % when pdftex is run in dvi mode, \pdfoutput is defined (so \pdfoutput=1 -% can be set). So we test for \relax and 0 as well as \undefined, -% borrowed from ifpdf.sty. -\ifx\pdfoutput\undefined +% can be set). So we test for \relax and 0 as well as being undefined. +\ifx\pdfoutput\thisisundefined \else \ifx\pdfoutput\relax \else @@ -1197,99 +1103,156 @@ \fi \fi -% PDF uses PostScript string constants for the names of xref targets, to +% PDF uses PostScript string constants for the names of xref targets, % for display in the outlines, and in other places. Thus, we have to % double any backslashes. Otherwise, a name like "\node" will be % interpreted as a newline (\n), followed by o, d, e. Not good. -% http://www.ntg.nl/pipermail/ntg-pdftex/2004-July/000654.html -% (and related messages, the final outcome is that it is up to the TeX -% user to double the backslashes and otherwise make the string valid, so -% that's we do). - -% double active backslashes. 
% -{\catcode`\@=0 \catcode`\\=\active - @gdef@activebackslash{@catcode`@\=@active @otherbackslash} - @gdef@activebackslashdouble{% - @catcode@backChar=@active - @let\=@doublebackslash} - -% To handle parens, we must adopt a different approach, since parens are -% not active characters. hyperref.dtx (which has the same problem as -% us) handles it with this amazing macro to replace tokens. I've -% tinkered with it a little for texinfo, but it's definitely from there. -% -% #1 is the tokens to replace. -% #2 is the replacement. -% #3 is the control sequence with the string. -% -\def\HyPsdSubst#1#2#3{% - \def\HyPsdReplace##1#1##2\END{% - ##1% - \ifx\\##2\\% - \else - #2% - \HyReturnAfterFi{% - \HyPsdReplace##2\END +% See http://www.ntg.nl/pipermail/ntg-pdftex/2004-July/000654.html and +% related messages. The final outcome is that it is up to the TeX user +% to double the backslashes and otherwise make the string valid, so +% that's what we do. pdftex 1.30.0 (ca.2005) introduced a primitive to +% do this reliably, so we use it. + +% #1 is a control sequence in which to do the replacements, +% which we \xdef. +\def\txiescapepdf#1{% + \ifx\pdfescapestring\thisisundefined + % No primitive available; should we give a warning or log? + % Many times it won't matter. + \else + % The expandable \pdfescapestring primitive escapes parentheses, + % backslashes, and other special chars. + \xdef#1{\pdfescapestring{#1}}% + \fi +} + +\newhelp\nopdfimagehelp{Texinfo supports .png, .jpg, .jpeg, and .pdf images +with PDF output, and none of those formats could be found. (.eps cannot +be supported due to the design of the PDF format; use regular TeX (DVI +output) for that.)} + +\ifpdf + % + % Color manipulation macros based on pdfcolor.tex, + % except using rgb instead of cmyk; the latter is said to render as a + % very dark gray on-screen and a very dark halftone in print, instead + % of actual black.
+ \def\rgbDarkRed{0.50 0.09 0.12} + \def\rgbBlack{0 0 0} + % + % k sets the color for filling (usual text, etc.); + % K sets the color for stroking (thin rules, e.g., normal _'s). + \def\pdfsetcolor#1{\pdfliteral{#1 rg #1 RG}} + % + % Set color, and create a mark which defines \thiscolor accordingly, + % so that \makeheadline knows which color to restore. + \def\setcolor#1{% + \xdef\lastcolordefs{\gdef\noexpand\thiscolor{#1}}% + \domark + \pdfsetcolor{#1}% + } + % + \def\maincolor{\rgbBlack} + \pdfsetcolor{\maincolor} + \edef\thiscolor{\maincolor} + \def\lastcolordefs{} + % + \def\makefootline{% + \baselineskip24pt + \line{\pdfsetcolor{\maincolor}\the\footline}% + } + % + \def\makeheadline{% + \vbox to 0pt{% + \vskip-22.5pt + \line{% + \vbox to8.5pt{}% + % Extract \thiscolor definition from the marks. + \getcolormarks + % Typeset the headline with \maincolor, then restore the color. + \pdfsetcolor{\maincolor}\the\headline\pdfsetcolor{\thiscolor}% }% - \fi - }% - \xdef#3{\expandafter\HyPsdReplace#3#1\END}% -} -\long\def\HyReturnAfterFi#1\fi{\fi#1} - -% #1 is a control sequence in which to do the replacements. -\def\backslashparens#1{% - \xdef#1{#1}% redefine it as its expansion; the definition is simply - % \lastnode when called from \setref -> \pdfmkdest. - \HyPsdSubst{(}{\backslashlparen}{#1}% - \HyPsdSubst{)}{\backslashrparen}{#1}% -} - -{\catcode\exclamChar = 0 \catcode\backChar = \other - !gdef!backslashlparen{\(}% - !gdef!backslashrparen{\)}% -} - -\ifpdf - \input pdfcolor - \pdfcatalog{/PageMode /UseOutlines}% + \vss + }% + \nointerlineskip + } + % + % + \pdfcatalog{/PageMode /UseOutlines} + % + % #1 is image name, #2 width (might be empty/whitespace), #3 height (ditto). 
\def\dopdfimage#1#2#3{% - \def\imagewidth{#2}% - \def\imageheight{#3}% - % without \immediate, pdftex seg faults when the same image is + \def\pdfimagewidth{#2}\setbox0 = \hbox{\ignorespaces #2}% + \def\pdfimageheight{#3}\setbox2 = \hbox{\ignorespaces #3}% + % + % pdftex (and the PDF format) support .pdf, .png, .jpg (among + % others). Let's try in that order, PDF first since if + % someone has a scalable image, presumably better to use that than a + % bitmap. + \let\pdfimgext=\empty + \begingroup + \openin 1 #1.pdf \ifeof 1 + \openin 1 #1.PDF \ifeof 1 + \openin 1 #1.png \ifeof 1 + \openin 1 #1.jpg \ifeof 1 + \openin 1 #1.jpeg \ifeof 1 + \openin 1 #1.JPG \ifeof 1 + \errhelp = \nopdfimagehelp + \errmessage{Could not find image file #1 for pdf}% + \else \gdef\pdfimgext{JPG}% + \fi + \else \gdef\pdfimgext{jpeg}% + \fi + \else \gdef\pdfimgext{jpg}% + \fi + \else \gdef\pdfimgext{png}% + \fi + \else \gdef\pdfimgext{PDF}% + \fi + \else \gdef\pdfimgext{pdf}% + \fi + \closein 1 + \endgroup + % + % without \immediate, ancient pdftex seg faults when the same image is % included twice. (Version 3.14159-pre-1.0-unofficial-20010704.) \ifnum\pdftexversion < 14 \immediate\pdfimage \else \immediate\pdfximage \fi - \ifx\empty\imagewidth\else width \imagewidth \fi - \ifx\empty\imageheight\else height \imageheight \fi + \ifdim \wd0 >0pt width \pdfimagewidth \fi + \ifdim \wd2 >0pt height \pdfimageheight \fi \ifnum\pdftexversion<13 - #1.pdf% + #1.\pdfimgext \else - {#1.pdf}% + {#1.\pdfimgext}% \fi \ifnum\pdftexversion < 14 \else \pdfrefximage \pdflastximage \fi} + % \def\pdfmkdest#1{{% % We have to set dummies so commands such as @code, and characters % such as \, aren't expanded when present in a section title. 
- \atdummies - \activebackslashdouble + \indexnofonts + \turnoffactive + \makevalueexpandable \def\pdfdestname{#1}% - \backslashparens\pdfdestname - \pdfdest name{\pdfdestname} xyz% - }}% + \txiescapepdf\pdfdestname + \safewhatsit{\pdfdest name{\pdfdestname} xyz}% + }} % % used to mark target names; must be expandable. - \def\pdfmkpgn#1{#1}% - % - \let\linkcolor = \Blue % was Cyan, but that seems light? - \def\endlink{\Black\pdfendlink} + \def\pdfmkpgn#1{#1} + % + % by default, use a color that is dark enough to print on paper as + % nearly black, but still distinguishable for online viewing. + \def\urlcolor{\rgbDarkRed} + \def\linkcolor{\rgbDarkRed} + \def\endlink{\setcolor{\maincolor}\pdfendlink} + % % Adding outlines to PDF; macros for calculating structure of outlines % come from Petr Olsak \def\expnumber#1{\expandafter\ifx\csname#1\endcsname\relax 0% @@ -1309,29 +1272,24 @@ % page number. We could generate a destination for the section % text in the case where a section has no node, but it doesn't % seem worth the trouble, since most documents are normally structured. - \def\pdfoutlinedest{#3}% + \edef\pdfoutlinedest{#3}% \ifx\pdfoutlinedest\empty \def\pdfoutlinedest{#4}% \else - % Doubled backslashes in the name. - {\activebackslashdouble \xdef\pdfoutlinedest{#3}% - \backslashparens\pdfoutlinedest}% + \txiescapepdf\pdfoutlinedest \fi % - % Also double the backslashes in the display string. - {\activebackslashdouble \xdef\pdfoutlinetext{#1}% - \backslashparens\pdfoutlinetext}% + % Also escape PDF chars in the display string. + \edef\pdfoutlinetext{#1}% + \txiescapepdf\pdfoutlinetext % \pdfoutline goto name{\pdfmkpgn{\pdfoutlinedest}}#2{\pdfoutlinetext}% } % \def\pdfmakeoutlines{% \begingroup - % Thanh's hack / proper braces in bookmarks - \edef\mylbrace{\iftrue \string{\else}\fi}\let\{=\mylbrace - \edef\myrbrace{\iffalse{\else\string}\fi}\let\}=\myrbrace - % % Read toc silently, to get counts of subentries for \pdfoutline. 
+ \def\partentry##1##2##3##4{}% ignore parts in the outlines \def\numchapentry##1##2##3##4{% \def\thischapnum{##2}% \def\thissecnum{0}% @@ -1385,35 +1343,63 @@ % Latin 2 (0xea) gets translated to a | character. Info from % Staszek Wawrykiewicz, 19 Jan 2004 04:09:24 +0100. % - % xx to do this right, we have to translate 8-bit characters to - % their "best" equivalent, based on the @documentencoding. Right - % now, I guess we'll just let the pdf reader have its way. + % To do this right, we have to translate 8-bit characters to + % their "best" equivalent, based on the @documentencoding. Too + % much work for too little return. Just use the ASCII equivalents + % we use for the index sort strings. + % \indexnofonts \setupdatafile - \activebackslash - \input \jobname.toc + % We can have normal brace characters in the PDF outlines, unlike + % Texinfo index files. So set that up. + \def\{{\lbracecharliteral}% + \def\}{\rbracecharliteral}% + \catcode`\\=\active \otherbackslash + \input \tocreadfilename \endgroup } + {\catcode`[=1 \catcode`]=2 + \catcode`{=\other \catcode`}=\other + \gdef\lbracecharliteral[{]% + \gdef\rbracecharliteral[}]% + ] % \def\skipspaces#1{\def\PP{#1}\def\D{|}% \ifx\PP\D\let\nextsp\relax \else\let\nextsp\skipspaces - \ifx\p\space\else\addtokens{\filename}{\PP}% - \advance\filenamelength by 1 - \fi + \addtokens{\filename}{\PP}% + \advance\filenamelength by 1 \fi \nextsp} - \def\getfilename#1{\filenamelength=0\expandafter\skipspaces#1|\relax} + \def\getfilename#1{% + \filenamelength=0 + % If we don't expand the argument now, \skipspaces will get + % snagged on things like "@value{foo}". + \edef\temp{#1}% + \expandafter\skipspaces\temp|\relax + } \ifnum\pdftexversion < 14 \let \startlink \pdfannotlink \else \let \startlink \pdfstartlink \fi + % make a live url in pdf output.
\def\pdfurl#1{% \begingroup - \normalturnoffactive\def\@{@}% + % it seems we really need yet another set of dummies; have not + % tried to figure out what each command should do in the context + % of @url. for now, just make @/ a no-op, that's the only one + % people have actually reported a problem with. + % + \normalturnoffactive + \def\@{@}% + \let\/=\empty \makevalueexpandable - \leavevmode\Red + % do we want to go so far as to use \indexnofonts instead of just + % special-casing \var here? + \def\var##1{##1}% + % + \leavevmode\setcolor{\urlcolor}% \startlink attr{/Border [0 0 0]}% user{/Subtype /Link /A << /S /URI /URI (#1) >>}% \endgroup} @@ -1440,13 +1426,15 @@ {\noexpand\pdflink{\the\toksC}}\toksC={}\global\countA=0} \def\pdflink#1{% \startlink attr{/Border [0 0 0]} goto name{\pdfmkpgn{#1}} - \linkcolor #1\endlink} + \setcolor{\linkcolor}#1\endlink} \def\done{\edef\st{\global\noexpand\toksA={\the\toksB}}\st} \else + % non-pdf mode \let\pdfmkdest = \gobble \let\pdfurl = \gobble \let\endlink = \relax - \let\linkcolor = \relax + \let\setcolor = \gobble + \let\pdfsetcolor = \gobble \let\pdfmakeoutlines = \relax \fi % \ifx\pdfoutput @@ -1472,6 +1460,10 @@ \def\bf{\fam=\bffam \setfontstyle{bf}}\def\bfstylename{bf} \def\tt{\fam=\ttfam \setfontstyle{tt}} +% Unfortunately, we have to override this for titles and the like, since +% in those cases "rm" is bold. Sigh. +\def\rmisbold{\rm\def\curfontstyle{bf}} + % Texinfo sort of supports the sans serif font style, which plain TeX does not. % So we set up a \sf. \newfam\sffam @@ -1481,8 +1473,6 @@ % We don't need math for this font style. \def\ttsl{\setfontstyle{ttsl}} -% Default leading. -\newdimen\textleading \textleading = 13.2pt % Set the baselineskip to #1, and the lineskip and strut size % correspondingly. There is no deep meaning behind these magic numbers @@ -1492,8 +1482,13 @@ \def\strutheightpercent{.70833} \def\strutdepthpercent {.29167} % +% can get a sort of poor man's double spacing by redefining this. 
+\def\baselinefactor{1} +% +\newdimen\textleading \def\setleading#1{% - \normalbaselineskip = #1\relax + \dimen0 = #1\relax + \normalbaselineskip = \baselinefactor\dimen0 \normallineskip = \lineskipfactor\normalbaselineskip \normalbaselines \setbox\strutbox =\hbox{% @@ -1502,20 +1497,295 @@ }% } -% Set the font macro #1 to the font named #2, adding on the -% specified font prefix (normally `cm'). -% #3 is the font's design size, #4 is a scale factor -\def\setfont#1#2#3#4{\font#1=\fontprefix#2#3 scaled #4} +% PDF CMaps. See also LaTeX's t1.cmap. +% +% do nothing with this by default. +\expandafter\let\csname cmapOT1\endcsname\gobble +\expandafter\let\csname cmapOT1IT\endcsname\gobble +\expandafter\let\csname cmapOT1TT\endcsname\gobble + +% if we are producing pdf, and we have \pdffontattr, then define cmaps. +% (\pdffontattr was introduced many years ago, but people still run +% older pdftex's; it's easy to conditionalize, so we do.) +\ifpdf \ifx\pdffontattr\thisisundefined \else + \begingroup + \catcode`\^^M=\active \def^^M{^^J}% Output line endings as the ^^J char. 
+ \catcode`\%=12 \immediate\pdfobj stream {%!PS-Adobe-3.0 Resource-CMap +%%DocumentNeededResources: ProcSet (CIDInit) +%%IncludeResource: ProcSet (CIDInit) +%%BeginResource: CMap (TeX-OT1-0) +%%Title: (TeX-OT1-0 TeX OT1 0) +%%Version: 1.000 +%%EndComments +/CIDInit /ProcSet findresource begin +12 dict begin +begincmap +/CIDSystemInfo +<< /Registry (TeX) +/Ordering (OT1) +/Supplement 0 +>> def +/CMapName /TeX-OT1-0 def +/CMapType 2 def +1 begincodespacerange +<00> <7F> +endcodespacerange +8 beginbfrange +<00> <01> <0393> +<09> <0A> <03A8> +<23> <26> <0023> +<28> <3B> <0028> +<3F> <5B> <003F> +<5D> <5E> <005D> +<61> <7A> <0061> +<7B> <7C> <2013> +endbfrange +40 beginbfchar +<02> <0398> +<03> <039B> +<04> <039E> +<05> <03A0> +<06> <03A3> +<07> <03D2> +<08> <03A6> +<0B> <00660066> +<0C> <00660069> +<0D> <0066006C> +<0E> <006600660069> +<0F> <00660066006C> +<10> <0131> +<11> <0237> +<12> <0060> +<13> <00B4> +<14> <02C7> +<15> <02D8> +<16> <00AF> +<17> <02DA> +<18> <00B8> +<19> <00DF> +<1A> <00E6> +<1B> <0153> +<1C> <00F8> +<1D> <00C6> +<1E> <0152> +<1F> <00D8> +<21> <0021> +<22> <201D> +<27> <2019> +<3C> <00A1> +<3D> <003D> +<3E> <00BF> +<5C> <201C> +<5F> <02D9> +<60> <2018> +<7D> <02DD> +<7E> <007E> +<7F> <00A8> +endbfchar +endcmap +CMapName currentdict /CMap defineresource pop +end +end +%%EndResource +%%EOF + }\endgroup + \expandafter\edef\csname cmapOT1\endcsname#1{% + \pdffontattr#1{/ToUnicode \the\pdflastobj\space 0 R}% + }% +% +% \cmapOT1IT + \begingroup + \catcode`\^^M=\active \def^^M{^^J}% Output line endings as the ^^J char. 
+ \catcode`\%=12 \immediate\pdfobj stream {%!PS-Adobe-3.0 Resource-CMap +%%DocumentNeededResources: ProcSet (CIDInit) +%%IncludeResource: ProcSet (CIDInit) +%%BeginResource: CMap (TeX-OT1IT-0) +%%Title: (TeX-OT1IT-0 TeX OT1IT 0) +%%Version: 1.000 +%%EndComments +/CIDInit /ProcSet findresource begin +12 dict begin +begincmap +/CIDSystemInfo +<< /Registry (TeX) +/Ordering (OT1IT) +/Supplement 0 +>> def +/CMapName /TeX-OT1IT-0 def +/CMapType 2 def +1 begincodespacerange +<00> <7F> +endcodespacerange +8 beginbfrange +<00> <01> <0393> +<09> <0A> <03A8> +<25> <26> <0025> +<28> <3B> <0028> +<3F> <5B> <003F> +<5D> <5E> <005D> +<61> <7A> <0061> +<7B> <7C> <2013> +endbfrange +42 beginbfchar +<02> <0398> +<03> <039B> +<04> <039E> +<05> <03A0> +<06> <03A3> +<07> <03D2> +<08> <03A6> +<0B> <00660066> +<0C> <00660069> +<0D> <0066006C> +<0E> <006600660069> +<0F> <00660066006C> +<10> <0131> +<11> <0237> +<12> <0060> +<13> <00B4> +<14> <02C7> +<15> <02D8> +<16> <00AF> +<17> <02DA> +<18> <00B8> +<19> <00DF> +<1A> <00E6> +<1B> <0153> +<1C> <00F8> +<1D> <00C6> +<1E> <0152> +<1F> <00D8> +<21> <0021> +<22> <201D> +<23> <0023> +<24> <00A3> +<27> <2019> +<3C> <00A1> +<3D> <003D> +<3E> <00BF> +<5C> <201C> +<5F> <02D9> +<60> <2018> +<7D> <02DD> +<7E> <007E> +<7F> <00A8> +endbfchar +endcmap +CMapName currentdict /CMap defineresource pop +end +end +%%EndResource +%%EOF + }\endgroup + \expandafter\edef\csname cmapOT1IT\endcsname#1{% + \pdffontattr#1{/ToUnicode \the\pdflastobj\space 0 R}% + }% +% +% \cmapOT1TT + \begingroup + \catcode`\^^M=\active \def^^M{^^J}% Output line endings as the ^^J char. 
+ \catcode`\%=12 \immediate\pdfobj stream {%!PS-Adobe-3.0 Resource-CMap +%%DocumentNeededResources: ProcSet (CIDInit) +%%IncludeResource: ProcSet (CIDInit) +%%BeginResource: CMap (TeX-OT1TT-0) +%%Title: (TeX-OT1TT-0 TeX OT1TT 0) +%%Version: 1.000 +%%EndComments +/CIDInit /ProcSet findresource begin +12 dict begin +begincmap +/CIDSystemInfo +<< /Registry (TeX) +/Ordering (OT1TT) +/Supplement 0 +>> def +/CMapName /TeX-OT1TT-0 def +/CMapType 2 def +1 begincodespacerange +<00> <7F> +endcodespacerange +5 beginbfrange +<00> <01> <0393> +<09> <0A> <03A8> +<21> <26> <0021> +<28> <5F> <0028> +<61> <7E> <0061> +endbfrange +32 beginbfchar +<02> <0398> +<03> <039B> +<04> <039E> +<05> <03A0> +<06> <03A3> +<07> <03D2> +<08> <03A6> +<0B> <2191> +<0C> <2193> +<0D> <0027> +<0E> <00A1> +<0F> <00BF> +<10> <0131> +<11> <0237> +<12> <0060> +<13> <00B4> +<14> <02C7> +<15> <02D8> +<16> <00AF> +<17> <02DA> +<18> <00B8> +<19> <00DF> +<1A> <00E6> +<1B> <0153> +<1C> <00F8> +<1D> <00C6> +<1E> <0152> +<1F> <00D8> +<20> <2423> +<27> <2019> +<60> <2018> +<7F> <00A8> +endbfchar +endcmap +CMapName currentdict /CMap defineresource pop +end +end +%%EndResource +%%EOF + }\endgroup + \expandafter\edef\csname cmapOT1TT\endcsname#1{% + \pdffontattr#1{/ToUnicode \the\pdflastobj\space 0 R}% + }% +\fi\fi + + +% Set the font macro #1 to the font named \fontprefix#2. +% #3 is the font's design size, #4 is a scale factor, #5 is the CMap +% encoding (only OT1, OT1IT and OT1TT are allowed, or empty to omit). +% Example: +% #1 = \textrm +% #2 = \rmshape +% #3 = 10 +% #4 = \mainmagstep +% #5 = OT1 +% +\def\setfont#1#2#3#4#5{% + \font#1=\fontprefix#2#3 scaled #4 + \csname cmap#5\endcsname#1% +} +% This is what gets called when #5 of \setfont is empty. +\let\cmap\gobble +% +% (end of cmaps) % Use cm as the default font prefix. % To specify the font prefix, you must define \fontprefix % before you read in texinfo.tex. 
-\ifx\fontprefix\undefined +\ifx\fontprefix\thisisundefined \def\fontprefix{cm} \fi % Support font families that don't use the same naming scheme as CM. \def\rmshape{r} -\def\rmbshape{bx} %where the normal face is bold +\def\rmbshape{bx} % where the normal face is bold \def\bfshape{b} \def\bxshape{bx} \def\ttshape{tt} @@ -1530,118 +1800,291 @@ \def\scshape{csc} \def\scbshape{csc} +% Definitions for a main text size of 11pt. (The default in Texinfo.) +% +\def\definetextfontsizexi{% % Text fonts (11.2pt, magstep1). \def\textnominalsize{11pt} \edef\mainmagstep{\magstephalf} -\setfont\textrm\rmshape{10}{\mainmagstep} -\setfont\texttt\ttshape{10}{\mainmagstep} -\setfont\textbf\bfshape{10}{\mainmagstep} -\setfont\textit\itshape{10}{\mainmagstep} -\setfont\textsl\slshape{10}{\mainmagstep} -\setfont\textsf\sfshape{10}{\mainmagstep} -\setfont\textsc\scshape{10}{\mainmagstep} -\setfont\textttsl\ttslshape{10}{\mainmagstep} +\setfont\textrm\rmshape{10}{\mainmagstep}{OT1} +\setfont\texttt\ttshape{10}{\mainmagstep}{OT1TT} +\setfont\textbf\bfshape{10}{\mainmagstep}{OT1} +\setfont\textit\itshape{10}{\mainmagstep}{OT1IT} +\setfont\textsl\slshape{10}{\mainmagstep}{OT1} +\setfont\textsf\sfshape{10}{\mainmagstep}{OT1} +\setfont\textsc\scshape{10}{\mainmagstep}{OT1} +\setfont\textttsl\ttslshape{10}{\mainmagstep}{OT1TT} \font\texti=cmmi10 scaled \mainmagstep \font\textsy=cmsy10 scaled \mainmagstep +\def\textecsize{1095} % A few fonts for @defun names and args. -\setfont\defbf\bfshape{10}{\magstep1} -\setfont\deftt\ttshape{10}{\magstep1} -\setfont\defttsl\ttslshape{10}{\magstep1} +\setfont\defbf\bfshape{10}{\magstep1}{OT1} +\setfont\deftt\ttshape{10}{\magstep1}{OT1TT} +\setfont\defttsl\ttslshape{10}{\magstep1}{OT1TT} \def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf} % Fonts for indices, footnotes, small examples (9pt). 
\def\smallnominalsize{9pt} -\setfont\smallrm\rmshape{9}{1000} -\setfont\smalltt\ttshape{9}{1000} -\setfont\smallbf\bfshape{10}{900} -\setfont\smallit\itshape{9}{1000} -\setfont\smallsl\slshape{9}{1000} -\setfont\smallsf\sfshape{9}{1000} -\setfont\smallsc\scshape{10}{900} -\setfont\smallttsl\ttslshape{10}{900} +\setfont\smallrm\rmshape{9}{1000}{OT1} +\setfont\smalltt\ttshape{9}{1000}{OT1TT} +\setfont\smallbf\bfshape{10}{900}{OT1} +\setfont\smallit\itshape{9}{1000}{OT1IT} +\setfont\smallsl\slshape{9}{1000}{OT1} +\setfont\smallsf\sfshape{9}{1000}{OT1} +\setfont\smallsc\scshape{10}{900}{OT1} +\setfont\smallttsl\ttslshape{10}{900}{OT1TT} \font\smalli=cmmi9 \font\smallsy=cmsy9 +\def\smallecsize{0900} % Fonts for small examples (8pt). \def\smallernominalsize{8pt} -\setfont\smallerrm\rmshape{8}{1000} -\setfont\smallertt\ttshape{8}{1000} -\setfont\smallerbf\bfshape{10}{800} -\setfont\smallerit\itshape{8}{1000} -\setfont\smallersl\slshape{8}{1000} -\setfont\smallersf\sfshape{8}{1000} -\setfont\smallersc\scshape{10}{800} -\setfont\smallerttsl\ttslshape{10}{800} +\setfont\smallerrm\rmshape{8}{1000}{OT1} +\setfont\smallertt\ttshape{8}{1000}{OT1TT} +\setfont\smallerbf\bfshape{10}{800}{OT1} +\setfont\smallerit\itshape{8}{1000}{OT1IT} +\setfont\smallersl\slshape{8}{1000}{OT1} +\setfont\smallersf\sfshape{8}{1000}{OT1} +\setfont\smallersc\scshape{10}{800}{OT1} +\setfont\smallerttsl\ttslshape{10}{800}{OT1TT} \font\smalleri=cmmi8 \font\smallersy=cmsy8 +\def\smallerecsize{0800} % Fonts for title page (20.4pt): \def\titlenominalsize{20pt} -\setfont\titlerm\rmbshape{12}{\magstep3} -\setfont\titleit\itbshape{10}{\magstep4} -\setfont\titlesl\slbshape{10}{\magstep4} -\setfont\titlett\ttbshape{12}{\magstep3} -\setfont\titlettsl\ttslshape{10}{\magstep4} -\setfont\titlesf\sfbshape{17}{\magstep1} +\setfont\titlerm\rmbshape{12}{\magstep3}{OT1} +\setfont\titleit\itbshape{10}{\magstep4}{OT1IT} +\setfont\titlesl\slbshape{10}{\magstep4}{OT1} +\setfont\titlett\ttbshape{12}{\magstep3}{OT1TT} 
+\setfont\titlettsl\ttslshape{10}{\magstep4}{OT1TT} +\setfont\titlesf\sfbshape{17}{\magstep1}{OT1} \let\titlebf=\titlerm -\setfont\titlesc\scbshape{10}{\magstep4} +\setfont\titlesc\scbshape{10}{\magstep4}{OT1} \font\titlei=cmmi12 scaled \magstep3 \font\titlesy=cmsy10 scaled \magstep4 -\def\authorrm{\secrm} -\def\authortt{\sectt} +\def\titleecsize{2074} % Chapter (and unnumbered) fonts (17.28pt). \def\chapnominalsize{17pt} -\setfont\chaprm\rmbshape{12}{\magstep2} -\setfont\chapit\itbshape{10}{\magstep3} -\setfont\chapsl\slbshape{10}{\magstep3} -\setfont\chaptt\ttbshape{12}{\magstep2} -\setfont\chapttsl\ttslshape{10}{\magstep3} -\setfont\chapsf\sfbshape{17}{1000} +\setfont\chaprm\rmbshape{12}{\magstep2}{OT1} +\setfont\chapit\itbshape{10}{\magstep3}{OT1IT} +\setfont\chapsl\slbshape{10}{\magstep3}{OT1} +\setfont\chaptt\ttbshape{12}{\magstep2}{OT1TT} +\setfont\chapttsl\ttslshape{10}{\magstep3}{OT1TT} +\setfont\chapsf\sfbshape{17}{1000}{OT1} \let\chapbf=\chaprm -\setfont\chapsc\scbshape{10}{\magstep3} +\setfont\chapsc\scbshape{10}{\magstep3}{OT1} \font\chapi=cmmi12 scaled \magstep2 \font\chapsy=cmsy10 scaled \magstep3 +\def\chapecsize{1728} % Section fonts (14.4pt). \def\secnominalsize{14pt} -\setfont\secrm\rmbshape{12}{\magstep1} -\setfont\secit\itbshape{10}{\magstep2} -\setfont\secsl\slbshape{10}{\magstep2} -\setfont\sectt\ttbshape{12}{\magstep1} -\setfont\secttsl\ttslshape{10}{\magstep2} -\setfont\secsf\sfbshape{12}{\magstep1} +\setfont\secrm\rmbshape{12}{\magstep1}{OT1} +\setfont\secit\itbshape{10}{\magstep2}{OT1IT} +\setfont\secsl\slbshape{10}{\magstep2}{OT1} +\setfont\sectt\ttbshape{12}{\magstep1}{OT1TT} +\setfont\secttsl\ttslshape{10}{\magstep2}{OT1TT} +\setfont\secsf\sfbshape{12}{\magstep1}{OT1} \let\secbf\secrm -\setfont\secsc\scbshape{10}{\magstep2} +\setfont\secsc\scbshape{10}{\magstep2}{OT1} \font\seci=cmmi12 scaled \magstep1 \font\secsy=cmsy10 scaled \magstep2 +\def\sececsize{1440} % Subsection fonts (13.15pt). 
\def\ssecnominalsize{13pt} -\setfont\ssecrm\rmbshape{12}{\magstephalf} -\setfont\ssecit\itbshape{10}{1315} -\setfont\ssecsl\slbshape{10}{1315} -\setfont\ssectt\ttbshape{12}{\magstephalf} -\setfont\ssecttsl\ttslshape{10}{1315} -\setfont\ssecsf\sfbshape{12}{\magstephalf} +\setfont\ssecrm\rmbshape{12}{\magstephalf}{OT1} +\setfont\ssecit\itbshape{10}{1315}{OT1IT} +\setfont\ssecsl\slbshape{10}{1315}{OT1} +\setfont\ssectt\ttbshape{12}{\magstephalf}{OT1TT} +\setfont\ssecttsl\ttslshape{10}{1315}{OT1TT} +\setfont\ssecsf\sfbshape{12}{\magstephalf}{OT1} \let\ssecbf\ssecrm -\setfont\ssecsc\scbshape{10}{1315} +\setfont\ssecsc\scbshape{10}{1315}{OT1} \font\sseci=cmmi12 scaled \magstephalf \font\ssecsy=cmsy10 scaled 1315 +\def\ssececsize{1200} % Reduced fonts for @acro in text (10pt). \def\reducednominalsize{10pt} -\setfont\reducedrm\rmshape{10}{1000} -\setfont\reducedtt\ttshape{10}{1000} -\setfont\reducedbf\bfshape{10}{1000} -\setfont\reducedit\itshape{10}{1000} -\setfont\reducedsl\slshape{10}{1000} -\setfont\reducedsf\sfshape{10}{1000} -\setfont\reducedsc\scshape{10}{1000} -\setfont\reducedttsl\ttslshape{10}{1000} +\setfont\reducedrm\rmshape{10}{1000}{OT1} +\setfont\reducedtt\ttshape{10}{1000}{OT1TT} +\setfont\reducedbf\bfshape{10}{1000}{OT1} +\setfont\reducedit\itshape{10}{1000}{OT1IT} +\setfont\reducedsl\slshape{10}{1000}{OT1} +\setfont\reducedsf\sfshape{10}{1000}{OT1} +\setfont\reducedsc\scshape{10}{1000}{OT1} +\setfont\reducedttsl\ttslshape{10}{1000}{OT1TT} \font\reducedi=cmmi10 \font\reducedsy=cmsy10 +\def\reducedecsize{1000} + +\textleading = 13.2pt % line spacing for 11pt CM +\textfonts % reset the current fonts +\rm +} % end of 11pt text font size definitions, \definetextfontsizexi + + +% Definitions to make the main text be 10pt Computer Modern, with +% section, chapter, etc., sizes following suit. This is for the GNU +% Press printing of the Emacs 22 manual. Maybe other manuals in the +% future. Used with @smallbook, which sets the leading to 12pt. 
+% +\def\definetextfontsizex{% +% Text fonts (10pt). +\def\textnominalsize{10pt} +\edef\mainmagstep{1000} +\setfont\textrm\rmshape{10}{\mainmagstep}{OT1} +\setfont\texttt\ttshape{10}{\mainmagstep}{OT1TT} +\setfont\textbf\bfshape{10}{\mainmagstep}{OT1} +\setfont\textit\itshape{10}{\mainmagstep}{OT1IT} +\setfont\textsl\slshape{10}{\mainmagstep}{OT1} +\setfont\textsf\sfshape{10}{\mainmagstep}{OT1} +\setfont\textsc\scshape{10}{\mainmagstep}{OT1} +\setfont\textttsl\ttslshape{10}{\mainmagstep}{OT1TT} +\font\texti=cmmi10 scaled \mainmagstep +\font\textsy=cmsy10 scaled \mainmagstep +\def\textecsize{1000} + +% A few fonts for @defun names and args. +\setfont\defbf\bfshape{10}{\magstephalf}{OT1} +\setfont\deftt\ttshape{10}{\magstephalf}{OT1TT} +\setfont\defttsl\ttslshape{10}{\magstephalf}{OT1TT} +\def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf} + +% Fonts for indices, footnotes, small examples (9pt). +\def\smallnominalsize{9pt} +\setfont\smallrm\rmshape{9}{1000}{OT1} +\setfont\smalltt\ttshape{9}{1000}{OT1TT} +\setfont\smallbf\bfshape{10}{900}{OT1} +\setfont\smallit\itshape{9}{1000}{OT1IT} +\setfont\smallsl\slshape{9}{1000}{OT1} +\setfont\smallsf\sfshape{9}{1000}{OT1} +\setfont\smallsc\scshape{10}{900}{OT1} +\setfont\smallttsl\ttslshape{10}{900}{OT1TT} +\font\smalli=cmmi9 +\font\smallsy=cmsy9 +\def\smallecsize{0900} + +% Fonts for small examples (8pt). 
+\def\smallernominalsize{8pt} +\setfont\smallerrm\rmshape{8}{1000}{OT1} +\setfont\smallertt\ttshape{8}{1000}{OT1TT} +\setfont\smallerbf\bfshape{10}{800}{OT1} +\setfont\smallerit\itshape{8}{1000}{OT1IT} +\setfont\smallersl\slshape{8}{1000}{OT1} +\setfont\smallersf\sfshape{8}{1000}{OT1} +\setfont\smallersc\scshape{10}{800}{OT1} +\setfont\smallerttsl\ttslshape{10}{800}{OT1TT} +\font\smalleri=cmmi8 +\font\smallersy=cmsy8 +\def\smallerecsize{0800} + +% Fonts for title page (20.4pt): +\def\titlenominalsize{20pt} +\setfont\titlerm\rmbshape{12}{\magstep3}{OT1} +\setfont\titleit\itbshape{10}{\magstep4}{OT1IT} +\setfont\titlesl\slbshape{10}{\magstep4}{OT1} +\setfont\titlett\ttbshape{12}{\magstep3}{OT1TT} +\setfont\titlettsl\ttslshape{10}{\magstep4}{OT1TT} +\setfont\titlesf\sfbshape{17}{\magstep1}{OT1} +\let\titlebf=\titlerm +\setfont\titlesc\scbshape{10}{\magstep4}{OT1} +\font\titlei=cmmi12 scaled \magstep3 +\font\titlesy=cmsy10 scaled \magstep4 +\def\titleecsize{2074} + +% Chapter fonts (14.4pt). +\def\chapnominalsize{14pt} +\setfont\chaprm\rmbshape{12}{\magstep1}{OT1} +\setfont\chapit\itbshape{10}{\magstep2}{OT1IT} +\setfont\chapsl\slbshape{10}{\magstep2}{OT1} +\setfont\chaptt\ttbshape{12}{\magstep1}{OT1TT} +\setfont\chapttsl\ttslshape{10}{\magstep2}{OT1TT} +\setfont\chapsf\sfbshape{12}{\magstep1}{OT1} +\let\chapbf\chaprm +\setfont\chapsc\scbshape{10}{\magstep2}{OT1} +\font\chapi=cmmi12 scaled \magstep1 +\font\chapsy=cmsy10 scaled \magstep2 +\def\chapecsize{1440} + +% Section fonts (12pt). +\def\secnominalsize{12pt} +\setfont\secrm\rmbshape{12}{1000}{OT1} +\setfont\secit\itbshape{10}{\magstep1}{OT1IT} +\setfont\secsl\slbshape{10}{\magstep1}{OT1} +\setfont\sectt\ttbshape{12}{1000}{OT1TT} +\setfont\secttsl\ttslshape{10}{\magstep1}{OT1TT} +\setfont\secsf\sfbshape{12}{1000}{OT1} +\let\secbf\secrm +\setfont\secsc\scbshape{10}{\magstep1}{OT1} +\font\seci=cmmi12 +\font\secsy=cmsy10 scaled \magstep1 +\def\sececsize{1200} + +% Subsection fonts (10pt). 
+\def\ssecnominalsize{10pt} +\setfont\ssecrm\rmbshape{10}{1000}{OT1} +\setfont\ssecit\itbshape{10}{1000}{OT1IT} +\setfont\ssecsl\slbshape{10}{1000}{OT1} +\setfont\ssectt\ttbshape{10}{1000}{OT1TT} +\setfont\ssecttsl\ttslshape{10}{1000}{OT1TT} +\setfont\ssecsf\sfbshape{10}{1000}{OT1} +\let\ssecbf\ssecrm +\setfont\ssecsc\scbshape{10}{1000}{OT1} +\font\sseci=cmmi10 +\font\ssecsy=cmsy10 +\def\ssececsize{1000} + +% Reduced fonts for @acro in text (9pt). +\def\reducednominalsize{9pt} +\setfont\reducedrm\rmshape{9}{1000}{OT1} +\setfont\reducedtt\ttshape{9}{1000}{OT1TT} +\setfont\reducedbf\bfshape{10}{900}{OT1} +\setfont\reducedit\itshape{9}{1000}{OT1IT} +\setfont\reducedsl\slshape{9}{1000}{OT1} +\setfont\reducedsf\sfshape{9}{1000}{OT1} +\setfont\reducedsc\scshape{10}{900}{OT1} +\setfont\reducedttsl\ttslshape{10}{900}{OT1TT} +\font\reducedi=cmmi9 +\font\reducedsy=cmsy9 +\def\reducedecsize{0900} + +\divide\parskip by 2 % reduce space between paragraphs +\textleading = 12pt % line spacing for 10pt CM +\textfonts % reset the current fonts +\rm +} % end of 10pt text font size definitions, \definetextfontsizex + + +% We provide the user-level command +% @fonttextsize 10 +% (or 11) to redefine the text font size. pt is assumed. +% +\def\xiword{11} +\def\xword{10} +\def\xwordpt{10pt} +% +\parseargdef\fonttextsize{% + \def\textsizearg{#1}% + %\wlog{doing @fonttextsize \textsizearg}% + % + % Set \globaldefs so that documents can use this inside @tex, since + % makeinfo 4.8 does not support it, but we need it nonetheless. + % + \begingroup \globaldefs=1 + \ifx\textsizearg\xword \definetextfontsizex + \else \ifx\textsizearg\xiword \definetextfontsizexi + \else + \errhelp=\EMsimple + \errmessage{@fonttextsize only supports `10' or `11', not `\textsizearg'} + \fi\fi + \endgroup +} + % In order for the font changes to affect most math symbols and letters, % we have to define the \textfont of the standard families. 
Since @@ -1681,8 +2124,8 @@ \let\tenttsl=\titlettsl \def\curfontsize{title}% \def\lsize{chap}\def\lllsize{subsec}% - \resetmathfonts \setleading{25pt}} -\def\titlefont#1{{\titlefonts\rm #1}} + \resetmathfonts \setleading{27pt}} +\def\titlefont#1{{\titlefonts\rmisbold #1}} \def\chapfonts{% \let\tenrm=\chaprm \let\tenit=\chapit \let\tensl=\chapsl \let\tenbf=\chapbf \let\tentt=\chaptt \let\smallcaps=\chapsc @@ -1733,6 +2176,16 @@ \def\lsize{smaller}\def\lllsize{smaller}% \resetmathfonts \setleading{9.5pt}} +% Fonts for short table of contents. +\setfont\shortcontrm\rmshape{12}{1000}{OT1} +\setfont\shortcontbf\bfshape{10}{\magstep1}{OT1} % no cmb12 +\setfont\shortcontsl\slshape{12}{1000}{OT1} +\setfont\shortconttt\ttshape{12}{1000}{OT1TT} + +% Define these just so they can be easily changed for other fonts. +\def\angleleft{$\langle$} +\def\angleright{$\rangle$} + % Set the fonts to use with the @small... environments. \let\smallexamplefonts = \smallfonts @@ -1746,53 +2199,215 @@ % % By the way, for comparison, here's what fits with @example (10pt): % 8.5x11=71 smallbook=60 a4=75 a5=58 -% -% I wish the USA used A4 paper. % --karl, 24jan03. - % Set up the default fonts, so we can use them for creating boxes. % -\textfonts \rm - -% Define these so they can be easily changed for other fonts. -\def\angleleft{$\langle$} -\def\angleright{$\rangle$} +\definetextfontsizexi + + +\message{markup,} + +% Check if we are currently using a typewriter font. Since all the +% Computer Modern typewriter fonts have zero interword stretch (and +% shrink), and it is reasonable to expect all typewriter fonts to have +% this property, we can check that font parameter. +% +\def\ifmonospace{\ifdim\fontdimen3\font=0pt } + +% Markup style infrastructure. \defmarkupstylesetup\INITMACRO will +% define and register \INITMACRO to be called on markup style changes. 
+% \INITMACRO can check \currentmarkupstyle for the innermost +% style and the set of \ifmarkupSTYLE switches for all styles +% currently in effect. +\newif\ifmarkupvar +\newif\ifmarkupsamp +\newif\ifmarkupkey +%\newif\ifmarkupfile % @file == @samp. +%\newif\ifmarkupoption % @option == @samp. +\newif\ifmarkupcode +\newif\ifmarkupkbd +%\newif\ifmarkupenv % @env == @code. +%\newif\ifmarkupcommand % @command == @code. +\newif\ifmarkuptex % @tex (and part of @math, for now). +\newif\ifmarkupexample +\newif\ifmarkupverb +\newif\ifmarkupverbatim + +\let\currentmarkupstyle\empty + +\def\setupmarkupstyle#1{% + \csname markup#1true\endcsname + \def\currentmarkupstyle{#1}% + \markupstylesetup +} + +\let\markupstylesetup\empty + +\def\defmarkupstylesetup#1{% + \expandafter\def\expandafter\markupstylesetup + \expandafter{\markupstylesetup #1}% + \def#1% +} + +% Markup style setup for left and right quotes. +\defmarkupstylesetup\markupsetuplq{% + \expandafter\let\expandafter \temp + \csname markupsetuplq\currentmarkupstyle\endcsname + \ifx\temp\relax \markupsetuplqdefault \else \temp \fi +} + +\defmarkupstylesetup\markupsetuprq{% + \expandafter\let\expandafter \temp + \csname markupsetuprq\currentmarkupstyle\endcsname + \ifx\temp\relax \markupsetuprqdefault \else \temp \fi +} + +{ +\catcode`\'=\active +\catcode`\`=\active + +\gdef\markupsetuplqdefault{\let`\lq} +\gdef\markupsetuprqdefault{\let'\rq} + +\gdef\markupsetcodequoteleft{\let`\codequoteleft} +\gdef\markupsetcodequoteright{\let'\codequoteright} + +\gdef\markupsetnoligaturesquoteleft{\let`\noligaturesquoteleft} +} + +\let\markupsetuplqcode \markupsetcodequoteleft +\let\markupsetuprqcode \markupsetcodequoteright +% +\let\markupsetuplqexample \markupsetcodequoteleft +\let\markupsetuprqexample \markupsetcodequoteright +% +\let\markupsetuplqsamp \markupsetcodequoteleft +\let\markupsetuprqsamp \markupsetcodequoteright +% +\let\markupsetuplqverb \markupsetcodequoteleft +\let\markupsetuprqverb \markupsetcodequoteright +% 
+\let\markupsetuplqverbatim \markupsetcodequoteleft +\let\markupsetuprqverbatim \markupsetcodequoteright + +\let\markupsetuplqkbd \markupsetnoligaturesquoteleft + +% Allow an option to not use regular directed right quote/apostrophe +% (char 0x27), but instead the undirected quote from cmtt (char 0x0d). +% The undirected quote is ugly, so don't make it the default, but it +% works for pasting with more pdf viewers (at least evince), the +% lilypond developers report. xpdf does work with the regular 0x27. +% +\def\codequoteright{% + \expandafter\ifx\csname SETtxicodequoteundirected\endcsname\relax + \expandafter\ifx\csname SETcodequoteundirected\endcsname\relax + '% + \else \char'15 \fi + \else \char'15 \fi +} +% +% and a similar option for the left quote char vs. a grave accent. +% Modern fonts display ASCII 0x60 as a grave accent, so some people like +% the code environments to do likewise. +% +\def\codequoteleft{% + \expandafter\ifx\csname SETtxicodequotebacktick\endcsname\relax + \expandafter\ifx\csname SETcodequotebacktick\endcsname\relax + % [Knuth] pp. 380,381,391 + % \relax disables Spanish ligatures ?` and !` of \tt font. + \relax`% + \else \char'22 \fi + \else \char'22 \fi +} + +% Commands to set the quote options. +% +\parseargdef\codequoteundirected{% + \def\temp{#1}% + \ifx\temp\onword + \expandafter\let\csname SETtxicodequoteundirected\endcsname + = t% + \else\ifx\temp\offword + \expandafter\let\csname SETtxicodequoteundirected\endcsname + = \relax + \else + \errhelp = \EMsimple + \errmessage{Unknown @codequoteundirected value `\temp', must be on|off}% + \fi\fi +} +% +\parseargdef\codequotebacktick{% + \def\temp{#1}% + \ifx\temp\onword + \expandafter\let\csname SETtxicodequotebacktick\endcsname + = t% + \else\ifx\temp\offword + \expandafter\let\csname SETtxicodequotebacktick\endcsname + = \relax + \else + \errhelp = \EMsimple + \errmessage{Unknown @codequotebacktick value `\temp', must be on|off}% + \fi\fi +} + +% [Knuth] pp. 
380,381,391, disable Spanish ligatures ?` and !` of \tt font. +\def\noligaturesquoteleft{\relax\lq} % Count depth in font-changes, for error checks \newcount\fontdepth \fontdepth=0 -% Fonts for short table of contents. -\setfont\shortcontrm\rmshape{12}{1000} -\setfont\shortcontbf\bfshape{10}{\magstep1} % no cmb12 -\setfont\shortcontsl\slshape{12}{1000} -\setfont\shortconttt\ttshape{12}{1000} - -%% Add scribe-like font environments, plus @l for inline lisp (usually sans -%% serif) and @ii for TeX italic - -% \smartitalic{ARG} outputs arg in italics, followed by an italic correction -% unless the following character is such as not to need one. -\def\smartitalicx{\ifx\next,\else\ifx\next-\else\ifx\next.\else - \ptexslash\fi\fi\fi} -\def\smartslanted#1{{\ifusingtt\ttsl\sl #1}\futurelet\next\smartitalicx} -\def\smartitalic#1{{\ifusingtt\ttsl\it #1}\futurelet\next\smartitalicx} - -% like \smartslanted except unconditionally uses \ttsl. +% Font commands. + +% #1 is the font command (\sl or \it), #2 is the text to slant. +% If we are in a monospaced environment, however, 1) always use \ttsl, +% and 2) do not add an italic correction. +\def\dosmartslant#1#2{% + \ifusingtt + {{\ttsl #2}\let\next=\relax}% + {\def\next{{#1#2}\futurelet\next\smartitaliccorrection}}% + \next +} +\def\smartslanted{\dosmartslant\sl} +\def\smartitalic{\dosmartslant\it} + +% Output an italic correction unless \next (presumed to be the following +% character) is such as not to need one. +\def\smartitaliccorrection{% + \ifx\next,% + \else\ifx\next-% + \else\ifx\next.% + \else\ptexslash + \fi\fi\fi + \aftersmartic +} + +% like \smartslanted except unconditionally uses \ttsl, and no ic. % @var is set to this for defun arguments. -\def\ttslanted#1{{\ttsl #1}\futurelet\next\smartitalicx} - -% like \smartslanted except unconditionally use \sl. We never want +\def\ttslanted#1{{\ttsl #1}} + +% @cite is like \smartslanted except unconditionally use \sl. We never want % ttsl for book titles, do we? 
-\def\cite#1{{\sl #1}\futurelet\next\smartitalicx} +\def\cite#1{{\sl #1}\futurelet\next\smartitaliccorrection} + +\def\aftersmartic{} +\def\var#1{% + \let\saveaftersmartic = \aftersmartic + \def\aftersmartic{\null\let\aftersmartic=\saveaftersmartic}% + \smartslanted{#1}% +} \let\i=\smartitalic \let\slanted=\smartslanted -\let\var=\smartslanted \let\dfn=\smartslanted \let\emph=\smartitalic -% @b, explicit bold. +% Explicit font changes: @r, @sc, undocumented @ii. +\def\r#1{{\rm #1}} % roman font +\def\sc#1{{\smallcaps#1}} % smallcaps font +\def\ii#1{{\it #1}} % italic font + +% @b, explicit bold. Also @strong. \def\b#1{{\bf #1}} \let\strong=\b @@ -1824,21 +2439,35 @@ \catcode`@=\other \def\endofsentencespacefactor{3000}% default +% @t, explicit typewriter. \def\t#1{% {\tt \rawbackslash \plainfrenchspacing #1}% \null } -\def\samp#1{`\tclose{#1}'\null} -\setfont\keyrm\rmshape{8}{1000} -\font\keysy=cmsy9 -\def\key#1{{\keyrm\textfont2=\keysy \leavevmode\hbox{% - \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{% - \vbox{\hrule\kern-0.4pt - \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}% - \kern-0.4pt\hrule}% - \kern-.06em\raise0.4pt\hbox{\angleright}}}} -% The old definition, with no lozenge: -%\def\key #1{{\ttsl \nohyphenation \uppercase{#1}}\null} + +% @samp. +\def\samp#1{{\setupmarkupstyle{samp}\lq\tclose{#1}\rq\null}} + +% definition of @key that produces a lozenge. Doesn't adjust to text size. +%\setfont\keyrm\rmshape{8}{1000}{OT1} +%\font\keysy=cmsy9 +%\def\key#1{{\keyrm\textfont2=\keysy \leavevmode\hbox{% +% \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{% +% \vbox{\hrule\kern-0.4pt +% \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}% +% \kern-0.4pt\hrule}% +% \kern-.06em\raise0.4pt\hbox{\angleright}}}} + +% definition of @key with no lozenge. If the current font is already +% monospace, don't change it; that way, we respect @kbdinputstyle. But +% if it isn't monospace, then use \tt. 
+% +\def\key#1{{\setupmarkupstyle{key}% + \nohyphenation + \ifmonospace\else\tt\fi + #1}\null} + +% ctrl is no longer a Texinfo command. \def\ctrl #1{{\tt \rawbackslash \hat}#1} % @file, @option are the same as @samp. @@ -1865,7 +2494,7 @@ \plainfrenchspacing #1% }% - \null + \null % reset spacefactor to 1000 } % We *must* turn on hyphenation at `-' and `_' in @code. @@ -1878,11 +2507,14 @@ % and arrange explicitly to hyphenate at a dash. % -- rms. { - \catcode`\-=\active - \catcode`\_=\active + \catcode`\-=\active \catcode`\_=\active + \catcode`\'=\active \catcode`\`=\active + \global\let'=\rq \global\let`=\lq % default definitions % \global\def\code{\begingroup - \catcode`\-=\active \catcode`\_=\active + \setupmarkupstyle{code}% + % The following should really be moved into \setupmarkupstyle handlers. + \catcode\dashChar=\active \catcode\underChar=\active \ifallowcodebreaks \let-\codedash \let_\codeunder @@ -1894,6 +2526,8 @@ } } +\def\codex #1{\tclose{#1}\endgroup} + \def\realdash{-} \def\codedash{-\discretionary{}{}{}} \def\codeunder{% @@ -1907,13 +2541,12 @@ \discretionary{}{}{}}% {\_}% } -\def\codex #1{\tclose{#1}\endgroup} % An additional complication: the above will allow breaks after, e.g., % each of the four underscores in __typeof__. This is undesirable in % some manuals, especially if they don't have long identifiers in % general. @allowcodebreaks provides a way to control this. -% +% \newif\ifallowcodebreaks \allowcodebreakstrue \def\keywordtrue{true} @@ -1927,55 +2560,18 @@ \allowcodebreaksfalse \else \errhelp = \EMsimple - \errmessage{Unknown @allowcodebreaks option `\txiarg'}% + \errmessage{Unknown @allowcodebreaks option `\txiarg', must be true|false}% \fi\fi } -% @kbd is like @code, except that if the argument is just one @key command, -% then @kbd has no effect. 
- -% @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always), -% `example' (@kbd uses ttsl only inside of @example and friends), -% or `code' (@kbd uses normal tty font always). -\parseargdef\kbdinputstyle{% - \def\txiarg{#1}% - \ifx\txiarg\worddistinct - \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}% - \else\ifx\txiarg\wordexample - \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}% - \else\ifx\txiarg\wordcode - \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}% - \else - \errhelp = \EMsimple - \errmessage{Unknown @kbdinputstyle option `\txiarg'}% - \fi\fi\fi -} -\def\worddistinct{distinct} -\def\wordexample{example} -\def\wordcode{code} - -% Default is `distinct.' -\kbdinputstyle distinct - -\def\xkey{\key} -\def\kbdfoo#1#2#3\par{\def\one{#1}\def\three{#3}\def\threex{??}% -\ifx\one\xkey\ifx\threex\three \key{#2}% -\else{\tclose{\kbdfont\look}}\fi -\else{\tclose{\kbdfont\look}}\fi} - -% For @indicateurl, @env, @command quotes seem unnecessary, so use \code. -\let\indicateurl=\code -\let\env=\code -\let\command=\code - % @uref (abbreviation for `urlref') takes an optional (comma-separated) % second argument specifying the text to display and an optional third % arg as text to display instead of (rather than in addition to) the url -% itself. First (mandatory) arg is the url. Perhaps eventually put in -% a hypertex \special here. -% -\def\uref#1{\douref #1,,,\finish} -\def\douref#1,#2,#3,#4\finish{\begingroup +% itself. First (mandatory) arg is the url. +% (This \urefnobreak definition isn't used now, leaving it for a while +% for comparison.) +\def\urefnobreak#1{\dourefnobreak #1,,,\finish} +\def\dourefnobreak#1,#2,#3,#4\finish{\begingroup \unsepspaces \pdfurl{#1}% \setbox0 = \hbox{\ignorespaces #3}% @@ -1996,6 +2592,103 @@ \endlink \endgroup} +% This \urefbreak definition is the active one. 
+\def\urefbreak{\begingroup \urefcatcodes \dourefbreak}
+\let\uref=\urefbreak
+\def\dourefbreak#1{\urefbreakfinish #1,,,\finish}
+\def\urefbreakfinish#1,#2,#3,#4\finish{% doesn't work in @example
+  \unsepspaces
+  \pdfurl{#1}%
+  \setbox0 = \hbox{\ignorespaces #3}%
+  \ifdim\wd0 > 0pt
+    \unhbox0 % third arg given, show only that
+  \else
+    \setbox0 = \hbox{\ignorespaces #2}%
+    \ifdim\wd0 > 0pt
+      \ifpdf
+        \unhbox0 % PDF: 2nd arg given, show only it
+      \else
+        \unhbox0\ (\urefcode{#1})% DVI: 2nd arg given, show both it and url
+      \fi
+    \else
+      \urefcode{#1}% only url given, so show it
+    \fi
+  \fi
+  \endlink
+\endgroup}
+
+% Allow line breaks around only a few characters.
+\def\urefcatcodes{%
+  \catcode\ampChar=\active   \catcode\dotChar=\active
+  \catcode\hashChar=\active  \catcode\questChar=\active
+  \catcode\slashChar=\active
+}
+{
+  \urefcatcodes
+  %
+  \global\def\urefcode{\begingroup
+    \setupmarkupstyle{code}%
+    \urefcatcodes
+    \let&\urefcodeamp
+    \let.\urefcodedot
+    \let#\urefcodehash
+    \let?\urefcodequest
+    \let/\urefcodeslash
+    \codex
+  }
+  %
+  % By default, they are just regular characters.
+  \global\def&{\normalamp}
+  \global\def.{\normaldot}
+  \global\def#{\normalhash}
+  \global\def?{\normalquest}
+  \global\def/{\normalslash}
+}
+
+% We put a little stretch before and after the breakable chars, to help
+% line breaking of long URLs.  The unequal skips make it look better in
+% cmtt at least, especially for dots.
+\def\urefprestretch{\urefprebreak \hskip0pt plus.13em } +\def\urefpoststretch{\urefpostbreak \hskip0pt plus.1em } +% +\def\urefcodeamp{\urefprestretch \&\urefpoststretch} +\def\urefcodedot{\urefprestretch .\urefpoststretch} +\def\urefcodehash{\urefprestretch \#\urefpoststretch} +\def\urefcodequest{\urefprestretch ?\urefpoststretch} +\def\urefcodeslash{\futurelet\next\urefcodeslashfinish} +{ + \catcode`\/=\active + \global\def\urefcodeslashfinish{% + \urefprestretch \slashChar + % Allow line break only after the final / in a sequence of + % slashes, to avoid line break between the slashes in http://. + \ifx\next/\else \urefpoststretch \fi + } +} + +% One more complication: by default we'll break after the special +% characters, but some people like to break before the special chars, so +% allow that. Also allow no breaking at all, for manual control. +% +\parseargdef\urefbreakstyle{% + \def\txiarg{#1}% + \ifx\txiarg\wordnone + \def\urefprebreak{\nobreak}\def\urefpostbreak{\nobreak} + \else\ifx\txiarg\wordbefore + \def\urefprebreak{\allowbreak}\def\urefpostbreak{\nobreak} + \else\ifx\txiarg\wordafter + \def\urefprebreak{\nobreak}\def\urefpostbreak{\allowbreak} + \else + \errhelp = \EMsimple + \errmessage{Unknown @urefbreakstyle setting `\txiarg'}% + \fi\fi\fi +} +\def\wordafter{after} +\def\wordbefore{before} +\def\wordnone{none} + +\urefbreakstyle after + % @url synonym for @uref, since that's how everyone uses it. % \let\url=\uref @@ -2017,34 +2710,65 @@ \let\email=\uref \fi -% Check if we are currently using a typewriter font. Since all the -% Computer Modern typewriter fonts have zero interword stretch (and -% shrink), and it is reasonable to expect all typewriter fonts to have -% this property, we can check that font parameter. -% -\def\ifmonospace{\ifdim\fontdimen3\font=0pt } +% @kbd is like @code, except that if the argument is just one @key command, +% then @kbd has no effect. 
+\def\kbd#1{{\setupmarkupstyle{kbd}\def\look{#1}\expandafter\kbdfoo\look??\par}} + +% @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always), +% `example' (@kbd uses ttsl only inside of @example and friends), +% or `code' (@kbd uses normal tty font always). +\parseargdef\kbdinputstyle{% + \def\txiarg{#1}% + \ifx\txiarg\worddistinct + \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}% + \else\ifx\txiarg\wordexample + \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}% + \else\ifx\txiarg\wordcode + \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}% + \else + \errhelp = \EMsimple + \errmessage{Unknown @kbdinputstyle setting `\txiarg'}% + \fi\fi\fi +} +\def\worddistinct{distinct} +\def\wordexample{example} +\def\wordcode{code} + +% Default is `distinct'. +\kbdinputstyle distinct + +\def\xkey{\key} +\def\kbdfoo#1#2#3\par{\def\one{#1}\def\three{#3}\def\threex{??}% +\ifx\one\xkey\ifx\threex\three \key{#2}% +\else{\tclose{\kbdfont\setupmarkupstyle{kbd}\look}}\fi +\else{\tclose{\kbdfont\setupmarkupstyle{kbd}\look}}\fi} + +% For @indicateurl, @env, @command quotes seem unnecessary, so use \code. +\let\indicateurl=\code +\let\env=\code +\let\command=\code + +% @clicksequence{File @click{} Open ...} +\def\clicksequence#1{\begingroup #1\endgroup} + +% @clickstyle @arrow (by default) +\parseargdef\clickstyle{\def\click{#1}} +\def\click{\arrow} % Typeset a dimension, e.g., `in' or `pt'. The only reason for the % argument is to make the input look right: @dmn{pt} instead of @dmn{}pt. % \def\dmn#1{\thinspace #1} -\def\kbd#1{\def\look{#1}\expandafter\kbdfoo\look??\par} - % @l was never documented to mean ``switch to the Lisp font'', % and it is not used as such in any manual I can find. We need it for % Polish suppressed-l. --karl, 22sep96. %\def\l#1{{\li #1}\null} -% Explicit font changes: @r, @sc, undocumented @ii. -\def\r#1{{\rm #1}} % roman font -\def\sc#1{{\smallcaps#1}} % smallcaps font -\def\ii#1{{\it #1}} % italic font - % @acronym for "FBI", "NATO", and the like. 
% We print this one point size smaller, since it's intended for % all-uppercase. -% +% \def\acronym#1{\doacronym #1,,\finish} \def\doacronym#1,#2,#3\finish{% {\selectfonts\lsize #1}% @@ -2052,11 +2776,12 @@ \ifx\temp\empty \else \space ({\unsepspaces \ignorespaces \temp \unskip})% \fi + \null % reset \spacefactor=1000 } % @abbr for "Comput. J." and the like. % No font change, but don't do end-of-sentence spacing. -% +% \def\abbr#1{\doabbr #1,,\finish} \def\doabbr#1,#2,#3\finish{% {\plainfrenchspacing #1}% @@ -2064,7 +2789,254 @@ \ifx\temp\empty \else \space ({\unsepspaces \ignorespaces \temp \unskip})% \fi -} + \null % reset \spacefactor=1000 +} + +% @asis just yields its argument. Used with @table, for example. +% +\def\asis#1{#1} + +% @math outputs its argument in math mode. +% +% One complication: _ usually means subscripts, but it could also mean +% an actual _ character, as in @math{@var{some_variable} + 1}. So make +% _ active, and distinguish by seeing if the current family is \slfam, +% which is what @var uses. +{ + \catcode`\_ = \active + \gdef\mathunderscore{% + \catcode`\_=\active + \def_{\ifnum\fam=\slfam \_\else\sb\fi}% + } +} +% Another complication: we want \\ (and @\) to output a math (or tt) \. +% FYI, plain.tex uses \\ as a temporary control sequence (for no +% particular reason), but this is not advertised and we don't care. +% +% The \mathchar is class=0=ordinary, family=7=ttfam, position=5C=\. +\def\mathbackslash{\ifnum\fam=\ttfam \mathchar"075C \else\backslash \fi} +% +\def\math{% + \tex + \mathunderscore + \let\\ = \mathbackslash + \mathactive + % make the texinfo accent commands work in math mode + \let\"=\ddot + \let\'=\acute + \let\==\bar + \let\^=\hat + \let\`=\grave + \let\u=\breve + \let\v=\check + \let\~=\tilde + \let\dotaccent=\dot + $\finishmath +} +\def\finishmath#1{#1$\endgroup} % Close the group opened by \tex. + +% Some active characters (such as <) are spaced differently in math. 
+% We have to reset their definitions in case the @math was an argument +% to a command which sets the catcodes (such as @item or @section). +% +{ + \catcode`^ = \active + \catcode`< = \active + \catcode`> = \active + \catcode`+ = \active + \catcode`' = \active + \gdef\mathactive{% + \let^ = \ptexhat + \let< = \ptexless + \let> = \ptexgtr + \let+ = \ptexplus + \let' = \ptexquoteright + } +} + +% @inlinefmt{FMTNAME,PROCESSED-TEXT} and @inlineraw{FMTNAME,RAW-TEXT}. +% Ignore unless FMTNAME == tex; then it is like @iftex and @tex, +% except specified as a normal braced arg, so no newlines to worry about. +% +\def\outfmtnametex{tex} +% +\long\def\inlinefmt#1{\doinlinefmt #1,\finish} +\long\def\doinlinefmt#1,#2,\finish{% + \def\inlinefmtname{#1}% + \ifx\inlinefmtname\outfmtnametex \ignorespaces #2\fi +} +% For raw, must switch into @tex before parsing the argument, to avoid +% setting catcodes prematurely. Doing it this way means that, for +% example, @inlineraw{html, foo{bar} gets a parse error instead of being +% ignored. But this isn't important because if people want a literal +% *right* brace they would have to use a command anyway, so they may as +% well use a command to get a left brace too. We could re-use the +% delimiter character idea from \verb, but it seems like overkill. +% +\long\def\inlineraw{\tex \doinlineraw} +\long\def\doinlineraw#1{\doinlinerawtwo #1,\finish} +\def\doinlinerawtwo#1,#2,\finish{% + \def\inlinerawname{#1}% + \ifx\inlinerawname\outfmtnametex \ignorespaces #2\fi + \endgroup % close group opened by \tex. +} + + +\message{glyphs,} +% and logos. + +% @@ prints an @, as does @atchar{}. +\def\@{\char64 } +\let\atchar=\@ + +% @{ @} @lbracechar{} @rbracechar{} all generate brace characters. +% Unless we're in typewriter, use \ecfont because the CM text fonts do +% not have braces, and we don't want to switch into math. 
+\def\mylbrace{{\ifmonospace\else\ecfont\fi \char123}} +\def\myrbrace{{\ifmonospace\else\ecfont\fi \char125}} +\let\{=\mylbrace \let\lbracechar=\{ +\let\}=\myrbrace \let\rbracechar=\} +\begingroup + % Definitions to produce \{ and \} commands for indices, + % and @{ and @} for the aux/toc files. + \catcode`\{ = \other \catcode`\} = \other + \catcode`\[ = 1 \catcode`\] = 2 + \catcode`\! = 0 \catcode`\\ = \other + !gdef!lbracecmd[\{]% + !gdef!rbracecmd[\}]% + !gdef!lbraceatcmd[@{]% + !gdef!rbraceatcmd[@}]% +!endgroup + +% @comma{} to avoid , parsing problems. +\let\comma = , + +% Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent +% Others are defined by plain TeX: @` @' @" @^ @~ @= @u @v @H. +\let\, = \ptexc +\let\dotaccent = \ptexdot +\def\ringaccent#1{{\accent23 #1}} +\let\tieaccent = \ptext +\let\ubaraccent = \ptexb +\let\udotaccent = \d + +% Other special characters: @questiondown @exclamdown @ordf @ordm +% Plain TeX defines: @AA @AE @O @OE @L (plus lowercase versions) @ss. +\def\questiondown{?`} +\def\exclamdown{!`} +\def\ordf{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{a}}} +\def\ordm{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{o}}} + +% Dotless i and dotless j, used for accents. +\def\imacro{i} +\def\jmacro{j} +\def\dotless#1{% + \def\temp{#1}% + \ifx\temp\imacro \ifmmode\imath \else\ptexi \fi + \else\ifx\temp\jmacro \ifmmode\jmath \else\j \fi + \else \errmessage{@dotless can be used only with i or j}% + \fi\fi +} + +% The \TeX{} logo, as in plain, but resetting the spacing so that a +% period following counts as ending a sentence. (Idea found in latex.) +% +\edef\TeX{\TeX \spacefactor=1000 } + +% @LaTeX{} logo. Not quite the same results as the definition in +% latex.ltx, since we use a different font for the raised A; it's most +% convenient for us to use an explicitly smaller font, rather than using +% the \scriptstyle font (since we don't reset \scriptstyle and +% \scriptscriptstyle). 
+% +\def\LaTeX{% + L\kern-.36em + {\setbox0=\hbox{T}% + \vbox to \ht0{\hbox{% + \ifx\textnominalsize\xwordpt + % for 10pt running text, \lllsize (8pt) is too small for the A in LaTeX. + % Revert to plain's \scriptsize, which is 7pt. + \count255=\the\fam $\fam\count255 \scriptstyle A$% + \else + % For 11pt, we can use our lllsize. + \selectfonts\lllsize A% + \fi + }% + \vss + }}% + \kern-.15em + \TeX +} + +% Some math mode symbols. +\def\bullet{$\ptexbullet$} +\def\geq{\ifmmode \ge\else $\ge$\fi} +\def\leq{\ifmmode \le\else $\le$\fi} +\def\minus{\ifmmode -\else $-$\fi} + +% @dots{} outputs an ellipsis using the current font. +% We do .5em per period so that it has the same spacing in the cm +% typewriter fonts as three actual period characters; on the other hand, +% in other typewriter fonts three periods are wider than 1.5em. So do +% whichever is larger. +% +\def\dots{% + \leavevmode + \setbox0=\hbox{...}% get width of three periods + \ifdim\wd0 > 1.5em + \dimen0 = \wd0 + \else + \dimen0 = 1.5em + \fi + \hbox to \dimen0{% + \hskip 0pt plus.25fil + .\hskip 0pt plus1fil + .\hskip 0pt plus1fil + .\hskip 0pt plus.5fil + }% +} + +% @enddots{} is an end-of-sentence ellipsis. +% +\def\enddots{% + \dots + \spacefactor=\endofsentencespacefactor +} + +% @point{}, @result{}, @expansion{}, @print{}, @equiv{}. +% +% Since these characters are used in examples, they should be an even number of +% \tt widths. Each \tt character is 1en, so two makes it 1em. +% +\def\point{$\star$} +\def\arrow{\leavevmode\raise.05ex\hbox to 1em{\hfil$\rightarrow$\hfil}} +\def\result{\leavevmode\raise.05ex\hbox to 1em{\hfil$\Rightarrow$\hfil}} +\def\expansion{\leavevmode\hbox to 1em{\hfil$\mapsto$\hfil}} +\def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}} +\def\equiv{\leavevmode\hbox to 1em{\hfil$\ptexequiv$\hfil}} + +% The @error{} command. +% Adapted from the TeXbook's \boxit. +% +\newbox\errorbox +% +{\tentt \global\dimen0 = 3em}% Width of the box. 
+\dimen2 = .55pt % Thickness of rules +% The text. (`r' is open on the right, `e' somewhat less so on the left.) +\setbox0 = \hbox{\kern-.75pt \reducedsf \putworderror\kern-1.5pt} +% +\setbox\errorbox=\hbox to \dimen0{\hfil + \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right. + \advance\hsize by -2\dimen2 % Rules. + \vbox{% + \hrule height\dimen2 + \hbox{\vrule width\dimen2 \kern3pt % Space to left of text. + \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below. + \kern3pt\vrule width\dimen2}% Space to right. + \hrule height\dimen2} + \hfil} +% +\def\error{\leavevmode\lower.7ex\copy\errorbox} % @pounds{} is a sterling sign, which Knuth put in the CM italic font. % @@ -2075,49 +3047,113 @@ % Theiling, which support regular, slanted, bold and bold slanted (and % "outlined" (blackboard board, sort of) versions, which we don't need). % It is available from http://www.ctan.org/tex-archive/fonts/eurosym. -% +% % Although only regular is the truly official Euro symbol, we ignore % that. The Euro is designed to be slightly taller than the regular % font height. -% +% % feymr - regular % feymo - slanted % feybr - bold % feybo - bold slanted -% +% % There is no good (free) typewriter version, to my knowledge. % A feymr10 euro is ~7.3pt wide, while a normal cmtt10 char is ~5.25pt wide. % Hmm. -% +% % Also doesn't work in math. Do we need to do math with euro symbols? % Hope not. -% -% +% +% \def\euro{{\eurofont e}} \def\eurofont{% % We set the font at each command, rather than predefining it in % \textfonts and the other font-switching commands, so that % installations which never need the symbol don't have to have the % font installed. - % + % % There is only one designed size (nominal 10pt), so we always scale % that to the current nominal size. - % + % % By the way, simply using "at 1em" works for cmr10 and the like, but % does not work for cmbx10 and other extended/shrunken fonts. 
- % + % \def\eurosize{\csname\curfontsize nominalsize\endcsname}% % - \ifx\curfontstyle\bfstylename + \ifx\curfontstyle\bfstylename % bold: \font\thiseurofont = \ifusingit{feybo10}{feybr10} at \eurosize - \else + \else % regular: \font\thiseurofont = \ifusingit{feymo10}{feymr10} at \eurosize \fi \thiseurofont } +% Glyphs from the EC fonts. We don't use \let for the aliases, because +% sometimes we redefine the original macro, and the alias should reflect +% the redefinition. +% +% Use LaTeX names for the Icelandic letters. +\def\DH{{\ecfont \char"D0}} % Eth +\def\dh{{\ecfont \char"F0}} % eth +\def\TH{{\ecfont \char"DE}} % Thorn +\def\th{{\ecfont \char"FE}} % thorn +% +\def\guillemetleft{{\ecfont \char"13}} +\def\guillemotleft{\guillemetleft} +\def\guillemetright{{\ecfont \char"14}} +\def\guillemotright{\guillemetright} +\def\guilsinglleft{{\ecfont \char"0E}} +\def\guilsinglright{{\ecfont \char"0F}} +\def\quotedblbase{{\ecfont \char"12}} +\def\quotesinglbase{{\ecfont \char"0D}} +% +% This positioning is not perfect (see the ogonek LaTeX package), but +% we have the precomposed glyphs for the most common cases. We put the +% tests to use those glyphs in the single \ogonek macro so we have fewer +% dummy definitions to worry about for index entries, etc. +% +% ogonek is also used with other letters in Lithuanian (IOU), but using +% the precomposed glyphs for those is not so easy since they aren't in +% the same EC font. 
+\def\ogonek#1{{% + \def\temp{#1}% + \ifx\temp\macrocharA\Aogonek + \else\ifx\temp\macrochara\aogonek + \else\ifx\temp\macrocharE\Eogonek + \else\ifx\temp\macrochare\eogonek + \else + \ecfont \setbox0=\hbox{#1}% + \ifdim\ht0=1ex\accent"0C #1% + \else\ooalign{\unhbox0\crcr\hidewidth\char"0C \hidewidth}% + \fi + \fi\fi\fi\fi + }% +} +\def\Aogonek{{\ecfont \char"81}}\def\macrocharA{A} +\def\aogonek{{\ecfont \char"A1}}\def\macrochara{a} +\def\Eogonek{{\ecfont \char"86}}\def\macrocharE{E} +\def\eogonek{{\ecfont \char"A6}}\def\macrochare{e} +% +% Use the ec* fonts (cm-super in outline format) for non-CM glyphs. +\def\ecfont{% + % We can't distinguish serif/sans and italic/slanted, but this + % is used for crude hacks anyway (like adding French and German + % quotes to documents typeset with CM, where we lose kerning), so + % hopefully nobody will notice/care. + \edef\ecsize{\csname\curfontsize ecsize\endcsname}% + \edef\nominalsize{\csname\curfontsize nominalsize\endcsname}% + \ifx\curfontstyle\bfstylename + % bold: + \font\thisecfont = ecb\ifusingit{i}{x}\ecsize \space at \nominalsize + \else + % regular: + \font\thisecfont = ec\ifusingit{ti}{rm}\ecsize \space at \nominalsize + \fi + \thisecfont +} + % @registeredsymbol - R in a circle. The font for the R should really % be smaller yet, but lllsize is the best we can do for now. % Adapted from the plain.tex definition of \copyright. @@ -2128,14 +3164,24 @@ }$% } +% @textdegree - the normal degrees sign. +% +\def\textdegree{$^\circ$} + % Laurent Siebenmann reports \Orb undefined with: % Textures 1.7.7 (preloaded format=plain 93.10.14) (68K) 16 APR 2004 02:38 % so we'll define it if necessary. -% -\ifx\Orb\undefined +% +\ifx\Orb\thisisundefined \def\Orb{\mathhexbox20D} \fi +% Quotes. 
+\chardef\quotedblleft="5C +\chardef\quotedblright=`\" +\chardef\quoteleft=`\` +\chardef\quoteright=`\' + \message{page headings,} @@ -2154,8 +3200,9 @@ \newif\ifsetshortcontentsaftertitlepage \let\setshortcontentsaftertitlepage = \setshortcontentsaftertitlepagetrue -\parseargdef\shorttitlepage{\begingroup\hbox{}\vskip 1.5in \chaprm \centerline{#1}% - \endgroup\page\hbox{}\page} +\parseargdef\shorttitlepage{% + \begingroup \hbox{}\vskip 1.5in \chaprm \centerline{#1}% + \endgroup\page\hbox{}\page} \envdef\titlepage{% % Open one extra group, as we want to close it in the middle of \Etitlepage. @@ -2215,17 +3262,14 @@ \finishedtitlepagetrue } -%%% Macros to be used within @titlepage: +% Macros to be used within @titlepage: \let\subtitlerm=\tenrm \def\subtitlefont{\subtitlerm \normalbaselineskip = 13pt \normalbaselines} -\def\authorfont{\authorrm \normalbaselineskip = 16pt \normalbaselines - \let\tt=\authortt} - \parseargdef\title{% \checkenv\titlepage - \leftline{\titlefonts\rm #1} + \leftline{\titlefonts\rmisbold #1} % print a rule at the page bottom also. \finishedtitlepagefalse \vskip4pt \hrule height 4pt width \hsize \vskip4pt @@ -2246,12 +3290,12 @@ \else \checkenv\titlepage \ifseenauthor\else \vskip 0pt plus 1filll \seenauthortrue \fi - {\authorfont \leftline{#1}}% + {\secfonts\rmisbold \leftline{#1}}% \fi } -%%% Set up page headings and footings. +% Set up page headings and footings. \let\thispage=\folio @@ -2299,12 +3343,39 @@ % % Leave some space for the footline. Hopefully ok to assume % @evenfooting will not be used by itself. 
- \global\advance\pageheight by -\baselineskip - \global\advance\vsize by -\baselineskip + \global\advance\pageheight by -12pt + \global\advance\vsize by -12pt } \parseargdef\everyfooting{\oddfootingxxx{#1}\evenfootingxxx{#1}} +% @evenheadingmarks top \thischapter <- chapter at the top of a page +% @evenheadingmarks bottom \thischapter <- chapter at the bottom of a page +% +% The same set of arguments for: +% +% @oddheadingmarks +% @evenfootingmarks +% @oddfootingmarks +% @everyheadingmarks +% @everyfootingmarks + +\def\evenheadingmarks{\headingmarks{even}{heading}} +\def\oddheadingmarks{\headingmarks{odd}{heading}} +\def\evenfootingmarks{\headingmarks{even}{footing}} +\def\oddfootingmarks{\headingmarks{odd}{footing}} +\def\everyheadingmarks#1 {\headingmarks{even}{heading}{#1} + \headingmarks{odd}{heading}{#1} } +\def\everyfootingmarks#1 {\headingmarks{even}{footing}{#1} + \headingmarks{odd}{footing}{#1} } +% #1 = even/odd, #2 = heading/footing, #3 = top/bottom. +\def\headingmarks#1#2#3 {% + \expandafter\let\expandafter\temp \csname get#3headingmarks\endcsname + \global\expandafter\let\csname get#1#2marks\endcsname \temp +} + +\everyheadingmarks bottom +\everyfootingmarks bottom % @headings double turns headings on for double-sided printing. % @headings single turns headings on for single-sided printing. @@ -2318,10 +3389,14 @@ \def\headings #1 {\csname HEADINGS#1\endcsname} -\def\HEADINGSoff{% -\global\evenheadline={\hfil} \global\evenfootline={\hfil} -\global\oddheadline={\hfil} \global\oddfootline={\hfil}} -\HEADINGSoff +\def\headingsoff{% non-global headings elimination + \evenheadline={\hfil}\evenfootline={\hfil}% + \oddheadline={\hfil}\oddfootline={\hfil}% +} + +\def\HEADINGSoff{{\globaldefs=1 \headingsoff}} % global setting +\HEADINGSoff % it's the default + % When we turn headings on, set the page number to 1. 
 % For double-sided printing, put current file name in lower left corner,
 % chapter name on inside top of right hand pages, document
@@ -2372,7 +3447,7 @@
 % This produces Day Month Year style of output.
 % Only define if not already defined, in case a txi-??.tex file has set
 % up a different format (e.g., txi-cs.tex does this).
-\ifx\today\undefined
+\ifx\today\thisisundefined
 \def\today{%
   \number\day\space
   \ifcase\month
@@ -2433,7 +3508,7 @@
     \begingroup
       \advance\leftskip by-\tableindent
       \advance\hsize by\tableindent
-      \advance\rightskip by0pt plus1fil
+      \advance\rightskip by0pt plus1fil\relax
      \leavevmode\unhbox0\par
    \endgroup
    %
@@ -2447,7 +3522,7 @@
  % cause the example and the item to crash together.  So we use this
  % bizarre value of 10001 as a signal to \aboveenvbreak to insert
  % \parskip glue after all.  Section titles are handled this way also.
-  %
+  %
  \penalty 10001
  \endgroup
  \itemxneedsnegativevskipfalse
@@ -2541,9 +3616,18 @@
   \parindent=0pt
   \parskip=\smallskipamount
   \ifdim\parskip=0pt \parskip=2pt \fi
+  %
+  % Try typesetting the item mark so that if the document erroneously says
+  % something like @itemize @samp (intending @table), there's an error
+  % right away at the @itemize.  It's not the best error message in the
+  % world, but it's better than leaving it to the @item.  This means if
+  % the user wants an empty mark, they have to say @w{} not just @w.
   \def\itemcontents{#1}%
+  \setbox0 = \hbox{\itemcontents}%
+  %
   % @itemize with no arg is equivalent to @itemize @bullet.
   \ifx\itemcontents\empty\def\itemcontents{\bullet}\fi
+  %
   \let\item=\itemizeitem
 }
@@ -2564,6 +3648,7 @@
   \ifnum\lastpenalty<10000 \parskip=0in \fi
   \noindent
   \hbox to 0pt{\hss \itemcontents \kern\itemmargin}%
+  %
   \vadjust{\penalty 1200}}% not good to break after first line of item.
   \flushcr
 }
@@ -2785,12 +3870,19 @@
 %
 % @headitem starts a heading row, which we typeset in bold.
 % Assignments have to be global since we are inside the implicit group
-% of an alignment entry.
Note that \everycr resets \everytab.
-\def\headitem{\checkenv\multitable \crcr \global\everytab={\bf}\the\everytab}%
+% of an alignment entry.  \everycr resets \everytab so we don't have to
+% undo it ourselves.
+\def\headitemfont{\b}% for people to use in the template row; not changeable
+\def\headitem{%
+  \checkenv\multitable
+  \crcr
+  \global\everytab={\bf}% can't use \headitemfont since the parsing differs
+  \the\everytab % for the first item
+}%
 %
 % A \tab used to include \hskip1sp.  But then the space in a template
 % line is not enough.  That is bad.  So let's go back to just `&' until
-% we encounter the problem it was intended to solve again.
+% we again encounter the problem the 1sp was intended to solve.
 % --karl, nathan@acm.org, 20apr99.
 \def\tab{\checkenv\multitable &\the\everytab}%
@@ -2902,18 +3994,18 @@
 \setbox0=\vbox{X}\global\multitablelinespace=\the\baselineskip
 \global\advance\multitablelinespace by-\ht0
 \fi
-%% Test to see if parskip is larger than space between lines of
-%% table. If not, do nothing.
-%% If so, set to same dimension as multitablelinespace.
+% Test to see if parskip is larger than space between lines of
+% table. If not, do nothing.
+% If so, set to same dimension as multitablelinespace.
 \ifdim\multitableparskip>\multitablelinespace
 \global\multitableparskip=\multitablelinespace
-\global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller
-   %% than skip between lines in the table.
+\global\advance\multitableparskip-7pt % to keep parskip somewhat smaller
+   % than skip between lines in the table.
 \fi%
 \ifdim\multitableparskip=0pt
 \global\multitableparskip=\multitablelinespace
-\global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller
-   %% than skip between lines in the table.
+\global\advance\multitableparskip-7pt % to keep parskip somewhat smaller
+   % than skip between lines in the table.
 \fi}
@@ -2959,6 +4051,7 @@
 \def\doignore#1{\begingroup
   % Scan in ``verbatim'' mode:
+  \obeylines
   \catcode`\@ = \other
   \catcode`\{ = \other
   \catcode`\} = \other
@@ -2979,16 +4072,16 @@
 \gdef\dodoignore#1{%
   % #1 contains the command name as a string, e.g., `ifinfo'.
   %
-  % Define a command to find the next `@end #1', which must be on a line
-  % by itself.
-  \long\def\doignoretext##1^^M@end #1{\doignoretextyyy##1^^M@#1\_STOP_}%
+  % Define a command to find the next `@end #1'.
+  \long\def\doignoretext##1^^M@end #1{%
+    \doignoretextyyy##1^^M@#1\_STOP_}%
+  %
   % And this command to find another #1 command, at the beginning of a
   % line.  (Otherwise, we would consider a line `@c @ifset', for
   % example, to count as an @ifset for nesting.)
   \long\def\doignoretextyyy##1^^M@#1##2\_STOP_{\doignoreyyy{##2}\_STOP_}%
   %
   % And now expand that command.
-  \obeylines %
   \doignoretext ^^M%
 }%
 }
@@ -3018,7 +4111,12 @@
 }
 
 % Finish off ignored text.
-\def\enddoignore{\endgroup\ignorespaces}
+{ \obeylines%
+  % Ignore anything after the last `@end #1'; this matters in verbatim
+  % environments, where otherwise the newline after an ignored conditional
+  % would result in a blank line in the output.
+  \gdef\enddoignore#1^^M{\endgroup\ignorespaces}%
+}
 
 % @set VAR sets the variable VAR to an empty value.
@@ -3183,11 +4281,11 @@
 \def\dosynindex#1#2#3{%
   % Only do \closeout if we haven't already done it, else we'll end up
   % closing the target index.
-  \expandafter \ifx\csname donesynindex#2\endcsname \undefined
+  \expandafter \ifx\csname donesynindex#2\endcsname \relax
     % The \closeout helps reduce unnecessary open files; the limit on the
    % Acorn RISC OS is a mere 16 files.
\expandafter\closeout\csname#2indfile\endcsname - \expandafter\let\csname\donesynindex#2\endcsname = 1 + \expandafter\let\csname donesynindex#2\endcsname = 1 \fi % redefine \fooindfile: \expandafter\let\expandafter\temp\expandafter=\csname#3indfile\endcsname @@ -3221,11 +4319,41 @@ \escapechar = `\\ % use backslash in output files. \def\@{@}% change to @@ when we switch to @ as escape char in index files. \def\ {\realbackslash\space }% - % Need these in case \tex is in effect and \{ is a \delimiter again. - % But can't use \lbracecmd and \rbracecmd because texindex assumes - % braces and backslashes are used only as delimiters. - \let\{ = \mylbrace - \let\} = \myrbrace + % + % Need these unexpandable (because we define \tt as a dummy) + % definitions when @{ or @} appear in index entry text. Also, more + % complicated, when \tex is in effect and \{ is a \delimiter again. + % We can't use \lbracecmd and \rbracecmd because texindex assumes + % braces and backslashes are used only as delimiters. Perhaps we + % should define @lbrace and @rbrace commands a la @comma. + \def\{{{\tt\char123}}% + \def\}{{\tt\char125}}% + % + % I don't entirely understand this, but when an index entry is + % generated from a macro call, the \endinput which \scanmacro inserts + % causes processing to be prematurely terminated. This is, + % apparently, because \indexsorttmp is fully expanded, and \endinput + % is an expandable command. The redefinition below makes \endinput + % disappear altogether for that purpose -- although logging shows that + % processing continues to some further point. On the other hand, it + % seems \endinput does not hurt in the printed index arg, since that + % is still getting written without apparent harm. + % + % Sample source (mac-idx3.tex, reported by Graham Percival to + % help-texinfo, 22may06): + % @macro funindex {WORD} + % @findex xyz + % @end macro + % ... 
+ % @funindex commtest + % + % The above is not enough to reproduce the bug, but it gives the flavor. + % + % Sample whatsit resulting: + % .@write3{\entry{xyz}{@folio }{@code {xyz@endinput }}} + % + % So: + \let\endinput = \empty % % Do the redefinitions. \commondummies @@ -3244,6 +4372,7 @@ % % Do the redefinitions. \commondummies + \otherbackslash } % Called from \indexdummies and \atdummies. @@ -3251,7 +4380,7 @@ \def\commondummies{% % % \definedummyword defines \#1 as \string\#1\space, thus effectively - % preventing its expansion. This is used only for control% words, + % preventing its expansion. This is used only for control words, % not control letters, because the \space would be incorrect for % control characters, but is needed to separate the control word % from whatever follows. @@ -3270,23 +4399,28 @@ \commondummiesnofonts % \definedummyletter\_% + \definedummyletter\-% % % Non-English letters. \definedummyword\AA \definedummyword\AE + \definedummyword\DH \definedummyword\L + \definedummyword\O \definedummyword\OE - \definedummyword\O + \definedummyword\TH \definedummyword\aa \definedummyword\ae + \definedummyword\dh + \definedummyword\exclamdown \definedummyword\l + \definedummyword\o \definedummyword\oe - \definedummyword\o - \definedummyword\ss - \definedummyword\exclamdown - \definedummyword\questiondown \definedummyword\ordf \definedummyword\ordm + \definedummyword\questiondown + \definedummyword\ss + \definedummyword\th % % Although these internal commands shouldn't show up, sometimes they do. \definedummyword\bf @@ -3302,21 +4436,39 @@ \definedummyword\TeX % % Assorted special characters. 
+ \definedummyword\arrow \definedummyword\bullet \definedummyword\comma \definedummyword\copyright \definedummyword\registeredsymbol \definedummyword\dots \definedummyword\enddots + \definedummyword\entrybreak \definedummyword\equiv \definedummyword\error \definedummyword\euro \definedummyword\expansion + \definedummyword\geq + \definedummyword\guillemetleft + \definedummyword\guillemetright + \definedummyword\guilsinglleft + \definedummyword\guilsinglright + \definedummyword\lbracechar + \definedummyword\leq \definedummyword\minus + \definedummyword\ogonek \definedummyword\pounds \definedummyword\point \definedummyword\print + \definedummyword\quotedblbase + \definedummyword\quotedblleft + \definedummyword\quotedblright + \definedummyword\quoteleft + \definedummyword\quoteright + \definedummyword\quotesinglbase + \definedummyword\rbracechar \definedummyword\result + \definedummyword\textdegree % % We want to disable all macros so that they are not expanded by \write. \macrolist @@ -3330,63 +4482,72 @@ % \commondummiesnofonts: common to \commondummies and \indexnofonts. % -% Better have this without active chars. -{ - \catcode`\~=\other - \gdef\commondummiesnofonts{% - % Control letters and accents. - \definedummyletter\!% - \definedummyaccent\"% - \definedummyaccent\'% - \definedummyletter\*% - \definedummyaccent\,% - \definedummyletter\.% - \definedummyletter\/% - \definedummyletter\:% - \definedummyaccent\=% - \definedummyletter\?% - \definedummyaccent\^% - \definedummyaccent\`% - \definedummyaccent\~% - \definedummyword\u - \definedummyword\v - \definedummyword\H - \definedummyword\dotaccent - \definedummyword\ringaccent - \definedummyword\tieaccent - \definedummyword\ubaraccent - \definedummyword\udotaccent - \definedummyword\dotless - % - % Texinfo font commands. - \definedummyword\b - \definedummyword\i - \definedummyword\r - \definedummyword\sc - \definedummyword\t - % - % Commands that take arguments. 
- \definedummyword\acronym - \definedummyword\cite - \definedummyword\code - \definedummyword\command - \definedummyword\dfn - \definedummyword\emph - \definedummyword\env - \definedummyword\file - \definedummyword\kbd - \definedummyword\key - \definedummyword\math - \definedummyword\option - \definedummyword\samp - \definedummyword\strong - \definedummyword\tie - \definedummyword\uref - \definedummyword\url - \definedummyword\var - \definedummyword\verb - \definedummyword\w - } +\def\commondummiesnofonts{% + % Control letters and accents. + \definedummyletter\!% + \definedummyaccent\"% + \definedummyaccent\'% + \definedummyletter\*% + \definedummyaccent\,% + \definedummyletter\.% + \definedummyletter\/% + \definedummyletter\:% + \definedummyaccent\=% + \definedummyletter\?% + \definedummyaccent\^% + \definedummyaccent\`% + \definedummyaccent\~% + \definedummyword\u + \definedummyword\v + \definedummyword\H + \definedummyword\dotaccent + \definedummyword\ogonek + \definedummyword\ringaccent + \definedummyword\tieaccent + \definedummyword\ubaraccent + \definedummyword\udotaccent + \definedummyword\dotless + % + % Texinfo font commands. + \definedummyword\b + \definedummyword\i + \definedummyword\r + \definedummyword\sansserif + \definedummyword\sc + \definedummyword\slanted + \definedummyword\t + % + % Commands that take arguments. 
+ \definedummyword\abbr + \definedummyword\acronym + \definedummyword\anchor + \definedummyword\cite + \definedummyword\code + \definedummyword\command + \definedummyword\dfn + \definedummyword\dmn + \definedummyword\email + \definedummyword\emph + \definedummyword\env + \definedummyword\file + \definedummyword\image + \definedummyword\indicateurl + \definedummyword\inforef + \definedummyword\kbd + \definedummyword\key + \definedummyword\math + \definedummyword\option + \definedummyword\pxref + \definedummyword\ref + \definedummyword\samp + \definedummyword\strong + \definedummyword\tie + \definedummyword\uref + \definedummyword\url + \definedummyword\var + \definedummyword\verb + \definedummyword\w + \definedummyword\xref } % \indexnofonts is used when outputting the strings to sort the index @@ -3399,7 +4560,7 @@ \def\definedummyaccent##1{\let##1\asis}% % We can just ignore other control letters. \def\definedummyletter##1{\let##1\empty}% - % Hopefully, all control words can become @asis. + % All control words become @asis by default; overrides below. \let\definedummyword\definedummyaccent % \commondummiesnofonts @@ -3411,60 +4572,95 @@ % \def\ { }% \def\@{@}% - % how to handle braces? \def\_{\normalunderscore}% + \def\-{}% @- shouldn't affect sorting + % + % Unfortunately, texindex is not prepared to handle braces in the + % content at all. So for index sorting, we map @{ and @} to strings + % starting with |, since that ASCII character is between ASCII { and }. + \def\{{|a}% + \def\lbracechar{|a}% + % + \def\}{|b}% + \def\rbracechar{|b}% % % Non-English letters. 
\def\AA{AA}% \def\AE{AE}% + \def\DH{DZZ}% \def\L{L}% \def\OE{OE}% \def\O{O}% + \def\TH{ZZZ}% \def\aa{aa}% \def\ae{ae}% + \def\dh{dzz}% + \def\exclamdown{!}% \def\l{l}% \def\oe{oe}% - \def\o{o}% - \def\ss{ss}% - \def\exclamdown{!}% - \def\questiondown{?}% \def\ordf{a}% \def\ordm{o}% + \def\o{o}% + \def\questiondown{?}% + \def\ss{ss}% + \def\th{zzz}% % \def\LaTeX{LaTeX}% \def\TeX{TeX}% % % Assorted special characters. % (The following {} will end up in the sort string, but that's ok.) + \def\arrow{->}% \def\bullet{bullet}% \def\comma{,}% \def\copyright{copyright}% - \def\registeredsymbol{R}% \def\dots{...}% \def\enddots{...}% \def\equiv{==}% \def\error{error}% \def\euro{euro}% \def\expansion{==>}% + \def\geq{>=}% + \def\guillemetleft{<<}% + \def\guillemetright{>>}% + \def\guilsinglleft{<}% + \def\guilsinglright{>}% + \def\leq{<=}% \def\minus{-}% + \def\point{.}% \def\pounds{pounds}% - \def\point{.}% \def\print{-|}% + \def\quotedblbase{"}% + \def\quotedblleft{"}% + \def\quotedblright{"}% + \def\quoteleft{`}% + \def\quoteright{'}% + \def\quotesinglbase{,}% + \def\registeredsymbol{R}% \def\result{=>}% + \def\textdegree{o}% + % + \expandafter\ifx\csname SETtxiindexlquoteignore\endcsname\relax + \else \indexlquoteignore \fi % % We need to get rid of all macros, leaving only the arguments (if present). % Of course this is not nearly correct, but it is the best we can do for now. % makeinfo does not expand macros in the argument to @deffn, which ends up % writing an index entry, and texindex isn't prepared for an index sort entry % that starts with \. - % + % % Since macro invocations are followed by braces, we can just redefine them % to take a single TeX argument. The case of a macro invocation that % goes to end-of-line is not handled. - % + % \macrolist } +% Undocumented (for FSFS 2nd ed.): @set txiindexlquoteignore makes us +% ignore left quotes in the sort term. 
+{\catcode`\`=\active + \gdef\indexlquoteignore{\let`=\empty}} + \let\indexbackslash=0 %overridden during \printindex. \let\SETmarginindex=\relax % put index entries in margin (undocumented)? @@ -3490,11 +4686,7 @@ % \edef\writeto{\csname#1indfile\endcsname}% % - \ifvmode - \dosubindsanitize - \else - \dosubindwrite - \fi + \safewhatsit\dosubindwrite }% \fi } @@ -3531,13 +4723,13 @@ \temp } -% Take care of unwanted page breaks: +% Take care of unwanted page breaks/skips around a whatsit: % % If a skip is the last thing on the list now, preserve it % by backing up by \lastskip, doing the \write, then inserting % the skip again. Otherwise, the whatsit generated by the -% \write will make \lastskip zero. The result is that sequences -% like this: +% \write or \pdfdest will make \lastskip zero. The result is that +% sequences like this: % @end defun % @tindex whatever % @defun ... @@ -3561,25 +4753,30 @@ % \edef\zeroskipmacro{\expandafter\the\csname z@skip\endcsname} % +\newskip\whatsitskip +\newcount\whatsitpenalty +% % ..., ready, GO: % -\def\dosubindsanitize{% +\def\safewhatsit#1{\ifhmode + #1% + \else % \lastskip and \lastpenalty cannot both be nonzero simultaneously. - \skip0 = \lastskip + \whatsitskip = \lastskip \edef\lastskipmacro{\the\lastskip}% - \count255 = \lastpenalty + \whatsitpenalty = \lastpenalty % % If \lastskip is nonzero, that means the last item was a % skip. And since a skip is discardable, that means this - % -\skip0 glue we're inserting is preceded by a + % -\whatsitskip glue we're inserting is preceded by a % non-discardable item, therefore it is not a potential % breakpoint, therefore no \nobreak needed. 
\ifx\lastskipmacro\zeroskipmacro \else - \vskip-\skip0 + \vskip-\whatsitskip \fi % - \dosubindwrite + #1% % \ifx\lastskipmacro\zeroskipmacro % If \lastskip was zero, perhaps the last item was a penalty, and @@ -3587,20 +4784,19 @@ % to re-insert the same penalty (values >10000 are used for various % signals); since we just inserted a non-discardable item, any % following glue (such as a \parskip) would be a breakpoint. For example: - % % @deffn deffn-whatever % @vindex index-whatever % Description. % would allow a break between the index-whatever whatsit % and the "Description." paragraph. - \ifnum\count255>9999 \penalty\count255 \fi + \ifnum\whatsitpenalty>9999 \penalty\whatsitpenalty \fi \else % On the other hand, if we had a nonzero \lastskip, % this make-up glue would be preceded by a non-discardable item % (the whatsit from the \write), so we must insert a \nobreak. - \nobreak\vskip\skip0 + \nobreak\vskip\whatsitskip \fi -} +\fi} % The index entry written in the file actually looks like % \entry {sortstring}{page}{topic} @@ -3642,6 +4838,7 @@ % \smallfonts \rm \tolerance = 9500 + \plainfrenchspacing \everypar = {}% don't want the \kern\-parindent from indentation suppression. % % See if the index file exists and is nonempty. @@ -3715,10 +4912,9 @@ % % A straightforward implementation would start like this: % \def\entry#1#2{... -% But this frozes the catcodes in the argument, and can cause problems to +% But this freezes the catcodes in the argument, and can cause problems to % @code, which sets - active. This problem was fixed by a kludge--- % ``-'' was active throughout whole index, but this isn't really right. -% % The right solution is to prevent \entry from swallowing the whole text. % --kasal, 21nov03 \def\entry{% @@ -3755,10 +4951,17 @@ % columns. \vskip 0pt plus1pt % + % When reading the text of entry, convert explicit line breaks + % from @* into spaces. The user might give these in long section + % titles, for instance. 
+ \def\*{\unskip\space\ignorespaces}% + \def\entrybreak{\hfil\break}% + % % Swallow the left brace of the text (first parameter): \afterassignment\doentry \let\temp = } +\def\entrybreak{\unskip\space\ignorespaces}% \def\doentry{% \bgroup % Instead of the swallowed brace. \noindent @@ -3771,11 +4974,8 @@ % The following is kludged to not output a line of dots in the index if % there are no page numbers. The next person who breaks this will be % cursed by a Unix daemon. - \def\tempa{{\rm }}% - \def\tempb{#1}% - \edef\tempc{\tempa}% - \edef\tempd{\tempb}% - \ifx\tempc\tempd + \setbox\boxA = \hbox{#1}% + \ifdim\wd\boxA = 0pt \ % \else % @@ -3799,9 +4999,9 @@ \endgroup } -% Like \dotfill except takes at least 1 em. +% Like plain.tex's \dotfill, except uses up at least 1 em. \def\indexdotfill{\cleaders - \hbox{$\mathsurround=0pt \mkern1.5mu ${\it .}$ \mkern1.5mu$}\hskip 1em plus 1fill} + \hbox{$\mathsurround=0pt \mkern1.5mu.\mkern1.5mu$}\hskip 1em plus 1fill} \def\primary #1{\line{#1\hfil}} @@ -3911,6 +5111,34 @@ % % All done with double columns. \def\enddoublecolumns{% + % The following penalty ensures that the page builder is exercised + % _before_ we change the output routine. This is necessary in the + % following situation: + % + % The last section of the index consists only of a single entry. + % Before this section, \pagetotal is less than \pagegoal, so no + % break occurs before the last section starts. However, the last + % section, consisting of \initial and the single \entry, does not + % fit on the page and has to be broken off. Without the following + % penalty the page builder will not be exercised until \eject + % below, and by that time we'll already have changed the output + % routine to the \balancecolumns version, so the next-to-last + % double-column page will be processed with \balancecolumns, which + % is wrong: The two columns will go to the main vertical list, with + % the broken-off section in the recent contributions. 
As soon as + % the output routine finishes, TeX starts reconsidering the page + % break. The two columns and the broken-off section both fit on the + % page, because the two columns now take up only half of the page + % goal. When TeX sees \eject from below which follows the final + % section, it invokes the new output routine that we've set after + % \balancecolumns below; \onepageout will try to fit the two columns + % and the final section into the vbox of \pageheight (see + % \pagebody), causing an overfull box. + % + % Note that glue won't work here, because glue does not exercise the + % page builder, unlike penalties (see The TeXbook, pp. 280-281). + \penalty0 + % \output = {% % Split the last of the double-column material. Leave it on the % current page, no automatic page break. @@ -3966,7 +5194,22 @@ \message{sectioning,} % Chapters, sections, etc. -% \unnumberedno is an oxymoron, of course. But we count the unnumbered +% Let's start with @part. +\outer\parseargdef\part{\partzzz{#1}} +\def\partzzz#1{% + \chapoddpage + \null + \vskip.3\vsize % move it down on the page a bit + \begingroup + \noindent \titlefonts\rmisbold #1\par % the text + \let\lastnode=\empty % no node to associate with + \writetocentry{part}{#1}{}% but put it in the toc + \headingsoff % no headline or footline on the part page + \chapoddpage + \endgroup +} + +% \unnumberedno is an oxymoron. But we count the unnumbered % sections so that we can refer to them unambiguously in the pdf % outlines by their "section number". We avoid collisions with chapter % numbers by starting them at 10000. (If a document ever has 10000 @@ -4020,11 +5263,15 @@ \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi} -% Each @chapter defines this as the name of the chapter. -% page headings and footings can use it. @section does likewise. -% However, they are not reliable, because we don't use marks. 
+% Each @chapter defines these (using marks) as the number+name, number +% and name of the chapter. Page headings and footings can use +% these. @section does likewise. \def\thischapter{} +\def\thischapternum{} +\def\thischaptername{} \def\thissection{} +\def\thissectionnum{} +\def\thissectionname{} \newcount\absseclevel % used to calculate proper heading level \newcount\secbase\secbase=0 % @raisesections/@lowersections modify this count @@ -4041,8 +5288,8 @@ \chardef\maxseclevel = 3 % % A numbered section within an unnumbered changes to unnumbered too. -% To achive this, remember the "biggest" unnum. sec. we are currently in: -\chardef\unmlevel = \maxseclevel +% To achieve this, remember the "biggest" unnum. sec. we are currently in: +\chardef\unnlevel = \maxseclevel % % Trace whether the current chapter is an appendix or not: % \chapheadtype is "N" or "A", unnumbered chapters are ignored. @@ -4067,8 +5314,8 @@ % The heading type: \def\headtype{#1}% \if \headtype U% - \ifnum \absseclevel < \unmlevel - \chardef\unmlevel = \absseclevel + \ifnum \absseclevel < \unnlevel + \chardef\unnlevel = \absseclevel \fi \else % Check for appendix sections: @@ -4080,10 +5327,10 @@ \fi\fi \fi % Check for numbered within unnumbered: - \ifnum \absseclevel > \unmlevel + \ifnum \absseclevel > \unnlevel \def\headtype{U}% \else - \chardef\unmlevel = 3 + \chardef\unnlevel = 3 \fi \fi % Now print the heading: @@ -4137,7 +5384,9 @@ \gdef\chaplevelprefix{\the\chapno.}% \resetallfloatnos % - \message{\putwordChapter\space \the\chapno}% + % \putwordChapter can contain complex things in translations. + \toks0=\expandafter{\putwordChapter}% + \message{\the\toks0 \space \the\chapno}% % % Write the actual heading. 
\chapmacro{#1}{Ynumbered}{\the\chapno}% @@ -4148,15 +5397,17 @@ \global\let\subsubsection = \numberedsubsubsec } -\outer\parseargdef\appendix{\apphead0{#1}} % normally apphead0 calls appendixzzz +\outer\parseargdef\appendix{\apphead0{#1}} % normally calls appendixzzz +% \def\appendixzzz#1{% \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\appendixno by 1 \gdef\chaplevelprefix{\appendixletter.}% \resetallfloatnos % - \def\appendixnum{\putwordAppendix\space \appendixletter}% - \message{\appendixnum}% + % \putwordAppendix can contain complex things in translations. + \toks0=\expandafter{\putwordAppendix}% + \message{\the\toks0 \space \appendixletter}% % \chapmacro{#1}{Yappendix}{\appendixletter}% % @@ -4165,7 +5416,8 @@ \global\let\subsubsection = \appendixsubsubsec } -\outer\parseargdef\unnumbered{\unnmhead0{#1}} % normally unnmhead0 calls unnumberedzzz +% normally unnmhead0 calls unnumberedzzz: +\outer\parseargdef\unnumbered{\unnmhead0{#1}} \def\unnumberedzzz#1{% \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\unnumberedno by 1 @@ -4209,40 +5461,47 @@ \let\top\unnumbered % Sections. 
+% \outer\parseargdef\numberedsec{\numhead1{#1}} % normally calls seczzz \def\seczzz#1{% \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1 \sectionheading{#1}{sec}{Ynumbered}{\the\chapno.\the\secno}% } -\outer\parseargdef\appendixsection{\apphead1{#1}} % normally calls appendixsectionzzz +% normally calls appendixsectionzzz: +\outer\parseargdef\appendixsection{\apphead1{#1}} \def\appendixsectionzzz#1{% \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1 \sectionheading{#1}{sec}{Yappendix}{\appendixletter.\the\secno}% } \let\appendixsec\appendixsection -\outer\parseargdef\unnumberedsec{\unnmhead1{#1}} % normally calls unnumberedseczzz +% normally calls unnumberedseczzz: +\outer\parseargdef\unnumberedsec{\unnmhead1{#1}} \def\unnumberedseczzz#1{% \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1 \sectionheading{#1}{sec}{Ynothing}{\the\unnumberedno.\the\secno}% } % Subsections. -\outer\parseargdef\numberedsubsec{\numhead2{#1}} % normally calls numberedsubseczzz +% +% normally calls numberedsubseczzz: +\outer\parseargdef\numberedsubsec{\numhead2{#1}} \def\numberedsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Ynumbered}{\the\chapno.\the\secno.\the\subsecno}% } -\outer\parseargdef\appendixsubsec{\apphead2{#1}} % normally calls appendixsubseczzz +% normally calls appendixsubseczzz: +\outer\parseargdef\appendixsubsec{\apphead2{#1}} \def\appendixsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Yappendix}% {\appendixletter.\the\secno.\the\subsecno}% } -\outer\parseargdef\unnumberedsubsec{\unnmhead2{#1}} %normally calls unnumberedsubseczzz +% normally calls unnumberedsubseczzz: +\outer\parseargdef\unnumberedsubsec{\unnmhead2{#1}} \def\unnumberedsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Ynothing}% @@ -4250,21 +5509,25 @@ } % Subsubsections. 
-\outer\parseargdef\numberedsubsubsec{\numhead3{#1}} % normally numberedsubsubseczzz +% +% normally numberedsubsubseczzz: +\outer\parseargdef\numberedsubsubsec{\numhead3{#1}} \def\numberedsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Ynumbered}% {\the\chapno.\the\secno.\the\subsecno.\the\subsubsecno}% } -\outer\parseargdef\appendixsubsubsec{\apphead3{#1}} % normally appendixsubsubseczzz +% normally appendixsubsubseczzz: +\outer\parseargdef\appendixsubsubsec{\apphead3{#1}} \def\appendixsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Yappendix}% {\appendixletter.\the\secno.\the\subsecno.\the\subsubsecno}% } -\outer\parseargdef\unnumberedsubsubsec{\unnmhead3{#1}} %normally unnumberedsubsubseczzz +% normally unnumberedsubsubseczzz: +\outer\parseargdef\unnumberedsubsubsec{\unnmhead3{#1}} \def\unnumberedsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Ynothing}% @@ -4288,7 +5551,6 @@ % 3) Likewise, headings look best if no \parindent is used, and % if justification is not attempted. Hence \raggedright. - \def\majorheading{% {\advance\chapheadingskip by 10pt \chapbreak }% \parsearg\chapheadingzzz @@ -4297,8 +5559,8 @@ \def\chapheading{\chapbreak \parsearg\chapheadingzzz} \def\chapheadingzzz#1{% {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 - \parindent=0pt\raggedright - \rm #1\hfill}}% + \parindent=0pt\ptexraggedright + \rmisbold #1\hfill}}% \bigskip \par\penalty 200\relax \suppressfirstparagraphindent } @@ -4315,17 +5577,28 @@ % (including whitespace, linebreaking, etc. around it), % given all the information in convenient, parsed form. 
-%%% Args are the skip and penalty (usually negative) +% Args are the skip and penalty (usually negative) \def\dobreak#1#2{\par\ifdim\lastskip<#1\removelastskip\penalty#2\vskip#1\fi} -%%% Define plain chapter starts, and page on/off switching for it % Parameter controlling skip before chapter headings (if needed) - \newskip\chapheadingskip +% Define plain chapter starts, and page on/off switching for it. \def\chapbreak{\dobreak \chapheadingskip {-4000}} \def\chappager{\par\vfill\supereject} -\def\chapoddpage{\chappager \ifodd\pageno \else \hbox to 0pt{} \chappager\fi} +% Because \domark is called before \chapoddpage, the filler page will +% get the headings for the next chapter, which is wrong. But we don't +% care -- we just disable all headings on the filler page. +\def\chapoddpage{% + \chappager + \ifodd\pageno \else + \begingroup + \headingsoff + \null + \chappager + \endgroup + \fi +} \def\setchapternewpage #1 {\csname CHAPPAG#1\endcsname} @@ -4359,41 +5632,78 @@ \def\Yappendixkeyword{Yappendix} % \def\chapmacro#1#2#3{% + % Insert the first mark before the heading break (see notes for \domark). + \let\prevchapterdefs=\lastchapterdefs + \let\prevsectiondefs=\lastsectiondefs + \gdef\lastsectiondefs{\gdef\thissectionname{}\gdef\thissectionnum{}% + \gdef\thissection{}}% + % + \def\temptype{#2}% + \ifx\temptype\Ynothingkeyword + \gdef\lastchapterdefs{\gdef\thischaptername{#1}\gdef\thischapternum{}% + \gdef\thischapter{\thischaptername}}% + \else\ifx\temptype\Yomitfromtockeyword + \gdef\lastchapterdefs{\gdef\thischaptername{#1}\gdef\thischapternum{}% + \gdef\thischapter{}}% + \else\ifx\temptype\Yappendixkeyword + \toks0={#1}% + \xdef\lastchapterdefs{% + \gdef\noexpand\thischaptername{\the\toks0}% + \gdef\noexpand\thischapternum{\appendixletter}% + % \noexpand\putwordAppendix avoids expanding indigestible + % commands in some of the translations. 
+ \gdef\noexpand\thischapter{\noexpand\putwordAppendix{} + \noexpand\thischapternum: + \noexpand\thischaptername}% + }% + \else + \toks0={#1}% + \xdef\lastchapterdefs{% + \gdef\noexpand\thischaptername{\the\toks0}% + \gdef\noexpand\thischapternum{\the\chapno}% + % \noexpand\putwordChapter avoids expanding indigestible + % commands in some of the translations. + \gdef\noexpand\thischapter{\noexpand\putwordChapter{} + \noexpand\thischapternum: + \noexpand\thischaptername}% + }% + \fi\fi\fi + % + % Output the mark. Pass it through \safewhatsit, to take care of + % the preceding space. + \safewhatsit\domark + % + % Insert the chapter heading break. \pchapsepmacro + % + % Now the second mark, after the heading break. No break points + % between here and the heading. + \let\prevchapterdefs=\lastchapterdefs + \let\prevsectiondefs=\lastsectiondefs + \domark + % {% - \chapfonts \rm + \chapfonts \rmisbold % - % Have to define \thissection before calling \donoderef, because the + % Have to define \lastsection before calling \donoderef, because the % xref code eventually uses it. On the other hand, it has to be called % after \pchapsepmacro, or the headline will change too soon. - \gdef\thissection{#1}% - \gdef\thischaptername{#1}% + \gdef\lastsection{#1}% % % Only insert the separating space if we have a chapter/appendix % number, and don't print the unnumbered ``number''. - \def\temptype{#2}% \ifx\temptype\Ynothingkeyword \setbox0 = \hbox{}% \def\toctype{unnchap}% - \gdef\thischapter{#1}% \else\ifx\temptype\Yomitfromtockeyword \setbox0 = \hbox{}% contents like unnumbered, but no toc entry \def\toctype{omit}% - \gdef\thischapter{}% \else\ifx\temptype\Yappendixkeyword \setbox0 = \hbox{\putwordAppendix{} #3\enspace}% \def\toctype{app}% - % We don't substitute the actual chapter name into \thischapter - % because we don't want its macros evaluated now. And we don't - % use \thissection because that changes with each section. 
- % - \xdef\thischapter{\putwordAppendix{} \appendixletter: - \noexpand\thischaptername}% \else \setbox0 = \hbox{#3\enspace}% \def\toctype{numchap}% - \xdef\thischapter{\putwordChapter{} \the\chapno: - \noexpand\thischaptername}% \fi\fi\fi % % Write the toc entry for this chapter. Must come before the @@ -4409,7 +5719,8 @@ \donoderef{#2}% % % Typeset the actual heading. - \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright + \nobreak % Avoid page breaks at the interline glue. + \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \ptexraggedright \hangindent=\wd0 \centerparametersmaybe \unhbox0 #1\par}% }% @@ -4433,8 +5744,8 @@ % \def\unnchfopen #1{% \chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 - \parindent=0pt\raggedright - \rm #1\hfill}}\bigskip \par\nobreak + \parindent=0pt\ptexraggedright + \rmisbold #1\hfill}}\bigskip \par\nobreak } \def\chfopen #1#2{\chapoddpage {\chapfonts \vbox to 3in{\vfil \hbox to\hsize{\hfil #2} \hbox to\hsize{\hfil #1} \vfil}}% @@ -4443,7 +5754,7 @@ \def\centerchfopen #1{% \chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt - \hfill {\rm #1}\hfill}}\bigskip \par\nobreak + \hfill {\rmisbold #1}\hfill}}\bigskip \par\nobreak } \def\CHAPFopen{% \global\let\chapmacro=\chfopen @@ -4471,47 +5782,110 @@ % the section type for xrefs (Ynumbered, Ynothing, Yappendix), #4 is the % section number. % +\def\seckeyword{sec} +% \def\sectionheading#1#2#3#4{% {% + \checkenv{}% should not be in an environment. + % % Switch to the right set of fonts. - \csname #2fonts\endcsname \rm + \csname #2fonts\endcsname \rmisbold + % + \def\sectionlevel{#2}% + \def\temptype{#3}% + % + % Insert first mark before the heading break (see notes for \domark). 
+ \let\prevsectiondefs=\lastsectiondefs + \ifx\temptype\Ynothingkeyword + \ifx\sectionlevel\seckeyword + \gdef\lastsectiondefs{\gdef\thissectionname{#1}\gdef\thissectionnum{}% + \gdef\thissection{\thissectionname}}% + \fi + \else\ifx\temptype\Yomitfromtockeyword + % Don't redefine \thissection. + \else\ifx\temptype\Yappendixkeyword + \ifx\sectionlevel\seckeyword + \toks0={#1}% + \xdef\lastsectiondefs{% + \gdef\noexpand\thissectionname{\the\toks0}% + \gdef\noexpand\thissectionnum{#4}% + % \noexpand\putwordSection avoids expanding indigestible + % commands in some of the translations. + \gdef\noexpand\thissection{\noexpand\putwordSection{} + \noexpand\thissectionnum: + \noexpand\thissectionname}% + }% + \fi + \else + \ifx\sectionlevel\seckeyword + \toks0={#1}% + \xdef\lastsectiondefs{% + \gdef\noexpand\thissectionname{\the\toks0}% + \gdef\noexpand\thissectionnum{#4}% + % \noexpand\putwordSection avoids expanding indigestible + % commands in some of the translations. + \gdef\noexpand\thissection{\noexpand\putwordSection{} + \noexpand\thissectionnum: + \noexpand\thissectionname}% + }% + \fi + \fi\fi\fi + % + % Go into vertical mode. Usually we'll already be there, but we + % don't want the following whatsit to end up in a preceding paragraph + % if the document didn't happen to have a blank line. + \par + % + % Output the mark. Pass it through \safewhatsit, to take care of + % the preceding space. + \safewhatsit\domark % % Insert space above the heading. \csname #2headingbreak\endcsname % + % Now the second mark, after the heading break. No break points + % between here and the heading. + \let\prevsectiondefs=\lastsectiondefs + \domark + % % Only insert the space after the number if we have a section number. 
- \def\sectionlevel{#2}% - \def\temptype{#3}% - % \ifx\temptype\Ynothingkeyword \setbox0 = \hbox{}% \def\toctype{unn}% - \gdef\thissection{#1}% + \gdef\lastsection{#1}% \else\ifx\temptype\Yomitfromtockeyword % for @headings -- no section number, don't include in toc, - % and don't redefine \thissection. + % and don't redefine \lastsection. \setbox0 = \hbox{}% \def\toctype{omit}% \let\sectionlevel=\empty \else\ifx\temptype\Yappendixkeyword \setbox0 = \hbox{#4\enspace}% \def\toctype{app}% - \gdef\thissection{#1}% + \gdef\lastsection{#1}% \else \setbox0 = \hbox{#4\enspace}% \def\toctype{num}% - \gdef\thissection{#1}% + \gdef\lastsection{#1}% \fi\fi\fi % - % Write the toc entry (before \donoderef). See comments in \chfplain. + % Write the toc entry (before \donoderef). See comments in \chapmacro. \writetocentry{\toctype\sectionlevel}{#1}{#4}% % % Write the node reference (= pdf destination for pdftex). - % Again, see comments in \chfplain. + % Again, see comments in \chapmacro. \donoderef{#3}% % + % Interline glue will be inserted when the vbox is completed. + % That glue will be a valid breakpoint for the page, since it'll be + % preceded by a whatsit (usually from the \donoderef, or from the + % \writetocentry if there was no node). We don't want to allow that + % break, since then the whatsits could end up on page n while the + % section is on page n+1, thus toc/etc. are wrong. Debian bug 276000. + \nobreak + % % Output the actual section heading. - \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright + \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \ptexraggedright \hangindent=\wd0 % zero if no section number \unhbox0 #1}% }% @@ -4525,15 +5899,15 @@ % % We'll almost certainly start a paragraph next, so don't let that % glue accumulate. (Not a breakpoint because it's preceded by a - % discardable item.) + % discardable item.) 
However, when a paragraph is not started next + % (\startdefun, \cartouche, \center, etc.), this needs to be wiped out + % or the negative glue will cause weirdly wrong output, typically + % obscuring the section heading with something else. \vskip-\parskip - % - % This is purely so the last item on the list is a known \penalty > - % 10000. This is so \startdefun can avoid allowing breakpoints after - % section headings. Otherwise, it would insert a valid breakpoint between: - % - % @section sec-whatever - % @deffn def-whatever + % + % This is so the last item on the main vertical list is a known + % \penalty > 10000, so \startdefun, etc., can recognize the situation + % and do the needful. \penalty 10001 } @@ -4572,7 +5946,7 @@ \edef\temp{% \write\tocfile{@#1entry{#2}{#3}{\lastnode}{\noexpand\folio}}}% \temp - } + }% \fi \fi % @@ -4589,7 +5963,7 @@ % These characters do not print properly in the Computer Modern roman % fonts, so we must take special care. This is more or less redundant % with the Texinfo input format setup at the end of this file. -% +% \def\activecatcodes{% \catcode`\"=\active \catcode`\$=\active @@ -4607,7 +5981,7 @@ \def\readtocfile{% \setupdatafile \activecatcodes - \input \jobname.toc + \input \tocreadfilename } \newskip\contentsrightmargin \contentsrightmargin=1in @@ -4626,7 +6000,6 @@ % % Don't need to put `Contents' or `Short Contents' in the headline. % It is abundantly clear what they are. - \def\thischapter{}% \chapmacro{#1}{Yomitfromtoc}{}% % \savepageno = \pageno @@ -4638,11 +6011,16 @@ \ifnum \pageno>0 \global\pageno = \lastnegativepageno \fi } +% redefined for the two-volume lispref. We always output on +% \jobname.toc even if this is redefined. +% +\def\tocreadfilename{\jobname.toc} % Normal (long) toc. 
+% \def\contents{% \startcontents{\putwordTOC}% - \openin 1 \jobname.toc + \openin 1 \tocreadfilename\space \ifeof 1 \else \readtocfile \fi @@ -4661,6 +6039,7 @@ \def\summarycontents{% \startcontents{\putwordShortTOC}% % + \let\partentry = \shortpartentry \let\numchapentry = \shortchapentry \let\appentry = \shortchapentry \let\unnchapentry = \shortunnchapentry @@ -4680,7 +6059,7 @@ \let\numsubsubsecentry = \numsecentry \let\appsubsubsecentry = \numsecentry \let\unnsubsubsecentry = \numsecentry - \openin 1 \jobname.toc + \openin 1 \tocreadfilename\space \ifeof 1 \else \readtocfile \fi @@ -4716,6 +6095,19 @@ % The last argument is the page number. % The arguments in between are the chapter number, section number, ... +% Parts, in the main contents. Replace the part number, which doesn't +% exist, with an empty box. Let's hope all the numbers have the same width. +% Also ignore the page number, which is conventionally not printed. +\def\numeralbox{\setbox0=\hbox{8}\hbox to \wd0{\hfil}} +\def\partentry#1#2#3#4{\dochapentry{\numeralbox\labelspace#1}{}} +% +% Parts, in the short toc. +\def\shortpartentry#1#2#3#4{% + \penalty-300 + \vskip.5\baselineskip plus.15\baselineskip minus.1\baselineskip + \shortchapentry{{\bf #1}}{\numeralbox}{}{}% +} + % Chapters, in the main contents. \def\numchapentry#1#2#3#4{\dochapentry{#2\labelspace#1}{#4}} % @@ -4805,45 +6197,12 @@ \message{environments,} % @foo ... @end foo. -% @point{}, @result{}, @expansion{}, @print{}, @equiv{}. -% -% Since these characters are used in examples, it should be an even number of -% \tt widths. Each \tt character is 1en, so two makes it 1em. -% -\def\point{$\star$} -\def\result{\leavevmode\raise.15ex\hbox to 1em{\hfil$\Rightarrow$\hfil}} -\def\expansion{\leavevmode\raise.1ex\hbox to 1em{\hfil$\mapsto$\hfil}} -\def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}} -\def\equiv{\leavevmode\lower.1ex\hbox to 1em{\hfil$\ptexequiv$\hfil}} - -% The @error{} command. -% Adapted from the TeXbook's \boxit. 
-% -\newbox\errorbox -% -{\tentt \global\dimen0 = 3em}% Width of the box. -\dimen2 = .55pt % Thickness of rules -% The text. (`r' is open on the right, `e' somewhat less so on the left.) -\setbox0 = \hbox{\kern-.75pt \tensf error\kern-1.5pt} -% -\setbox\errorbox=\hbox to \dimen0{\hfil - \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right. - \advance\hsize by -2\dimen2 % Rules. - \vbox{% - \hrule height\dimen2 - \hbox{\vrule width\dimen2 \kern3pt % Space to left of text. - \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below. - \kern3pt\vrule width\dimen2}% Space to right. - \hrule height\dimen2} - \hfil} -% -\def\error{\leavevmode\lower.7ex\copy\errorbox} - -% @tex ... @end tex escapes into raw Tex temporarily. +% @tex ... @end tex escapes into raw TeX temporarily. % One exception: @ is still an escape character, so that @end tex works. -% But \@ or @@ will get a plain tex @ character. +% But \@ or @@ will get a plain @ character. \envdef\tex{% + \setupmarkupstyle{tex}% \catcode `\\=0 \catcode `\{=1 \catcode `\}=2 \catcode `\$=3 \catcode `\&=4 \catcode `\#=6 \catcode `\^=7 \catcode `\_=8 \catcode `\~=\active \let~=\tie @@ -4853,8 +6212,14 @@ \catcode `\|=\other \catcode `\<=\other \catcode `\>=\other + \catcode`\`=\other + \catcode`\'=\other \escapechar=`\\ % + % ' is active in math mode (mathcode"8000). So reset it, and all our + % other math active characters (just in case), to plain's definitions. + \mathactive + % \let\b=\ptexb \let\bullet=\ptexbullet \let\c=\ptexc @@ -4872,6 +6237,7 @@ \let\/=\ptexslash \let\*=\ptexstar \let\t=\ptext + \expandafter \let\csname top\endcsname=\ptextop % outer \let\frenchspacing=\plainfrenchspacing % \def\endldots{\mathinner{\ldots\ldots\ldots\ldots}}% @@ -4957,6 +6323,12 @@ \normbskip=\baselineskip \normpskip=\parskip \normlskip=\lineskip % Flag to tell @lisp, etc., not to narrow margin. 
\let\nonarrowing = t% + % + % If this cartouche directly follows a sectioning command, we need the + % \parskip glue (backspaced over by default) or the cartouche can + % collide with the section heading. + \ifnum\lastpenalty>10000 \vskip\parskip \penalty\lastpenalty \fi + % \vbox\bgroup \baselineskip=0pt\parskip=0pt\lineskip=0pt \carttop @@ -4970,7 +6342,7 @@ \lineskip=\normlskip \parskip=\normpskip \vskip -\parskip - \comment % For explanation, see the end of \def\group. + \comment % For explanation, see the end of def\group. } \def\Ecartouche{% \ifhmode\par\fi @@ -4987,6 +6359,7 @@ % This macro is called at the beginning of all the @example variants, % inside a group. +\newdimen\nonfillparindent \def\nonfillstart{% \aboveenvbreak \hfuzz = 12pt % Don't be fussy @@ -4994,7 +6367,12 @@ \let\par = \lisppar % don't ignore blank lines \obeylines % each line of input is a line of output \parskip = 0pt + % Turn off paragraph indentation but redefine \indent to emulate + % the normal \indent. + \nonfillparindent=\parindent \parindent = 0pt + \let\indent\nonfillindent + % \emergencystretch = 0pt % don't try to avoid overfull boxes \ifx\nonarrowing\relax \advance \leftskip by \lispnarrowing @@ -5005,6 +6383,24 @@ \let\exdent=\nofillexdent } +\begingroup +\obeyspaces +% We want to swallow spaces (but not other tokens) after the fake +% @indent in our nonfill-environments, where spaces are normally +% active and set to @tie, resulting in them not being ignored after +% @indent. +\gdef\nonfillindent{\futurelet\temp\nonfillindentcheck}% +\gdef\nonfillindentcheck{% +\ifx\temp % +\expandafter\nonfillindentgobble% +\else% +\leavevmode\nonfillindentbox% +\fi% +}% +\endgroup +\def\nonfillindentgobble#1{\nonfillindent} +\def\nonfillindentbox{\hbox to \nonfillparindent{\hss}} + % If you want all examples etc. small: @set dispenvsize small. % If you want even small examples the full size: @set dispenvsize nosmall. 
% This affects the following displayed environments: @@ -5015,53 +6411,59 @@ \let\SETdispenvsize\relax \def\setnormaldispenv{% \ifx\SETdispenvsize\smallword + % end paragraph for sake of leading, in case document has no blank + % line. This is redundant with what happens in \aboveenvbreak, but + % we need to do it before changing the fonts, and it's inconvenient + % to change the fonts afterward. + \ifnum \lastpenalty=10000 \else \endgraf \fi \smallexamplefonts \rm \fi } \def\setsmalldispenv{% \ifx\SETdispenvsize\nosmallword \else + \ifnum \lastpenalty=10000 \else \endgraf \fi \smallexamplefonts \rm \fi } % We often define two environments, @foo and @smallfoo. -% Let's do it by one command: -\def\makedispenv #1#2{ - \expandafter\envdef\csname#1\endcsname {\setnormaldispenv #2} - \expandafter\envdef\csname small#1\endcsname {\setsmalldispenv #2} +% Let's do it in one command. #1 is the env name, #2 the definition. +\def\makedispenvdef#1#2{% + \expandafter\envdef\csname#1\endcsname {\setnormaldispenv #2}% + \expandafter\envdef\csname small#1\endcsname {\setsmalldispenv #2}% \expandafter\let\csname E#1\endcsname \afterenvbreak \expandafter\let\csname Esmall#1\endcsname \afterenvbreak } -% Define two synonyms: -\def\maketwodispenvs #1#2#3{ - \makedispenv{#1}{#3} - \makedispenv{#2}{#3} -} - -% @lisp: indented, narrowed, typewriter font; @example: same as @lisp. +% Define two environment synonyms (#1 and #2) for an environment. +\def\maketwodispenvdef#1#2#3{% + \makedispenvdef{#1}{#3}% + \makedispenvdef{#2}{#3}% +} +% +% @lisp: indented, narrowed, typewriter font; +% @example: same as @lisp. % % @smallexample and @smalllisp: use smaller fonts. % Originally contributed by Pavel at xerox. % -\maketwodispenvs {lisp}{example}{% +\maketwodispenvdef{lisp}{example}{% \nonfillstart - \tt + \tt\setupmarkupstyle{example}% \let\kbdfont = \kbdexamplefont % Allow @kbd to do something special. 
- \gobble % eat return -} - + \gobble % eat return +} % @display/@smalldisplay: same as @lisp except keep current font. % -\makedispenv {display}{% +\makedispenvdef{display}{% \nonfillstart \gobble } % @format/@smallformat: same as @display except don't narrow margins. % -\makedispenv{format}{% +\makedispenvdef{format}{% \let\nonarrowing = t% \nonfillstart \gobble @@ -5080,18 +6482,44 @@ \envdef\flushright{% \let\nonarrowing = t% \nonfillstart - \advance\leftskip by 0pt plus 1fill + \advance\leftskip by 0pt plus 1fill\relax \gobble } \let\Eflushright = \afterenvbreak +% @raggedright does more-or-less normal line breaking but no right +% justification. From plain.tex. +\envdef\raggedright{% + \rightskip0pt plus2em \spaceskip.3333em \xspaceskip.5em\relax +} +\let\Eraggedright\par + +\envdef\raggedleft{% + \parindent=0pt \leftskip0pt plus2em + \spaceskip.3333em \xspaceskip.5em \parfillskip=0pt + \hbadness=10000 % Last line will usually be underfull, so turn off + % badness reporting. +} +\let\Eraggedleft\par + +\envdef\raggedcenter{% + \parindent=0pt \rightskip0pt plus1em \leftskip0pt plus1em + \spaceskip.3333em \xspaceskip.5em \parfillskip=0pt + \hbadness=10000 % Last line will usually be underfull, so turn off + % badness reporting. +} +\let\Eraggedcenter\par + + % @quotation does normal linebreaking (hence we can't use \nonfillstart) % and narrows the margins. We keep \parskip nonzero in general, since % we're doing normal filling. So, when using \aboveenvbreak and % \afterenvbreak, temporarily make \parskip 0. % -\envdef\quotation{% +\makedispenvdef{quotation}{\quotationstart} +% +\def\quotationstart{% {\parskip=0pt \aboveenvbreak}% because \aboveenvbreak inserts \parskip \parindent=0pt % @@ -5111,12 +6539,13 @@ % \def\Equotation{% \par - \ifx\quotationauthor\undefined\else + \ifx\quotationauthor\thisisundefined\else % indent a bit. 
\leftline{\kern 2\leftskip \sl ---\quotationauthor}% \fi {\parskip=0pt \afterenvbreak}% } +\def\Esmallquotation{\Equotation} % If we're given an argument, typeset it in bold with a colon after. \def\quotationlabel#1{% @@ -5141,18 +6570,16 @@ \do\ \do\\\do\{\do\}\do\$\do\&% \do\#\do\^\do\^^K\do\_\do\^^A\do\%\do\~% \do\<\do\>\do\|\do\@\do+\do\"% + % Don't do the quotes -- if we do, @set txicodequoteundirected and + % @set txicodequotebacktick will not have effect on @verb and + % @verbatim, and ?` and !` ligatures won't get disabled. + %\do\`\do\'% } % % [Knuth] p. 380 \def\uncatcodespecials{% \def\do##1{\catcode`##1=\other}\dospecials} % -% [Knuth] pp. 380,381,391 -% Disable Spanish ligatures ?` and !` of \tt font -\begingroup - \catcode`\`=\active\gdef`{\relax\lq} -\endgroup -% % Setup for the @verb command. % % Eight spaces for a tab @@ -5164,7 +6591,7 @@ \def\setupverb{% \tt % easiest (and conventionally used) font for verbatim \def\par{\leavevmode\endgraf}% - \catcode`\`=\active + \setupmarkupstyle{verb}% \tabeightspaces % Respect line breaks, % print special symbols as themselves, and @@ -5175,35 +6602,46 @@ % Setup for the @verbatim environment % -% Real tab expansion +% Real tab expansion. \newdimen\tabw \setbox0=\hbox{\tt\space} \tabw=8\wd0 % tab amount % -\def\starttabbox{\setbox0=\hbox\bgroup} +% We typeset each line of the verbatim in an \hbox, so we can handle +% tabs. The \global is in case the verbatim line starts with an accent, +% or some other command that starts with a begin-group. Otherwise, the +% entire \verbbox would disappear at the corresponding end-group, before +% it is typeset. Meanwhile, we can't have nested verbatim commands +% (can we?), so the \global won't be overwriting itself. 
+\newbox\verbbox
+\def\starttabbox{\global\setbox\verbbox=\hbox\bgroup}
+%
 \begingroup
   \catcode`\^^I=\active
   \gdef\tabexpand{%
     \catcode`\^^I=\active
     \def^^I{\leavevmode\egroup
-      \dimen0=\wd0 % the width so far, or since the previous tab
-      \divide\dimen0 by\tabw
-      \multiply\dimen0 by\tabw % compute previous multiple of \tabw
-      \advance\dimen0 by\tabw % advance to next multiple of \tabw
-      \wd0=\dimen0 \box0 \starttabbox
+      \dimen\verbbox=\wd\verbbox % the width so far, or since the previous tab
+      \divide\dimen\verbbox by\tabw
+      \multiply\dimen\verbbox by\tabw % compute previous multiple of \tabw
+      \advance\dimen\verbbox by\tabw % advance to next multiple of \tabw
+      \wd\verbbox=\dimen\verbbox \box\verbbox \starttabbox
     }%
   }
 \endgroup
+
+% start the verbatim environment.
 \def\setupverbatim{%
   \let\nonarrowing = t%
   \nonfillstart
-  % Easiest (and conventionally used) font for verbatim
-  \tt
-  \def\par{\leavevmode\egroup\box0\endgraf}%
-  \catcode`\`=\active
+  \tt % easiest (and conventionally used) font for verbatim
+  % The \leavevmode here is for blank lines.  Otherwise, we would
+  % never \starttabbox and the \egroup would end verbatim mode.
+  \def\par{\leavevmode\egroup\box\verbbox\endgraf}%
   \tabexpand
+  \setupmarkupstyle{verbatim}%
   % Respect line breaks,
   % print special symbols as themselves, and
-  % make each space count
-  % must do in this order:
+  % make each space count.
+  % Must do in this order:
   \obeylines \uncatcodespecials \sepspaces
   \everypar{\starttabbox}%
 }
@@ -5259,6 +6697,8 @@
   {%
     \makevalueexpandable
     \setupverbatim
+    \indexnofonts % Allow `@@' and other weird things in file names.
+    \wlog{texinfo.tex: doing @verbatiminclude of #1^^J}%
    \input #1
    \afterenvbreak
  }%
@@ -5284,27 +6724,35 @@
   \endgroup
 }
 
+
 \message{defuns,}
 % @defun etc.
\newskip\defbodyindent \defbodyindent=.4in \newskip\defargsindent \defargsindent=50pt \newskip\deflastargmargin \deflastargmargin=18pt +\newcount\defunpenalty % Start the processing of @deffn: \def\startdefun{% \ifnum\lastpenalty<10000 \medbreak + \defunpenalty=10003 % Will keep this @deffn together with the + % following @def command, see below. \else % If there are two @def commands in a row, we'll have a \nobreak, % which is there to keep the function description together with its % header. But if there's nothing but headers, we need to allow a % break somewhere. Check specifically for penalty 10002, inserted - % by \defargscommonending, instead of 10000, since the sectioning + % by \printdefunline, instead of 10000, since the sectioning % commands also insert a nobreak penalty, and we don't want to allow % a break between a section heading and a defun. - % - \ifnum\lastpenalty=10002 \penalty2000 \fi + % + % As a further refinement, we avoid "club" headers by signalling + % with penalty of 10003 after the very first @deffn in the + % sequence (see above), and penalty of 10002 after any following + % @def command. + \ifnum\lastpenalty=10002 \penalty2000 \else \defunpenalty=10002 \fi % % Similarly, after a section heading, do not allow a break. % But do insert the glue. @@ -5322,7 +6770,7 @@ % % As above, allow line break if we have multiple x headers in a row. % It's not a great place, though. 
- \ifnum\lastpenalty=10002 \penalty3000 \fi + \ifnum\lastpenalty=10002 \penalty3000 \else \defunpenalty=10002 \fi % % And now, it's time to reuse the body of the original defun: \expandafter\gobbledefun#1% @@ -5337,10 +6785,10 @@ #1#2 \endheader % common ending: \interlinepenalty = 10000 - \advance\rightskip by 0pt plus 1fil + \advance\rightskip by 0pt plus 1fil\relax \endgraf \nobreak\vskip -\parskip - \penalty 10002 % signal to \startdefun and \dodefunx + \penalty\defunpenalty % signal to \startdefun and \dodefunx % Some of the @defun-type tags do not enable magic parentheses, % rendering the following check redundant. But we don't optimize. \checkparencounts @@ -5350,7 +6798,7 @@ \def\Edefun{\endgraf\medbreak} % \makedefun{deffn} creates \deffn, \deffnx and \Edeffn; -% the only thing remainnig is to define \deffnheader. +% the only thing remaining is to define \deffnheader. % \def\makedefun#1{% \expandafter\let\csname E#1\endcsname = \Edefun @@ -5367,13 +6815,36 @@ \def\domakedefun#1#2#3{% \envdef#1{% \startdefun + \doingtypefnfalse % distinguish typed functions from all else \parseargusing\activeparens{\printdefunline#3}% }% \def#2{\dodefunx#1}% \def#3% } -%%% Untyped functions: +\newif\ifdoingtypefn % doing typed function? +\newif\ifrettypeownline % typeset return type on its own line? + +% @deftypefnnewline on|off says whether the return type of typed functions +% are printed on their own line. This affects @deftypefn, @deftypefun, +% @deftypeop, and @deftypemethod. 
+% +\parseargdef\deftypefnnewline{% + \def\temp{#1}% + \ifx\temp\onword + \expandafter\let\csname SETtxideftypefnnl\endcsname + = \empty + \else\ifx\temp\offword + \expandafter\let\csname SETtxideftypefnnl\endcsname + = \relax + \else + \errhelp = \EMsimple + \errmessage{Unknown @txideftypefnnl value `\temp', + must be on|off}% + \fi\fi +} + +% Untyped functions: % @deffn category name args \makedefun{deffn}{\deffngeneral{}} @@ -5392,7 +6863,7 @@ \defname{#2}{}{#3}\magicamp\defunargs{#4\unskip}% } -%%% Typed functions: +% Typed functions: % @deftypefn category type name args \makedefun{deftypefn}{\deftypefngeneral{}} @@ -5407,10 +6878,11 @@ % \def\deftypefngeneral#1#2 #3 #4 #5\endheader{% \dosubind{fn}{\code{#4}}{#1}% + \doingtypefntrue \defname{#2}{#3}{#4}\defunargs{#5\unskip}% } -%%% Typed variables: +% Typed variables: % @deftypevr category type var args \makedefun{deftypevr}{\deftypecvgeneral{}} @@ -5428,7 +6900,7 @@ \defname{#2}{#3}{#4}\defunargs{#5\unskip}% } -%%% Untyped variables: +% Untyped variables: % @defvr category var args \makedefun{defvr}#1 {\deftypevrheader{#1} {} } @@ -5439,7 +6911,8 @@ % \defcvof {category of}class var args \def\defcvof#1#2 {\deftypecvof{#1}#2 {} } -%%% Type: +% Types: + % @deftp category name args \makedefun{deftp}#1 #2 #3\endheader{% \doind{tp}{\code{#2}}% @@ -5467,25 +6940,49 @@ % We are followed by (but not passed) the arguments, if any. % \def\defname#1#2#3{% + \par % Get the values of \leftskip and \rightskip as they were outside the @def... \advance\leftskip by -\defbodyindent % - % How we'll format the type name. Putting it in brackets helps + % Determine if we are typesetting the return type of a typed function + % on a line by itself. + \rettypeownlinefalse + \ifdoingtypefn % doing a typed function specifically? + % then check user option for putting return type on its own line: + \expandafter\ifx\csname SETtxideftypefnnl\endcsname\relax \else + \rettypeownlinetrue + \fi + \fi + % + % How we'll format the category name. 
Putting it in brackets helps % distinguish it from the body text that may end up on the next line % just below it. \def\temp{#1}% \setbox0=\hbox{\kern\deflastargmargin \ifx\temp\empty\else [\rm\temp]\fi} % - % Figure out line sizes for the paragraph shape. + % Figure out line sizes for the paragraph shape. We'll always have at + % least two. + \tempnum = 2 + % % The first line needs space for \box0; but if \rightskip is nonzero, % we need only space for the part of \box0 which exceeds it: \dimen0=\hsize \advance\dimen0 by -\wd0 \advance\dimen0 by \rightskip + % + % If doing a return type on its own line, we'll have another line. + \ifrettypeownline + \advance\tempnum by 1 + \def\maybeshapeline{0in \hsize}% + \else + \def\maybeshapeline{}% + \fi + % % The continuations: \dimen2=\hsize \advance\dimen2 by -\defargsindent - % (plain.tex says that \dimen1 should be used only as global.) - \parshape 2 0in \dimen0 \defargsindent \dimen2 - % - % Put the type name to the right margin. + % + % The final paragraph shape: + \parshape \tempnum 0in \dimen0 \maybeshapeline \defargsindent \dimen2 + % + % Put the category name at the right margin. \noindent \hbox to 0pt{% \hfil\box0 \kern-\hsize @@ -5507,8 +7004,16 @@ % . this still does not fix the ?` and !` ligatures, but so far no % one has made identifiers using them :). \df \tt - \def\temp{#2}% return value type - \ifx\temp\empty\else \tclose{\temp} \fi + \def\temp{#2}% text of the return type + \ifx\temp\empty\else + \tclose{\temp}% typeset the return type + \ifrettypeownline + % put return type on its own line; prohibit line break following: + \hfil\vadjust{\nobreak}\break + \else + \space % type on same line, so just followed by a space + \fi + \fi % no return type #3% output function name }% {\rm\enskip}% hskip 0.5 em of \tenrm @@ -5529,7 +7034,7 @@ % % On the other hand, if an argument has two dashes (for instance), we % want a way to get ttsl. Let's try @var for that. 
- \let\var=\ttslanted + \def\var##1{{\setupmarkupstyle{var}\ttslanted{##1}}}% #1% \sl\hyphenchar\font=45 } @@ -5609,12 +7114,14 @@ \ifnum\parencount=0 \else \badparencount \fi \ifnum\brackcount=0 \else \badbrackcount \fi } +% these should not use \errmessage; the glibc manual, at least, actually +% has such constructs (when documenting function pointers). \def\badparencount{% - \errmessage{Unbalanced parentheses in @def}% + \message{Warning: unbalanced parentheses in @def...}% \global\parencount=0 } \def\badbrackcount{% - \errmessage{Unbalanced square braces in @def}% + \message{Warning: unbalanced square brackets in @def...}% \global\brackcount=0 } @@ -5624,7 +7131,7 @@ % To do this right we need a feature of e-TeX, \scantokens, % which we arrange to emulate with a temporary file in ordinary TeX. -\ifx\eTeXversion\undefined +\ifx\eTeXversion\thisisundefined \newwrite\macscribble \def\scantokens#1{% \toks0={#1}% @@ -5635,26 +7142,30 @@ } \fi -\def\scanmacro#1{% - \begingroup - \newlinechar`\^^M - \let\xeatspaces\eatspaces - % Undo catcode changes of \startcontents and \doprintindex - % When called from @insertcopying or (short)caption, we need active - % backslash to get it printed correctly. Previously, we had - % \catcode`\\=\other instead. We'll see whether a problem appears - % with macro expansion. --kasal, 19aug04 - \catcode`\@=0 \catcode`\\=\active \escapechar=`\@ - % ... and \example - \spaceisspace - % - % Append \endinput to make sure that TeX does not see the ending newline. - % - % I've verified that it is necessary both for e-TeX and for ordinary TeX - % --kasal, 29nov03 - \scantokens{#1\endinput}% - \endgroup -} +\def\scanmacro#1{\begingroup + \newlinechar`\^^M + \let\xeatspaces\eatspaces + % + % Undo catcode changes of \startcontents and \doprintindex + % When called from @insertcopying or (short)caption, we need active + % backslash to get it printed correctly. Previously, we had + % \catcode`\\=\other instead. 
We'll see whether a problem appears + % with macro expansion. --kasal, 19aug04 + \catcode`\@=0 \catcode`\\=\active \escapechar=`\@ + % + % ... and for \example: + \spaceisspace + % + % The \empty here causes a following catcode 5 newline to be eaten as + % part of reading whitespace after a control sequence. It does not + % eat a catcode 13 newline. There's no good way to handle the two + % cases (untried: maybe e-TeX's \everyeof could help, though plain TeX + % would then have different behavior). See the Macro Details node in + % the manual for the workaround we recommend for macros and + % line-oriented commands. + % + \scantokens{#1\empty}% +\endgroup} \def\scanexp#1{% \edef\temp{\noexpand\scanmacro{#1}}% @@ -5682,7 +7193,7 @@ % This does \let #1 = #2, with \csnames; that is, % \let \csname#1\endcsname = \csname#2\endcsname % (except of course we have to play expansion games). -% +% \def\cslet#1#2{% \expandafter\let \csname#1\expandafter\endcsname @@ -5708,13 +7219,18 @@ % Macro bodies are absorbed as an argument in a context where % all characters are catcode 10, 11 or 12, except \ which is active -% (as in normal texinfo). It is necessary to change the definition of \. - +% (as in normal texinfo). It is necessary to change the definition of \ +% to recognize macro arguments; this is the job of \mbodybackslash. +% +% Non-ASCII encodings make 8-bit characters active, so un-activate +% them to avoid their expansion. Must do this non-globally, to +% confine the change to the current group. +% % It's necessary to have hard CRs when the macro is executed. This is -% done by making ^^M (\endlinechar) catcode 12 when reading the macro +% done by making ^^M (\endlinechar) catcode 12 when reading the macro % body, and then making it the \newlinechar in \scanmacro. 
- -\def\scanctxt{% +% +\def\scanctxt{% used as subroutine \catcode`\"=\other \catcode`\+=\other \catcode`\<=\other @@ -5724,15 +7240,16 @@ \catcode`\_=\other \catcode`\|=\other \catcode`\~=\other -} - -\def\scanargctxt{% + \ifx\declaredencoding\ascii \else \setnonasciicharscatcodenonglobal\other \fi +} + +\def\scanargctxt{% used for copying and captions, not macros. \scanctxt \catcode`\\=\other \catcode`\^^M=\other } -\def\macrobodyctxt{% +\def\macrobodyctxt{% used for @macro definitions \scanctxt \catcode`\{=\other \catcode`\}=\other @@ -5740,32 +7257,56 @@ \usembodybackslash } -\def\macroargctxt{% +\def\macroargctxt{% used when scanning invocations \scanctxt - \catcode`\\=\other -} + \catcode`\\=0 +} +% why catcode 0 for \ in the above? To recognize \\ \{ \} as "escapes" +% for the single characters \ { }. Thus, we end up with the "commands" +% that would be written @\ @{ @} in a Texinfo document. +% +% We already have @{ and @}. For @\, we define it here, and only for +% this purpose, to produce a typewriter backslash (so, the @\ that we +% define for @math can't be used with @macro calls): +% +\def\\{\normalbackslash}% +% +% We would like to do this for \, too, since that is what makeinfo does. +% But it is not possible, because Texinfo already has a command @, for a +% cedilla accent. Documents must use @comma{} instead. +% +% \anythingelse will almost certainly be an error of some kind. + % \mbodybackslash is the definition of \ in @macro bodies. % It maps \foo\ => \csname macarg.foo\endcsname => #N % where N is the macro parameter number. % We define \csname macarg.\endcsname to be \realbackslash, so % \\ in macro replacement text gets you a backslash. 
-
+%
 {\catcode`@=0 @catcode`@\=@active
  @gdef@usembodybackslash{@let\=@mbodybackslash}
  @gdef@mbodybackslash#1\{@csname macarg.#1@endcsname}
 }
 \expandafter\def\csname macarg.\endcsname{\realbackslash}
+\def\margbackslash#1{\char`\#1 }
+
 \def\macro{\recursivefalse\parsearg\macroxxx}
 \def\rmacro{\recursivetrue\parsearg\macroxxx}
 
 \def\macroxxx#1{%
-  \getargs{#1}% now \macname is the macname and \argl the arglist
+  \getargs{#1}% now \macname is the macname and \argl the arglist
   \ifx\argl\empty % no arguments
-    \paramno=0%
+    \paramno=0\relax
   \else
     \expandafter\parsemargdef \argl;%
+    \ifnum\paramno>256\relax
+      \ifx\eTeXversion\thisisundefined
+        \errhelp = \EMsimple
+        \errmessage{You need eTeX to compile a file with macros with more than 256 arguments}
+      \fi
+    \fi
   \fi
   \if1\csname ismacro.\the\macname\endcsname
     \message{Warning: redefining \the\macname}%
@@ -5812,46 +7353,269 @@
 % an opening brace, and that opening brace is not consumed.
 \def\getargs#1{\getargsxxx#1{}}
 \def\getargsxxx#1#{\getmacname #1 \relax\getmacargs}
-\def\getmacname #1 #2\relax{\macname={#1}}
+\def\getmacname#1 #2\relax{\macname={#1}}
 \def\getmacargs#1{\def\argl{#1}}
 
+% For macro processing make @ a letter so that we can make Texinfo private macro names.
+\edef\texiatcatcode{\the\catcode`\@}
+\catcode `@=11\relax
+
 % Parse the optional {params} list.  Set up \paramno and \paramlist
-% so \defmacro knows what to do.  Define \macarg.blah for each blah
-% in the params list, to be ##N where N is the position in that list.
+% so \defmacro knows what to do.  Define \macarg.BLAH for each BLAH
+% in the params list to some hook where the argument is to be expanded.  If
+% there are fewer than 10 arguments that hook is to be replaced by ##N where N
+% is the position in that list, that is to say the macro arguments are to be
+% defined `a la TeX in the macro body.
+%
 % That gets used by \mbodybackslash (above).
-
+%
 % We need to get `macro parameter char #' into several definitions.
-% The technique used is stolen from LaTeX: let \hash be something
+% The technique used is stolen from LaTeX: let \hash be something
 % unexpandable, insert that wherever you need a #, and then redefine
 % it to # just before using the token list produced.
 %
 % The same technique is used to protect \eatspaces till just before
 % the macro is used.
-
-\def\parsemargdef#1;{\paramno=0\def\paramlist{}%
-  \let\hash\relax\let\xeatspaces\relax\parsemargdefxxx#1,;,}
+%
+% If there are 10 or more arguments, a different technique is used: the
+% hook remains in the body, and when the macro is to be expanded the body
+% is processed again to replace the arguments.
+%
+% In that case, the hook is \the\toks N-1, and we simply set \toks N-1 to the
+% argument N value and then \edef the body (nothing else will expand because of
+% the catcode regime under which the body was input).
+%
+% If you compile with TeX (not eTeX) and have macros with 10 or more
+% arguments, then no macro may have more than 256 arguments; otherwise an
+% error is produced.
+\def\parsemargdef#1;{%
+  \paramno=0\def\paramlist{}%
+  \let\hash\relax
+  \let\xeatspaces\relax
+  \parsemargdefxxx#1,;,%
+  % If there are 10 or more arguments, we parse the argument list again to
+  % set new definitions for the \macarg.BLAH macros corresponding to each
+  % BLAH argument.  The list had to be parsed once anyway in order to count
+  % the arguments, and since macros with at most 9 arguments are by far more
+  % frequent than macros with 10 or more, defining the \macarg.BLAH macros
+  % twice does not cost too much processing power.
+  \ifnum\paramno<10\relax\else
+    \paramno0\relax
+    \parsemmanyargdef@@#1,;,% 10 or more arguments
+  \fi
+}
 \def\parsemargdefxxx#1,{%
   \if#1;\let\next=\relax
   \else \let\next=\parsemargdefxxx
-    \advance\paramno by 1%
+    \advance\paramno by 1
     \expandafter\edef\csname macarg.\eatspaces{#1}\endcsname
         {\xeatspaces{\hash\the\paramno}}%
     \edef\paramlist{\paramlist\hash\the\paramno,}%
   \fi\next}
 
+\def\parsemmanyargdef@@#1,{%
+  \if#1;\let\next=\relax
+  \else
+    \let\next=\parsemmanyargdef@@
+    \edef\tempb{\eatspaces{#1}}%
+    \expandafter\def\expandafter\tempa
+      \expandafter{\csname macarg.\tempb\endcsname}%
+    % Note that we need some extra \noexpand\noexpand, this is because we
+    % don't want \the to be expanded in the \parsermacbody as it uses an
+    % \xdef .
+    \expandafter\edef\tempa
+      {\noexpand\noexpand\noexpand\the\toks\the\paramno}%
+    \advance\paramno by 1\relax
+  \fi\next}
+
 % These two commands read recursive and nonrecursive macro bodies.
 % (They're different since rec and nonrec macros end differently.)
-
+%
+
+\catcode `\@\texiatcatcode
 \long\def\parsemacbody#1@end macro%
 {\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
 \long\def\parsermacbody#1@end rmacro%
 {\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
-
+\catcode `\@=11\relax
+
+\let\endargs@\relax
+\let\nil@\relax
+\def\nilm@{\nil@}%
+\long\def\nillm@{\nil@}%
+
+% This macro is expanded during the Texinfo macro expansion, not during its
+% definition.  It gets all the argument values and assigns them to macros
+% macarg.ARGNAME
+%
+% #1 is the macro name
+% #2 is the list of argument names
+% #3 is the list of argument values
+\def\getargvals@#1#2#3{%
+  \def\macargdeflist@{}%
+  \def\saveparamlist@{#2}% Need to keep a copy for parameter expansion.
+ \def\paramlist{#2,\nil@}% + \def\macroname{#1}% + \begingroup + \macroargctxt + \def\argvaluelist{#3,\nil@}% + \def\@tempa{#3}% + \ifx\@tempa\empty + \setemptyargvalues@ + \else + \getargvals@@ + \fi +} + +% +\def\getargvals@@{% + \ifx\paramlist\nilm@ + % Some sanity check needed here that \argvaluelist is also empty. + \ifx\argvaluelist\nillm@ + \else + \errhelp = \EMsimple + \errmessage{Too many arguments in macro `\macroname'!}% + \fi + \let\next\macargexpandinbody@ + \else + \ifx\argvaluelist\nillm@ + % No more arguments values passed to macro. Set remaining named-arg + % macros to empty. + \let\next\setemptyargvalues@ + \else + % pop current arg name into \@tempb + \def\@tempa##1{\pop@{\@tempb}{\paramlist}##1\endargs@}% + \expandafter\@tempa\expandafter{\paramlist}% + % pop current argument value into \@tempc + \def\@tempa##1{\longpop@{\@tempc}{\argvaluelist}##1\endargs@}% + \expandafter\@tempa\expandafter{\argvaluelist}% + % Here \@tempb is the current arg name and \@tempc is the current arg value. + % First place the new argument macro definition into \@tempd + \expandafter\macname\expandafter{\@tempc}% + \expandafter\let\csname macarg.\@tempb\endcsname\relax + \expandafter\def\expandafter\@tempe\expandafter{% + \csname macarg.\@tempb\endcsname}% + \edef\@tempd{\long\def\@tempe{\the\macname}}% + \push@\@tempd\macargdeflist@ + \let\next\getargvals@@ + \fi + \fi + \next +} + +\def\push@#1#2{% + \expandafter\expandafter\expandafter\def + \expandafter\expandafter\expandafter#2% + \expandafter\expandafter\expandafter{% + \expandafter#1#2}% +} + +% Replace arguments by their values in the macro body, and place the result +% in macro \@tempa +\def\macvalstoargs@{% + % To do this we use the property that token registers that are \the'ed + % within an \edef expand only once. So we are going to place all argument + % values into respective token registers. + % + % First we save the token context, and initialize argument numbering. 
+  \begingroup
+  \paramno0\relax
+  % Then, for each argument number #N, we place the corresponding argument
+  % value into a new token list register \toks#N
+  \expandafter\putargsintokens@\saveparamlist@,;,%
+  % Then, we expand the body so that arguments are replaced by their
+  % values.  The trick for values not to be expanded themselves is that they
+  % are within tokens and that tokens expand only once in an \edef .
+  \edef\@tempc{\csname mac.\macroname .body\endcsname}%
+  % Now we restore the token stack pointer to free the token list registers
+  % which we have used, but we make sure that expanded body is saved after
+  % group.
+  \expandafter
+  \endgroup
+  \expandafter\def\expandafter\@tempa\expandafter{\@tempc}%
+  }
+
+\def\macargexpandinbody@{%
+  %% Define the named-macro outside of this group and then close this group.
+  \expandafter
+  \endgroup
+  \macargdeflist@
+  % First we replace in the body the macro arguments by their values; the
+  % result is in \@tempa .
+  \macvalstoargs@
+  % Then we point at the \norecurse or \gobble (for recursive) macro value
+  % with \@tempb .
+  \expandafter\let\expandafter\@tempb\csname mac.\macroname .recurse\endcsname
+  % Depending on whether it is recursive or not, we need some trailing
+  % \egroup .
+  \ifx\@tempb\gobble
+    \let\@tempc\relax
+  \else
+    \let\@tempc\egroup
+  \fi
+  % And now we do the real job:
+  \edef\@tempd{\noexpand\@tempb{\macroname}\noexpand\scanmacro{\@tempa}\@tempc}%
+  \@tempd
+}
+
+\def\putargsintokens@#1,{%
+  \if#1;\let\next\relax
+  \else
+    \let\next\putargsintokens@
+    % First we allocate the new token list register, and give it a temporary
+    % alias \@tempb .
+    \toksdef\@tempb\the\paramno
+    % Then we place the argument value into that token list register.
+    \expandafter\let\expandafter\@tempa\csname macarg.#1\endcsname
+    \expandafter\@tempb\expandafter{\@tempa}%
+    \advance\paramno by 1\relax
+  \fi
+  \next
+}
+
+% Save the token stack pointer into macro #1
+\def\texisavetoksstackpoint#1{\edef#1{\the\@cclvi}}
+% Restore the token stack pointer from number in macro #1
+\def\texirestoretoksstackpoint#1{\expandafter\mathchardef\expandafter\@cclvi#1\relax}
+% newtoks that can be used non \outer .
+\def\texinonouternewtoks{\alloc@ 5\toks \toksdef \@cclvi}
+
+% Trailing missing arguments are set to empty
+\def\setemptyargvalues@{%
+  \ifx\paramlist\nilm@
+    \let\next\macargexpandinbody@
+  \else
+    \expandafter\setemptyargvaluesparser@\paramlist\endargs@
+    \let\next\setemptyargvalues@
+  \fi
+  \next
+}
+
+\def\setemptyargvaluesparser@#1,#2\endargs@{%
+  \expandafter\def\expandafter\@tempa\expandafter{%
+    \expandafter\def\csname macarg.#1\endcsname{}}%
+  \push@\@tempa\macargdeflist@
+  \def\paramlist{#2}%
+}
+
+% #1 is the element target macro
+% #2 is the list macro
+% #3,#4\endargs@ is the list value
+\def\pop@#1#2#3,#4\endargs@{%
+  \def#1{#3}%
+  \def#2{#4}%
+}
+\long\def\longpop@#1#2#3,#4\endargs@{%
+  \long\def#1{#3}%
+  \long\def#2{#4}%
+}
+
+% This defines a Texinfo @macro.  There are eight cases: recursive and
+% nonrecursive macros of zero, one, up to nine, and many arguments.
 % Much magic with \expandafter here.
 % \xdef is used so that macro definitions will survive the file
 % they're defined in; @include reads the file inside a group.
+% \def\defmacro{% \let\hash=##% convert placeholders to macro parameter chars \ifrecursive @@ -5866,17 +7630,25 @@ \expandafter\noexpand\csname\the\macname xxx\endcsname}% \expandafter\xdef\csname\the\macname xxx\endcsname##1{% \egroup\noexpand\scanmacro{\temp}}% - \else % many - \expandafter\xdef\csname\the\macname\endcsname{% - \bgroup\noexpand\macroargctxt - \noexpand\csname\the\macname xx\endcsname}% - \expandafter\xdef\csname\the\macname xx\endcsname##1{% - \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% - \expandafter\expandafter - \expandafter\xdef - \expandafter\expandafter - \csname\the\macname xxx\endcsname - \paramlist{\egroup\noexpand\scanmacro{\temp}}% + \else + \ifnum\paramno<10\relax % at most 9 + \expandafter\xdef\csname\the\macname\endcsname{% + \bgroup\noexpand\macroargctxt + \noexpand\csname\the\macname xx\endcsname}% + \expandafter\xdef\csname\the\macname xx\endcsname##1{% + \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% + \expandafter\expandafter + \expandafter\xdef + \expandafter\expandafter + \csname\the\macname xxx\endcsname + \paramlist{\egroup\noexpand\scanmacro{\temp}}% + \else % 10 or more + \expandafter\xdef\csname\the\macname\endcsname{% + \noexpand\getargvals@{\the\macname}{\argl}% + }% + \global\expandafter\let\csname mac.\the\macname .body\endcsname\temp + \global\expandafter\let\csname mac.\the\macname .recurse\endcsname\gobble + \fi \fi \else \ifcase\paramno @@ -5893,39 +7665,51 @@ \egroup \noexpand\norecurse{\the\macname}% \noexpand\scanmacro{\temp}\egroup}% - \else % many - \expandafter\xdef\csname\the\macname\endcsname{% - \bgroup\noexpand\macroargctxt - \expandafter\noexpand\csname\the\macname xx\endcsname}% - \expandafter\xdef\csname\the\macname xx\endcsname##1{% - \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% - \expandafter\expandafter - \expandafter\xdef - \expandafter\expandafter - \csname\the\macname xxx\endcsname - \paramlist{% - \egroup - \noexpand\norecurse{\the\macname}% - 
\noexpand\scanmacro{\temp}\egroup}% + \else % at most 9 + \ifnum\paramno<10\relax + \expandafter\xdef\csname\the\macname\endcsname{% + \bgroup\noexpand\macroargctxt + \expandafter\noexpand\csname\the\macname xx\endcsname}% + \expandafter\xdef\csname\the\macname xx\endcsname##1{% + \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% + \expandafter\expandafter + \expandafter\xdef + \expandafter\expandafter + \csname\the\macname xxx\endcsname + \paramlist{% + \egroup + \noexpand\norecurse{\the\macname}% + \noexpand\scanmacro{\temp}\egroup}% + \else % 10 or more: + \expandafter\xdef\csname\the\macname\endcsname{% + \noexpand\getargvals@{\the\macname}{\argl}% + }% + \global\expandafter\let\csname mac.\the\macname .body\endcsname\temp + \global\expandafter\let\csname mac.\the\macname .recurse\endcsname\norecurse + \fi \fi \fi} +\catcode `\@\texiatcatcode\relax + \def\norecurse#1{\bgroup\cslet{#1}{macsave.#1}} % \braceorline decides whether the next nonwhitespace character is a % {. If so it reads up to the closing }, if not, it reads the whole % line. Whatever was read is then fed to the next control sequence -% as an argument (by \parsebrace or \parsearg) -\def\braceorline#1{\let\next=#1\futurelet\nchar\braceorlinexxx} +% as an argument (by \parsebrace or \parsearg). +% +\def\braceorline#1{\let\macnamexxx=#1\futurelet\nchar\braceorlinexxx} \def\braceorlinexxx{% \ifx\nchar\bgroup\else \expandafter\parsearg - \fi \next} + \fi \macnamexxx} % @alias. % We need some trickery to remove the optional spaces around the equal -% sign. Just make them active and then expand them all to nothing. +% sign. Make them active and then expand them all to nothing. +% \def\alias{\parseargusing\obeyspaces\aliasxxx} \def\aliasxxx #1{\aliasyyy#1\relax} \def\aliasyyy #1=#2\relax{% @@ -5941,13 +7725,13 @@ \message{cross references,} \newwrite\auxfile - \newif\ifhavexrefs % True if xref values are known. \newif\ifwarnedxrefs % True if we warned once that they aren't known. 
% @inforef is relatively simple. \def\inforef #1{\inforefzzz #1,,,,**} -\def\inforefzzz #1,#2,#3,#4**{\putwordSee{} \putwordInfo{} \putwordfile{} \file{\ignorespaces #3{}}, +\def\inforefzzz #1,#2,#3,#4**{% + \putwordSee{} \putwordInfo{} \putwordfile{} \file{\ignorespaces #3{}}, node \samp{\ignorespaces#1{}}} % @node's only job in TeX is to define \lastnode, which is used in @@ -5986,7 +7770,7 @@ % \setref{NAME}{SNT} defines a cross-reference point NAME (a node or an % anchor), which consists of three parts: -% 1) NAME-title - the current sectioning name taken from \thissection, +% 1) NAME-title - the current sectioning name taken from \lastsection, % or the anchor name. % 2) NAME-snt - section number and type, passed as the SNT arg, or % empty for anchors. @@ -6005,14 +7789,35 @@ \write\auxfile{@xrdef{#1-% #1 of \setref, expanded by the \edef ##1}{##2}}% these are parameters of \writexrdef }% - \toks0 = \expandafter{\thissection}% + \toks0 = \expandafter{\lastsection}% \immediate \writexrdef{title}{\the\toks0 }% \immediate \writexrdef{snt}{\csname #2\endcsname}% \Ynumbered etc. - \writexrdef{pg}{\folio}% will be written later, during \shipout + \safewhatsit{\writexrdef{pg}{\folio}}% will be written later, at \shipout }% \fi } +% @xrefautosectiontitle on|off says whether @section(ing) names are used +% automatically in xrefs, if the third arg is not explicitly specified. +% This was provided as a "secret" @set xref-automatic-section-title +% variable, now it's official. +% +\parseargdef\xrefautomaticsectiontitle{% + \def\temp{#1}% + \ifx\temp\onword + \expandafter\let\csname SETxref-automatic-section-title\endcsname + = \empty + \else\ifx\temp\offword + \expandafter\let\csname SETxref-automatic-section-title\endcsname + = \relax + \else + \errhelp = \EMsimple + \errmessage{Unknown @xrefautomaticsectiontitle value `\temp', + must be on|off}% + \fi\fi +} + +% % @xref, @pxref, and @ref generate cross-references. 
For \xrefX, #1 is % the node name, #2 the name of the Info cross-reference, #3 the printed % node name, #4 the name of the Info file, #5 the name of the printed @@ -6021,26 +7826,41 @@ \def\pxref#1{\putwordsee{} \xrefX[#1,,,,,,,]} \def\xref#1{\putwordSee{} \xrefX[#1,,,,,,,]} \def\ref#1{\xrefX[#1,,,,,,,]} +% +\newbox\toprefbox +\newbox\printedrefnamebox +\newbox\infofilenamebox +\newbox\printedmanualbox +% \def\xrefX[#1,#2,#3,#4,#5,#6]{\begingroup \unsepspaces + % + % Get args without leading/trailing spaces. + \def\printedrefname{\ignorespaces #3}% + \setbox\printedrefnamebox = \hbox{\printedrefname\unskip}% + % + \def\infofilename{\ignorespaces #4}% + \setbox\infofilenamebox = \hbox{\infofilename\unskip}% + % \def\printedmanual{\ignorespaces #5}% - \def\printedrefname{\ignorespaces #3}% - \setbox1=\hbox{\printedmanual\unskip}% - \setbox0=\hbox{\printedrefname\unskip}% - \ifdim \wd0 = 0pt + \setbox\printedmanualbox = \hbox{\printedmanual\unskip}% + % + % If the printed reference name (arg #3) was not explicitly given in + % the @xref, figure out what we want to use. + \ifdim \wd\printedrefnamebox = 0pt % No printed node name was explicitly given. - \expandafter\ifx\csname SETxref-automatic-section-title\endcsname\relax - % Use the node name inside the square brackets. + \expandafter\ifx\csname SETxref-automatic-section-title\endcsname \relax + % Not auto section-title: use node name inside the square brackets. \def\printedrefname{\ignorespaces #1}% \else - % Use the actual chapter/section title appear inside - % the square brackets. Use the real section title if we have it. - \ifdim \wd1 > 0pt - % It is in another manual, so we don't have it. + % Auto section-title: use chapter/section title inside + % the square brackets if we have it. + \ifdim \wd\printedmanualbox > 0pt + % It is in another manual, so we don't have it; use node name. \def\printedrefname{\ignorespaces #1}% \else \ifhavexrefs - % We know the real title if we have the xref values. 
+ % We (should) know the real title if we have the xref values. \def\printedrefname{\refx{#1-title}{}}% \else % Otherwise just copy the Info node name. @@ -6052,22 +7872,32 @@ % % Make link in pdf output. \ifpdf - \leavevmode - \getfilename{#4}% - {\turnoffactive - % See comments at \activebackslashdouble. - {\activebackslashdouble \xdef\pdfxrefdest{#1}% - \backslashparens\pdfxrefdest}% + {\indexnofonts + \turnoffactive + \makevalueexpandable + % This expands tokens, so do it after making catcode changes, so _ + % etc. don't get their TeX definitions. This ignores all spaces in + % #4, including (wrongly) those in the middle of the filename. + \getfilename{#4}% % + % This (wrongly) does not take account of leading or trailing + % spaces in #1, which should be ignored. + \edef\pdfxrefdest{#1}% + \ifx\pdfxrefdest\empty + \def\pdfxrefdest{Top}% no empty targets + \else + \txiescapepdf\pdfxrefdest % escape PDF special chars + \fi + % + \leavevmode + \startlink attr{/Border [0 0 0]}% \ifnum\filenamelength>0 - \startlink attr{/Border [0 0 0]}% - goto file{\the\filename.pdf} name{\pdfxrefdest}% + goto file{\the\filename.pdf} name{\pdfxrefdest}% \else - \startlink attr{/Border [0 0 0]}% - goto name{\pdfmkpgn{\pdfxrefdest}}% + goto name{\pdfmkpgn{\pdfxrefdest}}% \fi }% - \linkcolor + \setcolor{\linkcolor}% \fi % % Float references are printed completely differently: "Figure 1.2" @@ -6084,29 +7914,42 @@ \iffloat\Xthisreftitle % If the user specified the print name (third arg) to the ref, % print it instead of our usual "Figure 1.2". - \ifdim\wd0 = 0pt - \refx{#1-snt}% + \ifdim\wd\printedrefnamebox = 0pt + \refx{#1-snt}{}% \else \printedrefname \fi % - % if the user also gave the printed manual name (fifth arg), append + % If the user also gave the printed manual name (fifth arg), append % "in MANUALNAME". - \ifdim \wd1 > 0pt + \ifdim \wd\printedmanualbox > 0pt \space \putwordin{} \cite{\printedmanual}% \fi \else % node/anchor (non-float) references. 
+      %
+      % If we use \unhbox to print the node names, TeX does not insert
+      % empty discretionaries after hyphens, which means that it will not
+      % find a line break at a hyphen in a node name.  Since some manuals
+      % are best written with fairly long node names, containing hyphens,
+      % this is a loss.  Therefore, we give the text of the node name
+      % again, so it is as if TeX is seeing it for the first time.
+      %
+      \ifdim \wd\printedmanualbox > 0pt
+        % Cross-manual reference with a printed manual name.
+        %
+        \crossmanualxref{\cite{\printedmanual\unskip}}%
       %
-      % If we use \unhbox0 and \unhbox1 to print the node names, TeX does not
-      % insert empty discretionaries after hyphens, which means that it will
-      % not find a line break at a hyphen in a node names.  Since some manuals
-      % are best written with fairly long node names, containing hyphens, this
-      % is a loss.  Therefore, we give the text of the node name again, so it
-      % is as if TeX is seeing it for the first time.
-      \ifdim \wd1 > 0pt
-        \putwordsection{} ``\printedrefname'' \putwordin{} \cite{\printedmanual}%
+      \else\ifdim \wd\infofilenamebox > 0pt
+        % Cross-manual reference with only an info filename (arg 4), no
+        % printed manual name (arg 5).  This is essentially the same as
+        % the case above; we output the filename, since we have nothing else.
+        %
+        \crossmanualxref{\code{\infofilename\unskip}}%
+      %
       \else
+        % Reference within this manual.
+        %
         % _ (for example) has to be the character _ for the purposes of the
         % control sequence corresponding to the node, but it has to expand
         % into the usual \leavevmode...\vrule stuff for purposes of
@@ -6118,7 +7961,7 @@
           \setbox2 = \hbox{\ignorespaces \refx{#1-snt}{}}%
           \ifdim \wd2 > 0pt \refx{#1-snt}\space\fi
         }%
-        % output the `[mynode]' via a macro so it can be overridden.
+        % output the `[mynode]' via the macro below so it can be overridden.
        \xrefprintnodename\printedrefname
        %
        % But we always want a comma and a space:
@@ -6126,11 +7969,37 @@
        %
        % output the `page 3'.
       \turnoffactive
       \putwordpage\tie\refx{#1-pg}{}%
-  \fi
+  \fi\fi
   \fi
   \endlink
 \endgroup}

+% Output a cross-manual xref to #1.  Used just above (twice).
+%
+% Only include the text "Section ``foo'' in" if foo is neither
+% missing nor Top.  Thus, @xref{,,,foo,The Foo Manual} outputs simply
+% "see The Foo Manual", the idea being to refer to the whole manual.
+%
+% But, this being TeX, we can't easily compare our node name against the
+% string "Top" while ignoring the possible spaces before and after in
+% the input.  By adding the arbitrary 7sp below, we make it much less
+% likely that a real node name would have the same width as "Top" (e.g.,
+% in a monospaced font).  Hopefully it will never happen in practice.
+%
+% For the same basic reason, we retypeset the "Top" at every
+% reference, since the current font is indeterminate.
+%
+\def\crossmanualxref#1{%
+  \setbox\toprefbox = \hbox{Top\kern7sp}%
+  \setbox2 = \hbox{\ignorespaces \printedrefname \unskip \kern7sp}%
+  \ifdim \wd2 > 7sp % nonempty?
+    \ifdim \wd2 = \wd\toprefbox \else  % same as Top?
+      \putwordSection{} ``\printedrefname'' \putwordin{}\space
+    \fi
+  \fi
+  #1%
+}
+
 % This macro is called from \xrefX for the `[nodename]' part of xref
 % output.  It's a separate macro only so it can be changed more easily,
 % since square brackets don't work well in some documents.  Particularly
@@ -6181,7 +8050,8 @@
     \angleleft un\-de\-fined\angleright
   \iflinks
     \ifhavexrefs
-      \message{\linenumber Undefined cross reference `#1'.}%
+      {\toks0 = {#1}% avoid expansion of possibly-complex value
+       \message{\linenumber Undefined cross reference `\the\toks0'.}}%
     \else
       \ifwarnedxrefs\else
         \global\warnedxrefstrue
@@ -6201,10 +8071,18 @@
 % collisions).  But if this is a float type, we have more work to do.
 %
 \def\xrdef#1#2{%
-  \expandafter\gdef\csname XR#1\endcsname{#2}% remember this xref value.
+  {% The node name might contain 8-bit characters, which in our current
+   % implementation are changed to commands like @'e.  Don't let these
+   % mess up the control sequence name.
+    \indexnofonts
+    \turnoffactive
+    \xdef\safexrefname{#1}%
+  }%
+  %
+  \expandafter\gdef\csname XR\safexrefname\endcsname{#2}% remember this xref
   %
   % Was that xref control sequence that we just defined for a float?
-  \expandafter\iffloat\csname XR#1\endcsname
+  \expandafter\iffloat\csname XR\safexrefname\endcsname
     % it was a float, and we have the (safe) float type in \iffloattype.
     \expandafter\let\expandafter\floatlist
      \csname floatlist\iffloattype\endcsname
@@ -6219,7 +8097,8 @@
     %
     % Remember this xref in the control sequence \floatlistFLOATTYPE,
     % for later use in \listoffloats.
-    \expandafter\xdef\csname floatlist\iffloattype\endcsname{\the\toks0{#1}}%
+    \expandafter\xdef\csname floatlist\iffloattype\endcsname{\the\toks0
+      {\safexrefname}}%
   \fi
 }
@@ -6323,6 +8202,7 @@
   \input\jobname.#1
 \endgroup}

+
 \message{insertions,}
 % including footnotes.

@@ -6335,7 +8215,7 @@
 % space to prevent strange expansion errors.)
 \def\supereject{\par\penalty -20000\footnoteno =0 }

-% @footnotestyle is meaningful for info output only.
+% @footnotestyle is meaningful for Info output only.
 \let\footnotestyle=\comment

 {\catcode `\@=11
@@ -6398,6 +8278,8 @@
    % expands into a box, it must come within the paragraph, lest it
    % provide a place where TeX can split the footnote.
    \footstrut
+   %
+   % Invoke rest of plain TeX footnote routine.
    \futurelet\next\fo@t
 }
 }%end \catcode `\@=11
@@ -6405,7 +8287,7 @@
 % In case a @footnote appears in a vbox, save the footnote text and create
 % the real \insert just after the vbox finished.  Otherwise, the insertion
 % would be lost.
-% Similarily, if a @footnote appears inside an alignment, save the footnote
+% Similarly, if a @footnote appears inside an alignment, save the footnote
 % text to a box and make the \insert when a row of the table is finished.
 % And the same can be done for other insert classes. --kasal, 16nov03.
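Aside: the reworked `\xrdef` above does two independent things worth separating in one's head: it sanitizes the node name before building a control sequence from it, and, for floats, it appends the sanitized label to a per-type list that `\listoffloats` walks later. A rough Python analogy of that bookkeeping may help readers not fluent in `\csname` tricks; `xrdef`, `xref_values`, and `float_lists` are invented names for illustration, not part of texinfo.tex.

```python
# Hypothetical sketch of the bookkeeping \xrdef performs; the names here
# are invented for illustration and are not texinfo.tex API.
xref_values = {}   # sanitized xref name -> remembered value
float_lists = {}   # float type -> list of labels (cf. \floatlistFLOATTYPE)

def xrdef(name, value, float_type=None):
    # Stand-in for the \indexnofonts/\turnoffactive pass: reduce the node
    # name to a form that is safe to use as a lookup key.
    key = "".join(ch if ch.isascii() else "?" for ch in name)
    xref_values[key] = value
    if float_type is not None:
        # Mirrors appending {\safexrefname} to the per-type float list.
        float_lists.setdefault(float_type, []).append(key)

xrdef("fig:caf\u00e9", "Figure 1.1", float_type="figure")
xrdef("Top", "(top)")
```

The point of the sanitizing step, in both the sketch and the TeX code, is that the raw name may contain characters that are unsafe in the key namespace; the value is always stored under the sanitized key so later lookups agree.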
@@ -6485,7 +8367,7 @@
 it from ftp://tug.org/tex/epsf.tex.}
 %
 \def\image#1{%
-  \ifx\epsfbox\undefined
+  \ifx\epsfbox\thisisundefined
     \ifwarnednoepsf \else
       \errhelp = \noepsfhelp
       \errmessage{epsf.tex not found, images will be ignored}%
@@ -6501,7 +8383,7 @@
 % #2 is (optional) width, #3 is (optional) height.
 % #4 is (ignored optional) html alt text.
 % #5 is (ignored optional) extension.
-% #6 is just the usual extra ignored arg for parsing this stuff.
+% #6 is just the usual extra ignored arg for parsing stuff.
 \newif\ifimagevmode
 \def\imagexxx#1,#2,#3,#4,#5,#6\finish{\begingroup
   \catcode`\^^M = 5     % in case we're inside an example
@@ -6509,15 +8391,30 @@
   % If the image is by itself, center it.
   \ifvmode
     \imagevmodetrue
-    \nobreak\bigskip
+  \else \ifx\centersub\centerV
+    % for @center @image, we need a vbox so we can have our vertical space
+    \imagevmodetrue
+    \vbox\bgroup % vbox has better behavior than vtop here
+  \fi\fi
+  %
+  \ifimagevmode
+    \nobreak\medskip
     % Usually we'll have text after the image which will insert
     % \parskip glue, so insert it here too to equalize the space
     % above and below.
     \nobreak\vskip\parskip \nobreak
-    \line\bgroup\hss
   \fi
   %
+  % Leave vertical mode so that indentation from an enclosing
+  % environment such as @quotation is respected.
+  % However, if we're at the top level, we don't want the
+  % normal paragraph indentation.
+  % On the other hand, if we are in the case of @center @image, we don't
+  % want to start a paragraph, which will create a hsize-width box and
+  % eradicate the centering.
+  \ifx\centersub\centerV\else \noindent \fi
+  %
   % Output the image.
\ifpdf \dopdfimage{#1}{#2}{#3}% @@ -6528,7 +8425,10 @@ \epsfbox{#1.eps}% \fi % - \ifimagevmode \hss \egroup \bigbreak \fi % space after the image + \ifimagevmode + \medskip % space after a standalone image + \fi + \ifx\centersub\centerV \egroup \fi \endgroup} @@ -6595,13 +8495,13 @@ \global\advance\floatno by 1 % {% - % This magic value for \thissection is output by \setref as the + % This magic value for \lastsection is output by \setref as the % XREFLABEL-title value. \xrefX uses it to distinguish float % labels (which have a completely different output format) from % node and anchor labels. And \xrdef uses it to construct the % lists of floats. % - \edef\thissection{\floatmagic=\safefloattype}% + \edef\lastsection{\floatmagic=\safefloattype}% \setref{\floatlabel}{Yfloat}% }% \fi @@ -6669,6 +8569,7 @@ % caption if specified, else the full caption if specified, else nothing. {% \atdummies + % % since we read the caption text in the macro world, where ^^M % is turned into a normal character, we have to scan it back, so % we don't write the literal three characters "^^M" into the aux file. @@ -6689,8 +8590,9 @@ % % place the captured inserts % - % BEWARE: when the floats start float, we have to issue warning whenever an - % insert appears inside a float which could possibly float. --kasal, 26may04 + % BEWARE: when the floats start floating, we have to issue warning + % whenever an insert appears inside a float which could possibly + % float. --kasal, 26may04 % \checkinserts } @@ -6734,7 +8636,7 @@ % #1 is the control sequence we are passed; we expand into a conditional % which is true if #1 represents a float ref. That is, the magic -% \thissection value which we \setref above. +% \lastsection value which we \setref above. % \def\iffloat#1{\expandafter\doiffloat#1==\finish} % @@ -6795,39 +8697,909 @@ \writeentry }} + \message{localization,} -% and i18n. - -% @documentlanguage is usually given very early, just after -% @setfilename. 
If done too late, it may not override everything -% properly. Single argument is the language abbreviation. -% It would be nice if we could set up a hyphenation file here. -% -\parseargdef\documentlanguage{% + +% For single-language documents, @documentlanguage is usually given very +% early, just after @documentencoding. Single argument is the language +% (de) or locale (de_DE) abbreviation. +% +{ + \catcode`\_ = \active + \globaldefs=1 +\parseargdef\documentlanguage{\begingroup + \let_=\normalunderscore % normal _ character for filenames \tex % read txi-??.tex file in plain TeX. - % Read the file if it exists. + % Read the file by the name they passed if it exists. \openin 1 txi-#1.tex \ifeof 1 - \errhelp = \nolanghelp - \errmessage{Cannot read language file txi-#1.tex}% + \documentlanguagetrywithoutunderscore{#1_\finish}% \else + \globaldefs = 1 % everything in the txi-LL files needs to persist \input txi-#1.tex \fi \closein 1 - \endgroup -} + \endgroup % end raw TeX +\endgroup} +% +% If they passed de_DE, and txi-de_DE.tex doesn't exist, +% try txi-de.tex. +% +\gdef\documentlanguagetrywithoutunderscore#1_#2\finish{% + \openin 1 txi-#1.tex + \ifeof 1 + \errhelp = \nolanghelp + \errmessage{Cannot read language file txi-#1.tex}% + \else + \globaldefs = 1 % everything in the txi-LL files needs to persist + \input txi-#1.tex + \fi + \closein 1 +} +}% end of special _ catcode +% \newhelp\nolanghelp{The given language definition file cannot be found or -is empty. Maybe you need to install it? In the current directory -should work if nowhere else does.} - - -% @documentencoding should change something in TeX eventually, most -% likely, but for now just recognize it. -\let\documentencoding = \comment - - -% Page size parameters. -% +is empty. Maybe you need to install it? 
Putting it in the current +directory should work if nowhere else does.} + +% This macro is called from txi-??.tex files; the first argument is the +% \language name to set (without the "\lang@" prefix), the second and +% third args are \{left,right}hyphenmin. +% +% The language names to pass are determined when the format is built. +% See the etex.log file created at that time, e.g., +% /usr/local/texlive/2008/texmf-var/web2c/pdftex/etex.log. +% +% With TeX Live 2008, etex now includes hyphenation patterns for all +% available languages. This means we can support hyphenation in +% Texinfo, at least to some extent. (This still doesn't solve the +% accented characters problem.) +% +\catcode`@=11 +\def\txisetlanguage#1#2#3{% + % do not set the language if the name is undefined in the current TeX. + \expandafter\ifx\csname lang@#1\endcsname \relax + \message{no patterns for #1}% + \else + \global\language = \csname lang@#1\endcsname + \fi + % but there is no harm in adjusting the hyphenmin values regardless. + \global\lefthyphenmin = #2\relax + \global\righthyphenmin = #3\relax +} + +% Helpers for encodings. +% Set the catcode of characters 128 through 255 to the specified number. +% +\def\setnonasciicharscatcode#1{% + \count255=128 + \loop\ifnum\count255<256 + \global\catcode\count255=#1\relax + \advance\count255 by 1 + \repeat +} + +\def\setnonasciicharscatcodenonglobal#1{% + \count255=128 + \loop\ifnum\count255<256 + \catcode\count255=#1\relax + \advance\count255 by 1 + \repeat +} + +% @documentencoding sets the definition of non-ASCII characters +% according to the specified encoding. +% +\parseargdef\documentencoding{% + % Encoding being declared for the document. + \def\declaredencoding{\csname #1.enc\endcsname}% + % + % Supported encodings: names converted to tokens in order to be able + % to compare them with \ifx. 
+  \def\ascii{\csname US-ASCII.enc\endcsname}%
+  \def\latnine{\csname ISO-8859-15.enc\endcsname}%
+  \def\latone{\csname ISO-8859-1.enc\endcsname}%
+  \def\lattwo{\csname ISO-8859-2.enc\endcsname}%
+  \def\utfeight{\csname UTF-8.enc\endcsname}%
+  %
+  \ifx \declaredencoding \ascii
+    \asciichardefs
+  %
+  \else \ifx \declaredencoding \lattwo
+    \setnonasciicharscatcode\active
+    \lattwochardefs
+  %
+  \else \ifx \declaredencoding \latone
+    \setnonasciicharscatcode\active
+    \latonechardefs
+  %
+  \else \ifx \declaredencoding \latnine
+    \setnonasciicharscatcode\active
+    \latninechardefs
+  %
+  \else \ifx \declaredencoding \utfeight
+    \setnonasciicharscatcode\active
+    \utfeightchardefs
+  %
+  \else
+    \message{Unknown document encoding #1, ignoring.}%
+  %
+  \fi % utfeight
+  \fi % latnine
+  \fi % latone
+  \fi % lattwo
+  \fi % ascii
+}
+
+% A message to be logged when using a character that isn't available in
+% the default font encoding (OT1).
+%
+\def\missingcharmsg#1{\message{Character missing in OT1 encoding: #1.}}
+
+% Take account of \c (plain) vs. \, (Texinfo) difference.
+\def\cedilla#1{\ifx\c\ptexc\c{#1}\else\,{#1}\fi}
+
+% First, make active non-ASCII characters in order for them to be
+% correctly categorized when TeX reads the replacement text of
+% macros containing the character definitions.
+\setnonasciicharscatcode\active
+%
+% Latin1 (ISO-8859-1) character definitions.
+\def\latonechardefs{% + \gdef^^a0{\tie} + \gdef^^a1{\exclamdown} + \gdef^^a2{\missingcharmsg{CENT SIGN}} + \gdef^^a3{{\pounds}} + \gdef^^a4{\missingcharmsg{CURRENCY SIGN}} + \gdef^^a5{\missingcharmsg{YEN SIGN}} + \gdef^^a6{\missingcharmsg{BROKEN BAR}} + \gdef^^a7{\S} + \gdef^^a8{\"{}} + \gdef^^a9{\copyright} + \gdef^^aa{\ordf} + \gdef^^ab{\guillemetleft} + \gdef^^ac{$\lnot$} + \gdef^^ad{\-} + \gdef^^ae{\registeredsymbol} + \gdef^^af{\={}} + % + \gdef^^b0{\textdegree} + \gdef^^b1{$\pm$} + \gdef^^b2{$^2$} + \gdef^^b3{$^3$} + \gdef^^b4{\'{}} + \gdef^^b5{$\mu$} + \gdef^^b6{\P} + % + \gdef^^b7{$^.$} + \gdef^^b8{\cedilla\ } + \gdef^^b9{$^1$} + \gdef^^ba{\ordm} + % + \gdef^^bb{\guillemetright} + \gdef^^bc{$1\over4$} + \gdef^^bd{$1\over2$} + \gdef^^be{$3\over4$} + \gdef^^bf{\questiondown} + % + \gdef^^c0{\`A} + \gdef^^c1{\'A} + \gdef^^c2{\^A} + \gdef^^c3{\~A} + \gdef^^c4{\"A} + \gdef^^c5{\ringaccent A} + \gdef^^c6{\AE} + \gdef^^c7{\cedilla C} + \gdef^^c8{\`E} + \gdef^^c9{\'E} + \gdef^^ca{\^E} + \gdef^^cb{\"E} + \gdef^^cc{\`I} + \gdef^^cd{\'I} + \gdef^^ce{\^I} + \gdef^^cf{\"I} + % + \gdef^^d0{\DH} + \gdef^^d1{\~N} + \gdef^^d2{\`O} + \gdef^^d3{\'O} + \gdef^^d4{\^O} + \gdef^^d5{\~O} + \gdef^^d6{\"O} + \gdef^^d7{$\times$} + \gdef^^d8{\O} + \gdef^^d9{\`U} + \gdef^^da{\'U} + \gdef^^db{\^U} + \gdef^^dc{\"U} + \gdef^^dd{\'Y} + \gdef^^de{\TH} + \gdef^^df{\ss} + % + \gdef^^e0{\`a} + \gdef^^e1{\'a} + \gdef^^e2{\^a} + \gdef^^e3{\~a} + \gdef^^e4{\"a} + \gdef^^e5{\ringaccent a} + \gdef^^e6{\ae} + \gdef^^e7{\cedilla c} + \gdef^^e8{\`e} + \gdef^^e9{\'e} + \gdef^^ea{\^e} + \gdef^^eb{\"e} + \gdef^^ec{\`{\dotless i}} + \gdef^^ed{\'{\dotless i}} + \gdef^^ee{\^{\dotless i}} + \gdef^^ef{\"{\dotless i}} + % + \gdef^^f0{\dh} + \gdef^^f1{\~n} + \gdef^^f2{\`o} + \gdef^^f3{\'o} + \gdef^^f4{\^o} + \gdef^^f5{\~o} + \gdef^^f6{\"o} + \gdef^^f7{$\div$} + \gdef^^f8{\o} + \gdef^^f9{\`u} + \gdef^^fa{\'u} + \gdef^^fb{\^u} + \gdef^^fc{\"u} + \gdef^^fd{\'y} + \gdef^^fe{\th} + \gdef^^ff{\"y} +} + +% Latin9 
(ISO-8859-15) encoding character definitions. +\def\latninechardefs{% + % Encoding is almost identical to Latin1. + \latonechardefs + % + \gdef^^a4{\euro} + \gdef^^a6{\v S} + \gdef^^a8{\v s} + \gdef^^b4{\v Z} + \gdef^^b8{\v z} + \gdef^^bc{\OE} + \gdef^^bd{\oe} + \gdef^^be{\"Y} +} + +% Latin2 (ISO-8859-2) character definitions. +\def\lattwochardefs{% + \gdef^^a0{\tie} + \gdef^^a1{\ogonek{A}} + \gdef^^a2{\u{}} + \gdef^^a3{\L} + \gdef^^a4{\missingcharmsg{CURRENCY SIGN}} + \gdef^^a5{\v L} + \gdef^^a6{\'S} + \gdef^^a7{\S} + \gdef^^a8{\"{}} + \gdef^^a9{\v S} + \gdef^^aa{\cedilla S} + \gdef^^ab{\v T} + \gdef^^ac{\'Z} + \gdef^^ad{\-} + \gdef^^ae{\v Z} + \gdef^^af{\dotaccent Z} + % + \gdef^^b0{\textdegree} + \gdef^^b1{\ogonek{a}} + \gdef^^b2{\ogonek{ }} + \gdef^^b3{\l} + \gdef^^b4{\'{}} + \gdef^^b5{\v l} + \gdef^^b6{\'s} + \gdef^^b7{\v{}} + \gdef^^b8{\cedilla\ } + \gdef^^b9{\v s} + \gdef^^ba{\cedilla s} + \gdef^^bb{\v t} + \gdef^^bc{\'z} + \gdef^^bd{\H{}} + \gdef^^be{\v z} + \gdef^^bf{\dotaccent z} + % + \gdef^^c0{\'R} + \gdef^^c1{\'A} + \gdef^^c2{\^A} + \gdef^^c3{\u A} + \gdef^^c4{\"A} + \gdef^^c5{\'L} + \gdef^^c6{\'C} + \gdef^^c7{\cedilla C} + \gdef^^c8{\v C} + \gdef^^c9{\'E} + \gdef^^ca{\ogonek{E}} + \gdef^^cb{\"E} + \gdef^^cc{\v E} + \gdef^^cd{\'I} + \gdef^^ce{\^I} + \gdef^^cf{\v D} + % + \gdef^^d0{\DH} + \gdef^^d1{\'N} + \gdef^^d2{\v N} + \gdef^^d3{\'O} + \gdef^^d4{\^O} + \gdef^^d5{\H O} + \gdef^^d6{\"O} + \gdef^^d7{$\times$} + \gdef^^d8{\v R} + \gdef^^d9{\ringaccent U} + \gdef^^da{\'U} + \gdef^^db{\H U} + \gdef^^dc{\"U} + \gdef^^dd{\'Y} + \gdef^^de{\cedilla T} + \gdef^^df{\ss} + % + \gdef^^e0{\'r} + \gdef^^e1{\'a} + \gdef^^e2{\^a} + \gdef^^e3{\u a} + \gdef^^e4{\"a} + \gdef^^e5{\'l} + \gdef^^e6{\'c} + \gdef^^e7{\cedilla c} + \gdef^^e8{\v c} + \gdef^^e9{\'e} + \gdef^^ea{\ogonek{e}} + \gdef^^eb{\"e} + \gdef^^ec{\v e} + \gdef^^ed{\'{\dotless{i}}} + \gdef^^ee{\^{\dotless{i}}} + \gdef^^ef{\v d} + % + \gdef^^f0{\dh} + \gdef^^f1{\'n} + \gdef^^f2{\v n} + \gdef^^f3{\'o} + 
\gdef^^f4{\^o} + \gdef^^f5{\H o} + \gdef^^f6{\"o} + \gdef^^f7{$\div$} + \gdef^^f8{\v r} + \gdef^^f9{\ringaccent u} + \gdef^^fa{\'u} + \gdef^^fb{\H u} + \gdef^^fc{\"u} + \gdef^^fd{\'y} + \gdef^^fe{\cedilla t} + \gdef^^ff{\dotaccent{}} +} + +% UTF-8 character definitions. +% +% This code to support UTF-8 is based on LaTeX's utf8.def, with some +% changes for Texinfo conventions. It is included here under the GPL by +% permission from Frank Mittelbach and the LaTeX team. +% +\newcount\countUTFx +\newcount\countUTFy +\newcount\countUTFz + +\gdef\UTFviiiTwoOctets#1#2{\expandafter + \UTFviiiDefined\csname u8:#1\string #2\endcsname} +% +\gdef\UTFviiiThreeOctets#1#2#3{\expandafter + \UTFviiiDefined\csname u8:#1\string #2\string #3\endcsname} +% +\gdef\UTFviiiFourOctets#1#2#3#4{\expandafter + \UTFviiiDefined\csname u8:#1\string #2\string #3\string #4\endcsname} + +\gdef\UTFviiiDefined#1{% + \ifx #1\relax + \message{\linenumber Unicode char \string #1 not defined for Texinfo}% + \else + \expandafter #1% + \fi +} + +\begingroup + \catcode`\~13 + \catcode`\"12 + + \def\UTFviiiLoop{% + \global\catcode\countUTFx\active + \uccode`\~\countUTFx + \uppercase\expandafter{\UTFviiiTmp}% + \advance\countUTFx by 1 + \ifnum\countUTFx < \countUTFy + \expandafter\UTFviiiLoop + \fi} + + \countUTFx = "C2 + \countUTFy = "E0 + \def\UTFviiiTmp{% + \xdef~{\noexpand\UTFviiiTwoOctets\string~}} + \UTFviiiLoop + + \countUTFx = "E0 + \countUTFy = "F0 + \def\UTFviiiTmp{% + \xdef~{\noexpand\UTFviiiThreeOctets\string~}} + \UTFviiiLoop + + \countUTFx = "F0 + \countUTFy = "F4 + \def\UTFviiiTmp{% + \xdef~{\noexpand\UTFviiiFourOctets\string~}} + \UTFviiiLoop +\endgroup + +\begingroup + \catcode`\"=12 + \catcode`\<=12 + \catcode`\.=12 + \catcode`\,=12 + \catcode`\;=12 + \catcode`\!=12 + \catcode`\~=13 + + \gdef\DeclareUnicodeCharacter#1#2{% + \countUTFz = "#1\relax + %\wlog{\space\space defining Unicode char U+#1 (decimal \the\countUTFz)}% + \begingroup + \parseXMLCharref + \def\UTFviiiTwoOctets##1##2{% + 
\csname u8:##1\string ##2\endcsname}% + \def\UTFviiiThreeOctets##1##2##3{% + \csname u8:##1\string ##2\string ##3\endcsname}% + \def\UTFviiiFourOctets##1##2##3##4{% + \csname u8:##1\string ##2\string ##3\string ##4\endcsname}% + \expandafter\expandafter\expandafter\expandafter + \expandafter\expandafter\expandafter + \gdef\UTFviiiTmp{#2}% + \endgroup} + + \gdef\parseXMLCharref{% + \ifnum\countUTFz < "A0\relax + \errhelp = \EMsimple + \errmessage{Cannot define Unicode char value < 00A0}% + \else\ifnum\countUTFz < "800\relax + \parseUTFviiiA,% + \parseUTFviiiB C\UTFviiiTwoOctets.,% + \else\ifnum\countUTFz < "10000\relax + \parseUTFviiiA;% + \parseUTFviiiA,% + \parseUTFviiiB E\UTFviiiThreeOctets.{,;}% + \else + \parseUTFviiiA;% + \parseUTFviiiA,% + \parseUTFviiiA!% + \parseUTFviiiB F\UTFviiiFourOctets.{!,;}% + \fi\fi\fi + } + + \gdef\parseUTFviiiA#1{% + \countUTFx = \countUTFz + \divide\countUTFz by 64 + \countUTFy = \countUTFz + \multiply\countUTFz by 64 + \advance\countUTFx by -\countUTFz + \advance\countUTFx by 128 + \uccode `#1\countUTFx + \countUTFz = \countUTFy} + + \gdef\parseUTFviiiB#1#2#3#4{% + \advance\countUTFz by "#10\relax + \uccode `#3\countUTFz + \uppercase{\gdef\UTFviiiTmp{#2#3#4}}} +\endgroup + +\def\utfeightchardefs{% + \DeclareUnicodeCharacter{00A0}{\tie} + \DeclareUnicodeCharacter{00A1}{\exclamdown} + \DeclareUnicodeCharacter{00A3}{\pounds} + \DeclareUnicodeCharacter{00A8}{\"{ }} + \DeclareUnicodeCharacter{00A9}{\copyright} + \DeclareUnicodeCharacter{00AA}{\ordf} + \DeclareUnicodeCharacter{00AB}{\guillemetleft} + \DeclareUnicodeCharacter{00AD}{\-} + \DeclareUnicodeCharacter{00AE}{\registeredsymbol} + \DeclareUnicodeCharacter{00AF}{\={ }} + + \DeclareUnicodeCharacter{00B0}{\ringaccent{ }} + \DeclareUnicodeCharacter{00B4}{\'{ }} + \DeclareUnicodeCharacter{00B8}{\cedilla{ }} + \DeclareUnicodeCharacter{00BA}{\ordm} + \DeclareUnicodeCharacter{00BB}{\guillemetright} + \DeclareUnicodeCharacter{00BF}{\questiondown} + + \DeclareUnicodeCharacter{00C0}{\`A} + 
\DeclareUnicodeCharacter{00C1}{\'A} + \DeclareUnicodeCharacter{00C2}{\^A} + \DeclareUnicodeCharacter{00C3}{\~A} + \DeclareUnicodeCharacter{00C4}{\"A} + \DeclareUnicodeCharacter{00C5}{\AA} + \DeclareUnicodeCharacter{00C6}{\AE} + \DeclareUnicodeCharacter{00C7}{\cedilla{C}} + \DeclareUnicodeCharacter{00C8}{\`E} + \DeclareUnicodeCharacter{00C9}{\'E} + \DeclareUnicodeCharacter{00CA}{\^E} + \DeclareUnicodeCharacter{00CB}{\"E} + \DeclareUnicodeCharacter{00CC}{\`I} + \DeclareUnicodeCharacter{00CD}{\'I} + \DeclareUnicodeCharacter{00CE}{\^I} + \DeclareUnicodeCharacter{00CF}{\"I} + + \DeclareUnicodeCharacter{00D0}{\DH} + \DeclareUnicodeCharacter{00D1}{\~N} + \DeclareUnicodeCharacter{00D2}{\`O} + \DeclareUnicodeCharacter{00D3}{\'O} + \DeclareUnicodeCharacter{00D4}{\^O} + \DeclareUnicodeCharacter{00D5}{\~O} + \DeclareUnicodeCharacter{00D6}{\"O} + \DeclareUnicodeCharacter{00D8}{\O} + \DeclareUnicodeCharacter{00D9}{\`U} + \DeclareUnicodeCharacter{00DA}{\'U} + \DeclareUnicodeCharacter{00DB}{\^U} + \DeclareUnicodeCharacter{00DC}{\"U} + \DeclareUnicodeCharacter{00DD}{\'Y} + \DeclareUnicodeCharacter{00DE}{\TH} + \DeclareUnicodeCharacter{00DF}{\ss} + + \DeclareUnicodeCharacter{00E0}{\`a} + \DeclareUnicodeCharacter{00E1}{\'a} + \DeclareUnicodeCharacter{00E2}{\^a} + \DeclareUnicodeCharacter{00E3}{\~a} + \DeclareUnicodeCharacter{00E4}{\"a} + \DeclareUnicodeCharacter{00E5}{\aa} + \DeclareUnicodeCharacter{00E6}{\ae} + \DeclareUnicodeCharacter{00E7}{\cedilla{c}} + \DeclareUnicodeCharacter{00E8}{\`e} + \DeclareUnicodeCharacter{00E9}{\'e} + \DeclareUnicodeCharacter{00EA}{\^e} + \DeclareUnicodeCharacter{00EB}{\"e} + \DeclareUnicodeCharacter{00EC}{\`{\dotless{i}}} + \DeclareUnicodeCharacter{00ED}{\'{\dotless{i}}} + \DeclareUnicodeCharacter{00EE}{\^{\dotless{i}}} + \DeclareUnicodeCharacter{00EF}{\"{\dotless{i}}} + + \DeclareUnicodeCharacter{00F0}{\dh} + \DeclareUnicodeCharacter{00F1}{\~n} + \DeclareUnicodeCharacter{00F2}{\`o} + \DeclareUnicodeCharacter{00F3}{\'o} + 
\DeclareUnicodeCharacter{00F4}{\^o} + \DeclareUnicodeCharacter{00F5}{\~o} + \DeclareUnicodeCharacter{00F6}{\"o} + \DeclareUnicodeCharacter{00F8}{\o} + \DeclareUnicodeCharacter{00F9}{\`u} + \DeclareUnicodeCharacter{00FA}{\'u} + \DeclareUnicodeCharacter{00FB}{\^u} + \DeclareUnicodeCharacter{00FC}{\"u} + \DeclareUnicodeCharacter{00FD}{\'y} + \DeclareUnicodeCharacter{00FE}{\th} + \DeclareUnicodeCharacter{00FF}{\"y} + + \DeclareUnicodeCharacter{0100}{\=A} + \DeclareUnicodeCharacter{0101}{\=a} + \DeclareUnicodeCharacter{0102}{\u{A}} + \DeclareUnicodeCharacter{0103}{\u{a}} + \DeclareUnicodeCharacter{0104}{\ogonek{A}} + \DeclareUnicodeCharacter{0105}{\ogonek{a}} + \DeclareUnicodeCharacter{0106}{\'C} + \DeclareUnicodeCharacter{0107}{\'c} + \DeclareUnicodeCharacter{0108}{\^C} + \DeclareUnicodeCharacter{0109}{\^c} + \DeclareUnicodeCharacter{010A}{\dotaccent{C}} + \DeclareUnicodeCharacter{010B}{\dotaccent{c}} + \DeclareUnicodeCharacter{010C}{\v{C}} + \DeclareUnicodeCharacter{010D}{\v{c}} + \DeclareUnicodeCharacter{010E}{\v{D}} + + \DeclareUnicodeCharacter{0112}{\=E} + \DeclareUnicodeCharacter{0113}{\=e} + \DeclareUnicodeCharacter{0114}{\u{E}} + \DeclareUnicodeCharacter{0115}{\u{e}} + \DeclareUnicodeCharacter{0116}{\dotaccent{E}} + \DeclareUnicodeCharacter{0117}{\dotaccent{e}} + \DeclareUnicodeCharacter{0118}{\ogonek{E}} + \DeclareUnicodeCharacter{0119}{\ogonek{e}} + \DeclareUnicodeCharacter{011A}{\v{E}} + \DeclareUnicodeCharacter{011B}{\v{e}} + \DeclareUnicodeCharacter{011C}{\^G} + \DeclareUnicodeCharacter{011D}{\^g} + \DeclareUnicodeCharacter{011E}{\u{G}} + \DeclareUnicodeCharacter{011F}{\u{g}} + + \DeclareUnicodeCharacter{0120}{\dotaccent{G}} + \DeclareUnicodeCharacter{0121}{\dotaccent{g}} + \DeclareUnicodeCharacter{0124}{\^H} + \DeclareUnicodeCharacter{0125}{\^h} + \DeclareUnicodeCharacter{0128}{\~I} + \DeclareUnicodeCharacter{0129}{\~{\dotless{i}}} + \DeclareUnicodeCharacter{012A}{\=I} + \DeclareUnicodeCharacter{012B}{\={\dotless{i}}} + 
\DeclareUnicodeCharacter{012C}{\u{I}} + \DeclareUnicodeCharacter{012D}{\u{\dotless{i}}} + + \DeclareUnicodeCharacter{0130}{\dotaccent{I}} + \DeclareUnicodeCharacter{0131}{\dotless{i}} + \DeclareUnicodeCharacter{0132}{IJ} + \DeclareUnicodeCharacter{0133}{ij} + \DeclareUnicodeCharacter{0134}{\^J} + \DeclareUnicodeCharacter{0135}{\^{\dotless{j}}} + \DeclareUnicodeCharacter{0139}{\'L} + \DeclareUnicodeCharacter{013A}{\'l} + + \DeclareUnicodeCharacter{0141}{\L} + \DeclareUnicodeCharacter{0142}{\l} + \DeclareUnicodeCharacter{0143}{\'N} + \DeclareUnicodeCharacter{0144}{\'n} + \DeclareUnicodeCharacter{0147}{\v{N}} + \DeclareUnicodeCharacter{0148}{\v{n}} + \DeclareUnicodeCharacter{014C}{\=O} + \DeclareUnicodeCharacter{014D}{\=o} + \DeclareUnicodeCharacter{014E}{\u{O}} + \DeclareUnicodeCharacter{014F}{\u{o}} + + \DeclareUnicodeCharacter{0150}{\H{O}} + \DeclareUnicodeCharacter{0151}{\H{o}} + \DeclareUnicodeCharacter{0152}{\OE} + \DeclareUnicodeCharacter{0153}{\oe} + \DeclareUnicodeCharacter{0154}{\'R} + \DeclareUnicodeCharacter{0155}{\'r} + \DeclareUnicodeCharacter{0158}{\v{R}} + \DeclareUnicodeCharacter{0159}{\v{r}} + \DeclareUnicodeCharacter{015A}{\'S} + \DeclareUnicodeCharacter{015B}{\'s} + \DeclareUnicodeCharacter{015C}{\^S} + \DeclareUnicodeCharacter{015D}{\^s} + \DeclareUnicodeCharacter{015E}{\cedilla{S}} + \DeclareUnicodeCharacter{015F}{\cedilla{s}} + + \DeclareUnicodeCharacter{0160}{\v{S}} + \DeclareUnicodeCharacter{0161}{\v{s}} + \DeclareUnicodeCharacter{0162}{\cedilla{T}} + \DeclareUnicodeCharacter{0163}{\cedilla{t}} + \DeclareUnicodeCharacter{0164}{\v{T}} + + \DeclareUnicodeCharacter{0168}{\~U} + \DeclareUnicodeCharacter{0169}{\~u} + \DeclareUnicodeCharacter{016A}{\=U} + \DeclareUnicodeCharacter{016B}{\=u} + \DeclareUnicodeCharacter{016C}{\u{U}} + \DeclareUnicodeCharacter{016D}{\u{u}} + \DeclareUnicodeCharacter{016E}{\ringaccent{U}} + \DeclareUnicodeCharacter{016F}{\ringaccent{u}} + + \DeclareUnicodeCharacter{0170}{\H{U}} + \DeclareUnicodeCharacter{0171}{\H{u}} + 
\DeclareUnicodeCharacter{0174}{\^W} + \DeclareUnicodeCharacter{0175}{\^w} + \DeclareUnicodeCharacter{0176}{\^Y} + \DeclareUnicodeCharacter{0177}{\^y} + \DeclareUnicodeCharacter{0178}{\"Y} + \DeclareUnicodeCharacter{0179}{\'Z} + \DeclareUnicodeCharacter{017A}{\'z} + \DeclareUnicodeCharacter{017B}{\dotaccent{Z}} + \DeclareUnicodeCharacter{017C}{\dotaccent{z}} + \DeclareUnicodeCharacter{017D}{\v{Z}} + \DeclareUnicodeCharacter{017E}{\v{z}} + + \DeclareUnicodeCharacter{01C4}{D\v{Z}} + \DeclareUnicodeCharacter{01C5}{D\v{z}} + \DeclareUnicodeCharacter{01C6}{d\v{z}} + \DeclareUnicodeCharacter{01C7}{LJ} + \DeclareUnicodeCharacter{01C8}{Lj} + \DeclareUnicodeCharacter{01C9}{lj} + \DeclareUnicodeCharacter{01CA}{NJ} + \DeclareUnicodeCharacter{01CB}{Nj} + \DeclareUnicodeCharacter{01CC}{nj} + \DeclareUnicodeCharacter{01CD}{\v{A}} + \DeclareUnicodeCharacter{01CE}{\v{a}} + \DeclareUnicodeCharacter{01CF}{\v{I}} + + \DeclareUnicodeCharacter{01D0}{\v{\dotless{i}}} + \DeclareUnicodeCharacter{01D1}{\v{O}} + \DeclareUnicodeCharacter{01D2}{\v{o}} + \DeclareUnicodeCharacter{01D3}{\v{U}} + \DeclareUnicodeCharacter{01D4}{\v{u}} + + \DeclareUnicodeCharacter{01E2}{\={\AE}} + \DeclareUnicodeCharacter{01E3}{\={\ae}} + \DeclareUnicodeCharacter{01E6}{\v{G}} + \DeclareUnicodeCharacter{01E7}{\v{g}} + \DeclareUnicodeCharacter{01E8}{\v{K}} + \DeclareUnicodeCharacter{01E9}{\v{k}} + + \DeclareUnicodeCharacter{01F0}{\v{\dotless{j}}} + \DeclareUnicodeCharacter{01F1}{DZ} + \DeclareUnicodeCharacter{01F2}{Dz} + \DeclareUnicodeCharacter{01F3}{dz} + \DeclareUnicodeCharacter{01F4}{\'G} + \DeclareUnicodeCharacter{01F5}{\'g} + \DeclareUnicodeCharacter{01F8}{\`N} + \DeclareUnicodeCharacter{01F9}{\`n} + \DeclareUnicodeCharacter{01FC}{\'{\AE}} + \DeclareUnicodeCharacter{01FD}{\'{\ae}} + \DeclareUnicodeCharacter{01FE}{\'{\O}} + \DeclareUnicodeCharacter{01FF}{\'{\o}} + + \DeclareUnicodeCharacter{021E}{\v{H}} + \DeclareUnicodeCharacter{021F}{\v{h}} + + \DeclareUnicodeCharacter{0226}{\dotaccent{A}} + 
\DeclareUnicodeCharacter{0227}{\dotaccent{a}} + \DeclareUnicodeCharacter{0228}{\cedilla{E}} + \DeclareUnicodeCharacter{0229}{\cedilla{e}} + \DeclareUnicodeCharacter{022E}{\dotaccent{O}} + \DeclareUnicodeCharacter{022F}{\dotaccent{o}} + + \DeclareUnicodeCharacter{0232}{\=Y} + \DeclareUnicodeCharacter{0233}{\=y} + \DeclareUnicodeCharacter{0237}{\dotless{j}} + + \DeclareUnicodeCharacter{02DB}{\ogonek{ }} + + \DeclareUnicodeCharacter{1E02}{\dotaccent{B}} + \DeclareUnicodeCharacter{1E03}{\dotaccent{b}} + \DeclareUnicodeCharacter{1E04}{\udotaccent{B}} + \DeclareUnicodeCharacter{1E05}{\udotaccent{b}} + \DeclareUnicodeCharacter{1E06}{\ubaraccent{B}} + \DeclareUnicodeCharacter{1E07}{\ubaraccent{b}} + \DeclareUnicodeCharacter{1E0A}{\dotaccent{D}} + \DeclareUnicodeCharacter{1E0B}{\dotaccent{d}} + \DeclareUnicodeCharacter{1E0C}{\udotaccent{D}} + \DeclareUnicodeCharacter{1E0D}{\udotaccent{d}} + \DeclareUnicodeCharacter{1E0E}{\ubaraccent{D}} + \DeclareUnicodeCharacter{1E0F}{\ubaraccent{d}} + + \DeclareUnicodeCharacter{1E1E}{\dotaccent{F}} + \DeclareUnicodeCharacter{1E1F}{\dotaccent{f}} + + \DeclareUnicodeCharacter{1E20}{\=G} + \DeclareUnicodeCharacter{1E21}{\=g} + \DeclareUnicodeCharacter{1E22}{\dotaccent{H}} + \DeclareUnicodeCharacter{1E23}{\dotaccent{h}} + \DeclareUnicodeCharacter{1E24}{\udotaccent{H}} + \DeclareUnicodeCharacter{1E25}{\udotaccent{h}} + \DeclareUnicodeCharacter{1E26}{\"H} + \DeclareUnicodeCharacter{1E27}{\"h} + + \DeclareUnicodeCharacter{1E30}{\'K} + \DeclareUnicodeCharacter{1E31}{\'k} + \DeclareUnicodeCharacter{1E32}{\udotaccent{K}} + \DeclareUnicodeCharacter{1E33}{\udotaccent{k}} + \DeclareUnicodeCharacter{1E34}{\ubaraccent{K}} + \DeclareUnicodeCharacter{1E35}{\ubaraccent{k}} + \DeclareUnicodeCharacter{1E36}{\udotaccent{L}} + \DeclareUnicodeCharacter{1E37}{\udotaccent{l}} + \DeclareUnicodeCharacter{1E3A}{\ubaraccent{L}} + \DeclareUnicodeCharacter{1E3B}{\ubaraccent{l}} + \DeclareUnicodeCharacter{1E3E}{\'M} + \DeclareUnicodeCharacter{1E3F}{\'m} + + 
\DeclareUnicodeCharacter{1E40}{\dotaccent{M}} + \DeclareUnicodeCharacter{1E41}{\dotaccent{m}} + \DeclareUnicodeCharacter{1E42}{\udotaccent{M}} + \DeclareUnicodeCharacter{1E43}{\udotaccent{m}} + \DeclareUnicodeCharacter{1E44}{\dotaccent{N}} + \DeclareUnicodeCharacter{1E45}{\dotaccent{n}} + \DeclareUnicodeCharacter{1E46}{\udotaccent{N}} + \DeclareUnicodeCharacter{1E47}{\udotaccent{n}} + \DeclareUnicodeCharacter{1E48}{\ubaraccent{N}} + \DeclareUnicodeCharacter{1E49}{\ubaraccent{n}} + + \DeclareUnicodeCharacter{1E54}{\'P} + \DeclareUnicodeCharacter{1E55}{\'p} + \DeclareUnicodeCharacter{1E56}{\dotaccent{P}} + \DeclareUnicodeCharacter{1E57}{\dotaccent{p}} + \DeclareUnicodeCharacter{1E58}{\dotaccent{R}} + \DeclareUnicodeCharacter{1E59}{\dotaccent{r}} + \DeclareUnicodeCharacter{1E5A}{\udotaccent{R}} + \DeclareUnicodeCharacter{1E5B}{\udotaccent{r}} + \DeclareUnicodeCharacter{1E5E}{\ubaraccent{R}} + \DeclareUnicodeCharacter{1E5F}{\ubaraccent{r}} + + \DeclareUnicodeCharacter{1E60}{\dotaccent{S}} + \DeclareUnicodeCharacter{1E61}{\dotaccent{s}} + \DeclareUnicodeCharacter{1E62}{\udotaccent{S}} + \DeclareUnicodeCharacter{1E63}{\udotaccent{s}} + \DeclareUnicodeCharacter{1E6A}{\dotaccent{T}} + \DeclareUnicodeCharacter{1E6B}{\dotaccent{t}} + \DeclareUnicodeCharacter{1E6C}{\udotaccent{T}} + \DeclareUnicodeCharacter{1E6D}{\udotaccent{t}} + \DeclareUnicodeCharacter{1E6E}{\ubaraccent{T}} + \DeclareUnicodeCharacter{1E6F}{\ubaraccent{t}} + + \DeclareUnicodeCharacter{1E7C}{\~V} + \DeclareUnicodeCharacter{1E7D}{\~v} + \DeclareUnicodeCharacter{1E7E}{\udotaccent{V}} + \DeclareUnicodeCharacter{1E7F}{\udotaccent{v}} + + \DeclareUnicodeCharacter{1E80}{\`W} + \DeclareUnicodeCharacter{1E81}{\`w} + \DeclareUnicodeCharacter{1E82}{\'W} + \DeclareUnicodeCharacter{1E83}{\'w} + \DeclareUnicodeCharacter{1E84}{\"W} + \DeclareUnicodeCharacter{1E85}{\"w} + \DeclareUnicodeCharacter{1E86}{\dotaccent{W}} + \DeclareUnicodeCharacter{1E87}{\dotaccent{w}} + \DeclareUnicodeCharacter{1E88}{\udotaccent{W}} + 
\DeclareUnicodeCharacter{1E89}{\udotaccent{w}} + \DeclareUnicodeCharacter{1E8A}{\dotaccent{X}} + \DeclareUnicodeCharacter{1E8B}{\dotaccent{x}} + \DeclareUnicodeCharacter{1E8C}{\"X} + \DeclareUnicodeCharacter{1E8D}{\"x} + \DeclareUnicodeCharacter{1E8E}{\dotaccent{Y}} + \DeclareUnicodeCharacter{1E8F}{\dotaccent{y}} + + \DeclareUnicodeCharacter{1E90}{\^Z} + \DeclareUnicodeCharacter{1E91}{\^z} + \DeclareUnicodeCharacter{1E92}{\udotaccent{Z}} + \DeclareUnicodeCharacter{1E93}{\udotaccent{z}} + \DeclareUnicodeCharacter{1E94}{\ubaraccent{Z}} + \DeclareUnicodeCharacter{1E95}{\ubaraccent{z}} + \DeclareUnicodeCharacter{1E96}{\ubaraccent{h}} + \DeclareUnicodeCharacter{1E97}{\"t} + \DeclareUnicodeCharacter{1E98}{\ringaccent{w}} + \DeclareUnicodeCharacter{1E99}{\ringaccent{y}} + + \DeclareUnicodeCharacter{1EA0}{\udotaccent{A}} + \DeclareUnicodeCharacter{1EA1}{\udotaccent{a}} + + \DeclareUnicodeCharacter{1EB8}{\udotaccent{E}} + \DeclareUnicodeCharacter{1EB9}{\udotaccent{e}} + \DeclareUnicodeCharacter{1EBC}{\~E} + \DeclareUnicodeCharacter{1EBD}{\~e} + + \DeclareUnicodeCharacter{1ECA}{\udotaccent{I}} + \DeclareUnicodeCharacter{1ECB}{\udotaccent{i}} + \DeclareUnicodeCharacter{1ECC}{\udotaccent{O}} + \DeclareUnicodeCharacter{1ECD}{\udotaccent{o}} + + \DeclareUnicodeCharacter{1EE4}{\udotaccent{U}} + \DeclareUnicodeCharacter{1EE5}{\udotaccent{u}} + + \DeclareUnicodeCharacter{1EF2}{\`Y} + \DeclareUnicodeCharacter{1EF3}{\`y} + \DeclareUnicodeCharacter{1EF4}{\udotaccent{Y}} + + \DeclareUnicodeCharacter{1EF8}{\~Y} + \DeclareUnicodeCharacter{1EF9}{\~y} + + \DeclareUnicodeCharacter{2013}{--} + \DeclareUnicodeCharacter{2014}{---} + \DeclareUnicodeCharacter{2018}{\quoteleft} + \DeclareUnicodeCharacter{2019}{\quoteright} + \DeclareUnicodeCharacter{201A}{\quotesinglbase} + \DeclareUnicodeCharacter{201C}{\quotedblleft} + \DeclareUnicodeCharacter{201D}{\quotedblright} + \DeclareUnicodeCharacter{201E}{\quotedblbase} + \DeclareUnicodeCharacter{2022}{\bullet} + \DeclareUnicodeCharacter{2026}{\dots} + 
\DeclareUnicodeCharacter{2039}{\guilsinglleft} + \DeclareUnicodeCharacter{203A}{\guilsinglright} + \DeclareUnicodeCharacter{20AC}{\euro} + + \DeclareUnicodeCharacter{2192}{\expansion} + \DeclareUnicodeCharacter{21D2}{\result} + + \DeclareUnicodeCharacter{2212}{\minus} + \DeclareUnicodeCharacter{2217}{\point} + \DeclareUnicodeCharacter{2261}{\equiv} +}% end of \utfeightchardefs + + +% US-ASCII character definitions. +\def\asciichardefs{% nothing need be done + \relax +} + +% Make non-ASCII characters printable again for compatibility with +% existing Texinfo documents that may use them, even without declaring a +% document encoding. +% +\setnonasciicharscatcode \other + + +\message{formatting,} + \newdimen\defaultparindent \defaultparindent = 15pt \chapheadingskip = 15pt plus 4pt minus 2pt @@ -6837,10 +9609,10 @@ % Prevent underfull vbox error messages. \vbadness = 10000 -% Don't be so finicky about underfull hboxes, either. -\hbadness = 2000 - -% Following George Bush, just get rid of widows and orphans. +% Don't be very finicky about underfull hboxes, either. +\hbadness = 6666 + +% Following George Bush, get rid of widows and orphans. \widowpenalty=10000 \clubpenalty=10000 @@ -6887,6 +9659,10 @@ \ifpdf \pdfpageheight #7\relax \pdfpagewidth #8\relax + % if we don't reset these, they will remain at "1 true in" of + % whatever layout pdftex was dumped with. + \pdfhorigin = 1 true in + \pdfvorigin = 1 true in \fi % \setleading{\textleading} @@ -6901,7 +9677,7 @@ \textleading = 13.2pt % % If page is nothing but text, make it come out even. 
- \internalpagesizes{46\baselineskip}{6in}% + \internalpagesizes{607.2pt}{6in}% that's 46 lines {\voffset}{.25in}% {\bindingoffset}{36pt}% {11in}{8.5in}% @@ -6913,7 +9689,7 @@ \textleading = 12pt % \internalpagesizes{7.5in}{5in}% - {\voffset}{.25in}% + {-.2in}{0in}% {\bindingoffset}{16pt}% {9.25in}{7in}% % @@ -6957,7 +9733,7 @@ % \global\normaloffset = -6mm % \global\bindingoffset = 10mm % @end tex - \internalpagesizes{51\baselineskip}{160mm} + \internalpagesizes{673.2pt}{160mm}% that's 51 lines {\voffset}{\hoffset}% {\bindingoffset}{44pt}% {297mm}{210mm}% @@ -7022,7 +9798,7 @@ \parskip = 3pt plus 2pt minus 1pt \setleading{\textleading}% % - \dimen0 = #1 + \dimen0 = #1\relax \advance\dimen0 by \voffset % \dimen2 = \hsize @@ -7041,25 +9817,21 @@ \message{and turning on texinfo input format.} +\def^^L{\par} % remove \outer, so ^L can appear in an @comment + +% DEL is a comment character, in case @c does not suffice. +\catcode`\^^? = 14 + % Define macros to output various characters with catcode for normal text. 
-\catcode`\"=\other -\catcode`\~=\other -\catcode`\^=\other -\catcode`\_=\other -\catcode`\|=\other -\catcode`\<=\other -\catcode`\>=\other -\catcode`\+=\other -\catcode`\$=\other -\def\normaldoublequote{"} -\def\normaltilde{~} -\def\normalcaret{^} -\def\normalunderscore{_} -\def\normalverticalbar{|} -\def\normalless{<} -\def\normalgreater{>} -\def\normalplus{+} -\def\normaldollar{$}%$ font-lock fix +\catcode`\"=\other \def\normaldoublequote{"} +\catcode`\$=\other \def\normaldollar{$}%$ font-lock fix +\catcode`\+=\other \def\normalplus{+} +\catcode`\<=\other \def\normalless{<} +\catcode`\>=\other \def\normalgreater{>} +\catcode`\^=\other \def\normalcaret{^} +\catcode`\_=\other \def\normalunderscore{_} +\catcode`\|=\other \def\normalverticalbar{|} +\catcode`\~=\other \def\normaltilde{~} % This macro is used to make a character print one way in \tt % (where it can probably be output as-is), and another way in other fonts, @@ -7117,6 +9889,13 @@ % \otherifyactive is called near the end of this file. \def\otherifyactive{\catcode`+=\other \catcode`\_=\other} +% Used sometimes to turn off (effectively) the active characters even after +% parsing them. +\def\turnoffactive{% + \normalturnoffactive + \otherbackslash +} + \catcode`\@=0 % \backslashcurfont outputs one backslash character in current font, @@ -7124,45 +9903,52 @@ \global\chardef\backslashcurfont=`\\ \global\let\rawbackslashxx=\backslashcurfont % let existing .??s files work -% \rawbackslash defines an active \ to do \backslashcurfont. -% \otherbackslash defines an active \ to be a literal `\' character with -% catcode other. -{\catcode`\\=\active - @gdef@rawbackslash{@let\=@backslashcurfont} - @gdef@otherbackslash{@let\=@realbackslash} -} - % \realbackslash is an actual character `\' with catcode other, and % \doublebackslash is two of them (for the pdf outlines). {\catcode`\\=\other @gdef@realbackslash{\} @gdef@doublebackslash{\\}} -% \normalbackslash outputs one backslash in fixed width font.
-\def\normalbackslash{{\tt\backslashcurfont}} - -\catcode`\\=\active - -% Used sometimes to turn off (effectively) the active characters -% even after parsing them. -@def@turnoffactive{% +% In texinfo, backslash is an active character; it prints the backslash +% in fixed width font. +\catcode`\\=\active % @ for escape char from now on. + +% The story here is that in math mode, the \char of \backslashcurfont +% ends up printing the roman \ from the math symbol font (because \char +% in math mode uses the \mathcode, and plain.tex sets +% \mathcode`\\="026E). It seems better for @backslashchar{} to always +% print a typewriter backslash, hence we use an explicit \mathchar, +% which is the decimal equivalent of "715c (class 7, e.g., use \fam; +% ignored family value; char position "5C). We can't use " for the +% usual hex value because it has already been made active. +@def@normalbackslash{{@tt @ifmmode @mathchar29020 @else @backslashcurfont @fi}} +@let@backslashchar = @normalbackslash % @backslashchar{} is for user documents. + +% On startup, @fixbackslash assigns: +% @let \ = @normalbackslash +% \rawbackslash defines an active \ to do \backslashcurfont. +% \otherbackslash defines an active \ to be a literal `\' character with +% catcode other. We switch back and forth between these. +@gdef@rawbackslash{@let\=@backslashcurfont} +@gdef@otherbackslash{@let\=@realbackslash} + +% Same as @turnoffactive except outputs \ as {\tt\char`\\} instead of +% the literal character `\'.
+% +@def@normalturnoffactive{% @let"=@normaldoublequote - @let\=@realbackslash - @let~=@normaltilde + @let$=@normaldollar %$ font-lock fix + @let+=@normalplus + @let<=@normalless + @let>=@normalgreater + @let\=@normalbackslash @let^=@normalcaret @let_=@normalunderscore @let|=@normalverticalbar - @let<=@normalless - @let>=@normalgreater - @let+=@normalplus - @let$=@normaldollar %$ font-lock fix + @let~=@normaltilde + @markupsetuplqdefault + @markupsetuprqdefault @unsepspaces } -% Same as @turnoffactive except outputs \ as {\tt\char`\\} instead of -% the literal character `\'. (Thus, \ is not expandable when this is in -% effect.) -% -@def@normalturnoffactive{@turnoffactive @let\=@normalbackslash} - % Make _ and + \other characters, temporarily. % This is canceled by @fixbackslash. @otherifyactive @@ -7175,7 +9961,7 @@ @global@let\ = @eatinput % On the other hand, perhaps the file did not have a `\input texinfo'. Then -% the first `\{ in the file would cause an error. This macro tries to fix +% the first `\' in the file would cause an error. This macro tries to fix % that, assuming it is called before the first `\' could plausibly occur. % Also turn back on active characters that might appear in the input % file name, in case not using a pre-dumped format. @@ -7189,11 +9975,28 @@ % Say @foo, not \foo, in error messages. @escapechar = `@@ +% These (along with & and #) are made active for url-breaking, so need +% active definitions as the normal characters. +@def@normaldot{.} +@def@normalquest{?} +@def@normalslash{/} + % These look ok in all fonts, so just make them not special. -@catcode`@& = @other -@catcode`@# = @other -@catcode`@% = @other - +% @hashchar{} gets its own user-level command, because of #line.
+@catcode`@& = @other @def@normalamp{&} +@catcode`@# = @other @def@normalhash{#} +@catcode`@% = @other @def@normalpercent{%} + +@let @hashchar = @normalhash + +@c Finally, make ` and ' active, so that txicodequoteundirected and +@c txicodequotebacktick work right in, e.g., @w{@code{`foo'}}. If we +@c don't make ` and ' active, @code will not get them as active chars. +@c Do this last of all since we use ` in the previous @catcode assignments. +@catcode`@'=@active +@catcode`@`=@active +@markupsetuplqdefault +@markupsetuprqdefault @c Local variables: @c eval: (add-hook 'write-file-hooks 'time-stamp) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 20:37:48 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 20:37:48 +0100 (CET) Subject: [Python-checkins] cpython (2.7): Issue #13555: cPickle now supports files larger than 2 GiB. Message-ID: <3Z5Dfh2WfbzQLn@mail.python.org> http://hg.python.org/cpython/rev/680959a3ae2e changeset: 82180:680959a3ae2e branch: 2.7 parent: 82173:035cbc654889 user: Serhiy Storchaka date: Tue Feb 12 21:36:47 2013 +0200 summary: Issue #13555: cPickle now supports files larger than 2 GiB.
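The fix summarized above has two parts: internal length fields in cPickle are widened from ``int`` to ``Py_ssize_t``, and I/O through the cStringIO C API (whose read/write hooks take ``int`` counts) is split into ``INT_MAX``-sized chunks. As a rough illustration of the chunking part only, here is a Python sketch; the ``write_chunked`` name and the ``write_limited`` callback are hypothetical stand-ins, not part of the patch:

```python
INT_MAX = 2**31 - 1  # largest count accepted by an int-based write API

def write_chunked(write_limited, data, limit=INT_MAX):
    """Write *data* through a callback that accepts at most *limit* bytes
    per call, mirroring the chunking loop added to write_cStringIO.
    Returns the total number of bytes written, or -1 on a short write."""
    total = len(data)
    view = memoryview(data)
    while len(view) > limit:
        # Peel off one maximal chunk; a short write signals failure.
        if write_limited(view[:limit]) != limit:
            return -1
        view = view[limit:]
    if write_limited(view) != len(view):
        return -1
    return total
```

With ``limit`` left at its default, a payload larger than 2 GiB is delivered as several ``INT_MAX``-byte calls plus a final remainder, which is how the patched code stays within cStringIO's ``int``-sized interface.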
files: Lib/test/pickletester.py | 31 ++- Lib/test/test_cpickle.py | 17 +- Lib/test/test_pickle.py | 20 +- Misc/NEWS | 2 + Modules/cPickle.c | 286 +++++++++++++++----------- Modules/cStringIO.c | 7 +- 6 files changed, 225 insertions(+), 138 deletions(-) diff --git a/Lib/test/pickletester.py b/Lib/test/pickletester.py --- a/Lib/test/pickletester.py +++ b/Lib/test/pickletester.py @@ -6,7 +6,8 @@ import pickletools import copy_reg -from test.test_support import TestFailed, have_unicode, TESTFN +from test.test_support import (TestFailed, have_unicode, TESTFN, _2G, _1M, + precisionbigmemtest) # Tests that try a number of pickle protocols should have a # for proto in protocols: @@ -1280,3 +1281,31 @@ f.write(pickled2) f.seek(0) self.assertEqual(unpickler.load(), data2) + +class BigmemPickleTests(unittest.TestCase): + + # Memory requirements: 1 byte per character for input strings, 1 byte + # for pickled data, 1 byte for unpickled strings, 1 byte for internal + # buffer and 1 byte of free space for resizing of internal buffer. 
+ + @precisionbigmemtest(size=_2G + 100*_1M, memuse=5) + def test_huge_strlist(self, size): + chunksize = 2**20 + data = [] + while size > chunksize: + data.append('x' * chunksize) + size -= chunksize + chunksize += 1 + data.append('y' * size) + + try: + for proto in protocols: + try: + pickled = self.dumps(data, proto) + res = self.loads(pickled) + self.assertEqual(res, data) + finally: + res = None + pickled = None + finally: + data = None diff --git a/Lib/test/test_cpickle.py b/Lib/test/test_cpickle.py --- a/Lib/test/test_cpickle.py +++ b/Lib/test/test_cpickle.py @@ -1,7 +1,9 @@ import cPickle, unittest from cStringIO import StringIO -from test.pickletester import AbstractPickleTests, AbstractPickleModuleTests -from test.pickletester import AbstractPicklerUnpicklerObjectTests +from test.pickletester import (AbstractPickleTests, + AbstractPickleModuleTests, + AbstractPicklerUnpicklerObjectTests, + BigmemPickleTests) from test import test_support class cPickleTests(AbstractPickleTests, AbstractPickleModuleTests): @@ -101,6 +103,16 @@ pickler_class = cPickle.Pickler unpickler_class = cPickle.Unpickler +class cPickleBigmemPickleTests(BigmemPickleTests): + + def dumps(self, arg, proto=0, fast=0): + # Ignore fast + return cPickle.dumps(arg, proto) + + def loads(self, buf): + # Ignore fast + return cPickle.loads(buf) + class Node(object): pass @@ -133,6 +145,7 @@ cPickleFastPicklerTests, cPickleDeepRecursive, cPicklePicklerUnpicklerObjectTests, + cPickleBigmemPickleTests, ) if __name__ == "__main__": diff --git a/Lib/test/test_pickle.py b/Lib/test/test_pickle.py --- a/Lib/test/test_pickle.py +++ b/Lib/test/test_pickle.py @@ -3,10 +3,11 @@ from test import test_support -from test.pickletester import AbstractPickleTests -from test.pickletester import AbstractPickleModuleTests -from test.pickletester import AbstractPersistentPicklerTests -from test.pickletester import AbstractPicklerUnpicklerObjectTests +from test.pickletester import (AbstractPickleTests, + 
AbstractPickleModuleTests, + AbstractPersistentPicklerTests, + AbstractPicklerUnpicklerObjectTests, + BigmemPickleTests) class PickleTests(AbstractPickleTests, AbstractPickleModuleTests): @@ -66,6 +67,16 @@ pickler_class = pickle.Pickler unpickler_class = pickle.Unpickler +class PickleBigmemPickleTests(BigmemPickleTests): + + def dumps(self, arg, proto=0, fast=0): + # Ignore fast + return pickle.dumps(arg, proto) + + def loads(self, buf): + # Ignore fast + return pickle.loads(buf) + def test_main(): test_support.run_unittest( @@ -73,6 +84,7 @@ PicklerTests, PersPicklerTests, PicklerUnpicklerObjectTests, + PickleBigmemPickleTests, ) test_support.run_doctest(pickle) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,8 @@ Library ------- +- Issue #13555: cPickle now supports files larger than 2 GiB. + - Issue #17052: unittest discovery should use self.testLoader. - Issue #4591: Uid and gid values larger than 2**31 are supported now. diff --git a/Modules/cPickle.c b/Modules/cPickle.c --- a/Modules/cPickle.c +++ b/Modules/cPickle.c @@ -139,15 +139,15 @@ typedef struct { PyObject_HEAD - int length; /* number of initial slots in data currently used */ - int size; /* number of slots in data allocated */ + Py_ssize_t length; /* number of initial slots in data currently used */ + Py_ssize_t size; /* number of slots in data allocated */ PyObject **data; } Pdata; static void Pdata_dealloc(Pdata *self) { - int i; + Py_ssize_t i; PyObject **p; for (i = self->length, p = self->data; --i >= 0; p++) { @@ -193,9 +193,9 @@ * number of items, this is a (non-erroneous) NOP. 
*/ static int -Pdata_clear(Pdata *self, int clearto) +Pdata_clear(Pdata *self, Py_ssize_t clearto) { - int i; + Py_ssize_t i; PyObject **p; if (clearto < 0) return stackUnderflow(); @@ -214,18 +214,17 @@ static int Pdata_grow(Pdata *self) { - int bigger; - size_t nbytes; + Py_ssize_t bigger; + Py_ssize_t nbytes; + PyObject **tmp; + if (self->size > (PY_SSIZE_T_MAX >> 1)) + goto nomemory; bigger = self->size << 1; - if (bigger <= 0) /* was 0, or new value overflows */ + if (bigger > (PY_SSIZE_T_MAX / sizeof(PyObject *))) goto nomemory; - if ((int)(size_t)bigger != bigger) - goto nomemory; - nbytes = (size_t)bigger * sizeof(PyObject *); - if (nbytes / sizeof(PyObject *) != (size_t)bigger) - goto nomemory; + nbytes = bigger * sizeof(PyObject *); tmp = realloc(self->data, nbytes); if (tmp == NULL) goto nomemory; @@ -280,10 +279,10 @@ static PyObject * -Pdata_popTuple(Pdata *self, int start) +Pdata_popTuple(Pdata *self, Py_ssize_t start) { PyObject *r; - int i, j, l; + Py_ssize_t i, j, l; l = self->length-start; r = PyTuple_New(l); @@ -297,10 +296,10 @@ } static PyObject * -Pdata_popList(Pdata *self, int start) +Pdata_popList(Pdata *self, Py_ssize_t start) { PyObject *r; - int i, j, l; + Py_ssize_t i, j, l; l=self->length-start; if (!( r=PyList_New(l))) return NULL; @@ -347,9 +346,9 @@ int bin; int fast; /* Fast mode doesn't save in memo, don't use if circ ref */ - int (*write_func)(struct Picklerobject *, const char *, Py_ssize_t); + Py_ssize_t (*write_func)(struct Picklerobject *, const char *, Py_ssize_t); char *write_buf; - int buf_size; + Py_ssize_t buf_size; PyObject *dispatch_table; int fast_container; /* count nested container dumps */ PyObject *fast_memo; @@ -373,12 +372,12 @@ PyObject *mark; PyObject *pers_func; PyObject *last_string; - int *marks; - int num_marks; - int marks_size; + Py_ssize_t *marks; + Py_ssize_t num_marks; + Py_ssize_t marks_size; Py_ssize_t (*read_func)(struct Unpicklerobject *, char **, Py_ssize_t); Py_ssize_t (*readline_func)(struct 
Unpicklerobject *, char **); - int buf_size; + Py_ssize_t buf_size; char *buf; PyObject *find_class; } Unpicklerobject; @@ -424,7 +423,7 @@ return NULL; } -static int +static Py_ssize_t write_file(Picklerobject *self, const char *s, Py_ssize_t n) { size_t nbyteswritten; @@ -433,11 +432,6 @@ return 0; } - if (n > INT_MAX) { - /* String too large */ - return -1; - } - PyFile_IncUseCount((PyFileObject *)self->file); Py_BEGIN_ALLOW_THREADS nbyteswritten = fwrite(s, sizeof(char), n, self->fp); @@ -448,40 +442,44 @@ return -1; } - return (int)n; + return n; } -static int +static Py_ssize_t write_cStringIO(Picklerobject *self, const char *s, Py_ssize_t n) { + Py_ssize_t len = n; + if (s == NULL) { return 0; } + while (n > INT_MAX) { + if (PycStringIO->cwrite((PyObject *)self->file, s, INT_MAX) != INT_MAX) { + return -1; + } + n -= INT_MAX; + } + if (PycStringIO->cwrite((PyObject *)self->file, s, n) != n) { return -1; } - return (int)n; + return len; } -static int +static Py_ssize_t write_none(Picklerobject *self, const char *s, Py_ssize_t n) { if (s == NULL) return 0; - if (n > INT_MAX) return -1; - return (int)n; + return n; } -static int -write_other(Picklerobject *self, const char *s, Py_ssize_t _n) +static Py_ssize_t +write_other(Picklerobject *self, const char *s, Py_ssize_t n) { PyObject *py_str = 0, *junk = 0; - int n; - - if (_n > INT_MAX) - return -1; - n = (int)_n; + if (s == NULL) { if (!( self->buf_size )) return 0; py_str = PyString_FromStringAndSize(self->write_buf, @@ -490,7 +488,7 @@ return -1; } else { - if (self->buf_size && (n + self->buf_size) > WRITE_BUF_SIZE) { + if (self->buf_size && n > WRITE_BUF_SIZE - self->buf_size) { if (write_other(self, NULL, 0) < 0) return -1; } @@ -531,7 +529,7 @@ size_t nbytesread; if (self->buf_size == 0) { - int size; + Py_ssize_t size; size = ((n < 32) ? 
32 : n); if (!( self->buf = (char *)malloc(size))) { @@ -575,7 +573,7 @@ static Py_ssize_t readline_file(Unpicklerobject *self, char **s) { - int i; + Py_ssize_t i; if (self->buf_size == 0) { if (!( self->buf = (char *)malloc(40))) { @@ -587,7 +585,7 @@ i = 0; while (1) { - int bigger; + Py_ssize_t bigger; char *newbuf; for (; i < (self->buf_size - 1); i++) { if (feof(self->fp) || @@ -597,13 +595,13 @@ return i + 1; } } - bigger = self->buf_size << 1; - if (bigger <= 0) { /* overflow */ + if (self->buf_size < (PY_SSIZE_T_MAX >> 1)) { PyErr_NoMemory(); return -1; } + bigger = self->buf_size << 1; newbuf = (char *)realloc(self->buf, bigger); - if (!newbuf) { + if (newbuf == NULL) { PyErr_NoMemory(); return -1; } @@ -616,30 +614,63 @@ static Py_ssize_t read_cStringIO(Unpicklerobject *self, char **s, Py_ssize_t n) { - char *ptr; - - if (PycStringIO->cread((PyObject *)self->file, &ptr, n) != n) { - PyErr_SetNone(PyExc_EOFError); - return -1; - } - - *s = ptr; - - return n; + Py_ssize_t len = n; + char *start, *end = NULL; + + while (1) { + int k; + char *ptr; + if (n > INT_MAX) + k = INT_MAX; + else + k = (int)n; + if (PycStringIO->cread((PyObject *)self->file, &ptr, k) != k) { + PyErr_SetNone(PyExc_EOFError); + return -1; + } + if (end == NULL) + start = ptr; + else if (ptr != end) { + /* non-continuous area */ + return -1; + } + if (n <= INT_MAX) + break; + end = ptr + INT_MAX; + n -= INT_MAX; + } + + *s = start; + + return len; } static Py_ssize_t readline_cStringIO(Unpicklerobject *self, char **s) { - Py_ssize_t n; - char *ptr; - - if ((n = PycStringIO->creadline((PyObject *)self->file, &ptr)) < 0) { - return -1; - } - - *s = ptr; + Py_ssize_t n = 0; + char *start = NULL, *end = NULL; + + while (1) { + int k; + char *ptr; + if ((k = PycStringIO->creadline((PyObject *)self->file, &ptr)) < 0) { + return -1; + } + n += k; + if (end == NULL) + start = ptr; + else if (ptr != end) { + /* non-continuous area */ + return -1; + } + if (k == 0 || ptr[k - 1] == '\n') + break; 
+ end = ptr + k; + } + + *s = start; return n; } @@ -700,7 +731,7 @@ * The caller is responsible for free()'ing the return value. */ static char * -pystrndup(const char *s, int n) +pystrndup(const char *s, Py_ssize_t n) { char *r = (char *)malloc(n+1); if (r == NULL) @@ -715,7 +746,7 @@ get(Picklerobject *self, PyObject *id) { PyObject *value, *mv; - long c_value; + Py_ssize_t c_value; char s[30]; size_t len; @@ -735,7 +766,8 @@ if (!self->bin) { s[0] = GET; - PyOS_snprintf(s + 1, sizeof(s) - 1, "%ld\n", c_value); + PyOS_snprintf(s + 1, sizeof(s) - 1, + "%" PY_FORMAT_SIZE_T "d\n", c_value); len = strlen(s); } else if (Pdata_Check(self->file)) { @@ -780,8 +812,7 @@ put2(Picklerobject *self, PyObject *ob) { char c_str[30]; - int p; - size_t len; + Py_ssize_t len, p; int res = -1; PyObject *py_ob_id = 0, *memo_len = 0, *t = 0; @@ -818,7 +849,8 @@ if (!self->bin) { c_str[0] = PUT; - PyOS_snprintf(c_str + 1, sizeof(c_str) - 1, "%d\n", p); + PyOS_snprintf(c_str + 1, sizeof(c_str) - 1, + "%" PY_FORMAT_SIZE_T "d\n", p); len = strlen(c_str); } else if (Pdata_Check(self->file)) { @@ -994,7 +1026,7 @@ { char c_str[32]; long l = PyInt_AS_LONG((PyIntObject *)args); - int len = 0; + Py_ssize_t len = 0; if (!self->bin #if SIZEOF_LONG > 4 @@ -1201,7 +1233,7 @@ static int save_string(Picklerobject *self, PyObject *args, int doput) { - int size, len; + Py_ssize_t size, len; PyObject *repr=0; if ((size = PyString_Size(args)) < 0) @@ -1448,7 +1480,7 @@ static int store_tuple_elements(Picklerobject *self, PyObject *t, int len) { - int i; + Py_ssize_t i; int res = -1; /* guilty until proved innocent */ assert(PyTuple_Size(t) == len); @@ -1477,7 +1509,7 @@ save_tuple(Picklerobject *self, PyObject *args) { PyObject *py_tuple_id = NULL; - int len, i; + Py_ssize_t len, i; int res = -1; static char tuple = TUPLE; @@ -1690,7 +1722,7 @@ { int res = -1; char s[3]; - int len; + Py_ssize_t len; PyObject *iter; if (self->fast && !fast_save_enter(self, args)) @@ -1943,7 +1975,7 @@ { int res = -1; 
char s[3]; - int len; + Py_ssize_t len; if (self->fast && !fast_save_enter(self, args)) goto finally; @@ -2027,7 +2059,7 @@ if ((getinitargs_func = PyObject_GetAttr(args, __getinitargs___str))) { PyObject *element = 0; - int i, len; + Py_ssize_t i, len; if (!( class_args = PyObject_Call(getinitargs_func, empty_tuple, NULL))) @@ -2289,7 +2321,8 @@ save_pers(Picklerobject *self, PyObject *args, PyObject *f) { PyObject *pid = 0; - int size, res = -1; + Py_ssize_t size; + int res = -1; static char persid = PERSID, binpersid = BINPERSID; @@ -2431,7 +2464,7 @@ if (use_newobj) { PyObject *cls; PyObject *newargtup; - int n, i; + Py_ssize_t n, i; /* Sanity checks. */ n = PyTuple_Size(argtup); @@ -2815,7 +2848,7 @@ static PyObject * Pickle_getvalue(Picklerobject *self, PyObject *args) { - int l, i, rsize, ssize, clear=1, lm; + Py_ssize_t l, i, rsize, ssize, clear=1, lm; long ik; PyObject *k, *r; char *s, *p, *have_get; @@ -3314,7 +3347,7 @@ return global; } -static int +static Py_ssize_t marker(Unpicklerobject *self) { if (self->num_marks < 1) { @@ -3345,7 +3378,8 @@ { PyObject *py_int = 0; char *endptr, *s; - int len, res = -1; + Py_ssize_t len; + int res = -1; long l; if ((len = self->readline_func(self, &s)) < 0) return -1; @@ -3477,7 +3511,8 @@ { PyObject *l = 0; char *end, *s; - int len, res = -1; + Py_ssize_t len; + int res = -1; if ((len = self->readline_func(self, &s)) < 0) return -1; if (len < 2) return bad_readline(); @@ -3541,7 +3576,8 @@ { PyObject *py_float = 0; char *endptr, *s; - int len, res = -1; + Py_ssize_t len; + int res = -1; double d; if ((len = self->readline_func(self, &s)) < 0) return -1; @@ -3597,7 +3633,8 @@ load_string(Unpicklerobject *self) { PyObject *str = 0; - int len, res = -1; + Py_ssize_t len; + int res = -1; char *s, *p; if ((len = self->readline_func(self, &s)) < 0) return -1; @@ -3639,7 +3676,7 @@ load_binstring(Unpicklerobject *self) { PyObject *py_string = 0; - long l; + Py_ssize_t l; char *s; if (self->read_func(self, &s, 4) < 0) 
return -1; @@ -3691,20 +3728,17 @@ load_unicode(Unpicklerobject *self) { PyObject *str = 0; - int len, res = -1; + Py_ssize_t len; char *s; if ((len = self->readline_func(self, &s)) < 0) return -1; if (len < 1) return bad_readline(); if (!( str = PyUnicode_DecodeRawUnicodeEscape(s, len - 1, NULL))) - goto finally; + return -1; PDATA_PUSH(self->stack, str, -1); return 0; - - finally: - return res; } #endif @@ -3714,7 +3748,7 @@ load_binunicode(Unpicklerobject *self) { PyObject *unicode; - long l; + Py_ssize_t l; char *s; if (self->read_func(self, &s, 4) < 0) return -1; @@ -3745,7 +3779,7 @@ load_tuple(Unpicklerobject *self) { PyObject *tup; - int i; + Py_ssize_t i; if ((i = marker(self)) < 0) return -1; if (!( tup=Pdata_popTuple(self->stack, i))) return -1; @@ -3798,7 +3832,7 @@ load_list(Unpicklerobject *self) { PyObject *list = 0; - int i; + Py_ssize_t i; if ((i = marker(self)) < 0) return -1; if (!( list=Pdata_popList(self->stack, i))) return -1; @@ -3810,7 +3844,7 @@ load_dict(Unpicklerobject *self) { PyObject *dict, *key, *value; - int i, j, k; + Py_ssize_t i, j, k; if ((i = marker(self)) < 0) return -1; j=self->stack->length; @@ -3886,7 +3920,7 @@ load_obj(Unpicklerobject *self) { PyObject *class, *tup, *obj=0; - int i; + Py_ssize_t i; if ((i = marker(self)) < 0) return -1; if (!( tup=Pdata_popTuple(self->stack, i+1))) return -1; @@ -3907,7 +3941,7 @@ load_inst(Unpicklerobject *self) { PyObject *tup, *class=0, *obj=0, *module_name, *class_name; - int i, len; + Py_ssize_t i, len; char *s; if ((i = marker(self)) < 0) return -1; @@ -3993,7 +4027,7 @@ load_global(Unpicklerobject *self) { PyObject *class = 0, *module_name = 0, *class_name = 0; - int len; + Py_ssize_t len; char *s; if ((len = self->readline_func(self, &s)) < 0) return -1; @@ -4024,7 +4058,7 @@ load_persid(Unpicklerobject *self) { PyObject *pid = 0; - int len; + Py_ssize_t len; char *s; if (self->pers_func) { @@ -4102,7 +4136,7 @@ static int load_pop(Unpicklerobject *self) { - int len = 
self->stack->length; + Py_ssize_t len = self->stack->length; /* Note that we split the (pickle.py) stack into two stacks, an object stack and a mark stack. We have to be clever and @@ -4127,7 +4161,7 @@ static int load_pop_mark(Unpicklerobject *self) { - int i; + Py_ssize_t i; if ((i = marker(self)) < 0) return -1; @@ -4142,7 +4176,7 @@ load_dup(Unpicklerobject *self) { PyObject *last; - int len; + Py_ssize_t len; if ((len = self->stack->length) <= 0) return stackUnderflow(); last=self->stack->data[len-1]; @@ -4156,7 +4190,7 @@ load_get(Unpicklerobject *self) { PyObject *py_str = 0, *value = 0; - int len; + Py_ssize_t len; char *s; int rc; @@ -4214,7 +4248,7 @@ PyObject *py_key = 0, *value = 0; unsigned char c; char *s; - long key; + Py_ssize_t key; int rc; if (self->read_func(self, &s, 4) < 0) return -1; @@ -4317,7 +4351,7 @@ load_put(Unpicklerobject *self) { PyObject *py_str = 0, *value = 0; - int len, l; + Py_ssize_t len, l; char *s; if ((l = self->readline_func(self, &s)) < 0) return -1; @@ -4337,7 +4371,7 @@ PyObject *py_key = 0, *value = 0; unsigned char key; char *s; - int len; + Py_ssize_t len; if (self->read_func(self, &s, 1) < 0) return -1; if (!( (len=self->stack->length) > 0 )) return stackUnderflow(); @@ -4356,10 +4390,10 @@ load_long_binput(Unpicklerobject *self) { PyObject *py_key = 0, *value = 0; - long key; + Py_ssize_t key; unsigned char c; char *s; - int len; + Py_ssize_t len; if (self->read_func(self, &s, 4) < 0) return -1; if (!( len=self->stack->length )) return stackUnderflow(); @@ -4382,10 +4416,10 @@ static int -do_append(Unpicklerobject *self, int x) +do_append(Unpicklerobject *self, Py_ssize_t x) { PyObject *value = 0, *list = 0, *append_method = 0; - int len, i; + Py_ssize_t len, i; len=self->stack->length; if (!( len >= x && x > 0 )) return stackUnderflow(); @@ -4451,11 +4485,11 @@ } -static int -do_setitems(Unpicklerobject *self, int x) +static Py_ssize_t +do_setitems(Unpicklerobject *self, Py_ssize_t x) { PyObject *value = 0, *key = 
0, *dict = 0; - int len, i, r=0; + Py_ssize_t len, i, r=0; if (!( (len=self->stack->length) >= x && x > 0 )) return stackUnderflow(); @@ -4496,8 +4530,8 @@ PyObject *state, *inst, *slotstate; PyObject *__setstate__; PyObject *d_key, *d_value; + int res = -1; Py_ssize_t i; - int res = -1; /* Stack is ... instance, state. We want to leave instance at * the stack top, possibly mutated via instance.__setstate__(state). @@ -4596,7 +4630,7 @@ static int load_mark(Unpicklerobject *self) { - int s; + Py_ssize_t s; /* Note that we split the (pickle.py) stack into two stacks, an object stack and a mark stack. Here we push a mark onto the @@ -4604,14 +4638,14 @@ */ if ((self->num_marks + 1) >= self->marks_size) { - int *marks; + Py_ssize_t *marks; s=self->marks_size+20; if (s <= self->num_marks) s=self->num_marks + 1; if (self->marks == NULL) - marks=(int *)malloc(s * sizeof(int)); + marks=(Py_ssize_t *)malloc(s * sizeof(Py_ssize_t)); else - marks=(int *)realloc(self->marks, - s * sizeof(int)); + marks=(Py_ssize_t *)realloc(self->marks, + s * sizeof(Py_ssize_t)); if (!marks) { PyErr_NoMemory(); return -1; @@ -4981,7 +5015,7 @@ static int noload_obj(Unpicklerobject *self) { - int i; + Py_ssize_t i; if ((i = marker(self)) < 0) return -1; return Pdata_clear(self->stack, i+1); @@ -4991,7 +5025,7 @@ static int noload_inst(Unpicklerobject *self) { - int i; + Py_ssize_t i; char *s; if ((i = marker(self)) < 0) return -1; @@ -5068,7 +5102,7 @@ static int noload_appends(Unpicklerobject *self) { - int i; + Py_ssize_t i; if ((i = marker(self)) < 0) return -1; return Pdata_clear(self->stack, i); } @@ -5082,7 +5116,7 @@ static int noload_setitems(Unpicklerobject *self) { - int i; + Py_ssize_t i; if ((i = marker(self)) < 0) return -1; return Pdata_clear(self->stack, i); } diff --git a/Modules/cStringIO.c b/Modules/cStringIO.c --- a/Modules/cStringIO.c +++ b/Modules/cStringIO.c @@ -210,11 +210,8 @@ if (n < end) n++; len = n - start; - if (len > INT_MAX) { - 
PyErr_SetString(PyExc_OverflowError, - "length too large"); - return -1; - } + if (len > INT_MAX) + len = INT_MAX; *output=start; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 23:42:26 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 23:42:26 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2ODAw?= =?utf-8?q?=3A_tempfile=2Egettempdir=28=29_no_longer_left_temporary_files_?= =?utf-8?q?when?= Message-ID: <3Z5Jlk1yB5zMt7@mail.python.org> http://hg.python.org/cpython/rev/b368fc93dca8 changeset: 82181:b368fc93dca8 branch: 2.7 user: Serhiy Storchaka date: Wed Feb 13 00:34:46 2013 +0200 summary: Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. files: Lib/tempfile.py | 18 +++++--- Lib/test/test_support.py | 27 +++++++++++++ Lib/test/test_tempfile.py | 56 ++++++++++++++++++++++++-- Misc/ACKS | 1 + Misc/NEWS | 3 + 5 files changed, 93 insertions(+), 12 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -29,6 +29,7 @@ # Imports. 
+import io as _io import os as _os import errno as _errno from random import Random as _Random @@ -193,14 +194,17 @@ name = namer.next() filename = _os.path.join(dir, name) try: - fd = _os.open(filename, flags, 0600) - fp = _os.fdopen(fd, 'w') - fp.write('blat') - fp.close() - _os.unlink(filename) - del fp, fd + fd = _os.open(filename, flags, 0o600) + try: + try: + fp = _io.open(fd, 'wb', buffering=0, closefd=False) + fp.write(b'blat') + finally: + _os.close(fd) + finally: + _os.unlink(filename) return dir - except (OSError, IOError), e: + except (OSError, IOError) as e: if e[0] != _errno.EEXIST: break # no point trying more names in this directory pass diff --git a/Lib/test/test_support.py b/Lib/test/test_support.py --- a/Lib/test/test_support.py +++ b/Lib/test/test_support.py @@ -1298,6 +1298,33 @@ except: break + at contextlib.contextmanager +def swap_attr(obj, attr, new_val): + """Temporary swap out an attribute with a new object. + + Usage: + with swap_attr(obj, "attr", 5): + ... + + This will set obj.attr to 5 for the duration of the with: block, + restoring the old value at the end of the block. If `attr` doesn't + exist on `obj`, it will be created and then deleted at the end of the + block. + """ + if hasattr(obj, attr): + real_val = getattr(obj, attr) + setattr(obj, attr, new_val) + try: + yield + finally: + setattr(obj, attr, real_val) + else: + setattr(obj, attr, new_val) + try: + yield + finally: + delattr(obj, attr) + def py3k_bytes(b): """Emulate the py3k bytes() constructor. diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -1,13 +1,16 @@ # tempfile.py unit tests. 
import tempfile +import errno +import io import os import signal +import shutil import sys import re import warnings import unittest -from test import test_support +from test import test_support as support warnings.filterwarnings("ignore", category=RuntimeWarning, @@ -177,7 +180,7 @@ # _candidate_tempdir_list contains the expected directories # Make sure the interesting environment variables are all set. - with test_support.EnvironmentVarGuard() as env: + with support.EnvironmentVarGuard() as env: for envname in 'TMPDIR', 'TEMP', 'TMP': dirname = os.getenv(envname) if not dirname: @@ -202,8 +205,51 @@ test_classes.append(test__candidate_tempdir_list) +# We test _get_default_tempdir some more by testing gettempdir. -# We test _get_default_tempdir by testing gettempdir. +class TestGetDefaultTempdir(TC): + """Test _get_default_tempdir().""" + + def test_no_files_left_behind(self): + # use a private empty directory + our_temp_directory = tempfile.mkdtemp() + try: + # force _get_default_tempdir() to consider our empty directory + def our_candidate_list(): + return [our_temp_directory] + + with support.swap_attr(tempfile, "_candidate_tempdir_list", + our_candidate_list): + # verify our directory is empty after _get_default_tempdir() + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) + + def raise_OSError(*args, **kwargs): + raise OSError(-1) + + with support.swap_attr(io, "open", raise_OSError): + # test again with failing io.open() + with self.assertRaises(IOError) as cm: + tempfile._get_default_tempdir() + self.assertEqual(cm.exception.errno, errno.ENOENT) + self.assertEqual(os.listdir(our_temp_directory), []) + + open = io.open + def bad_writer(*args, **kwargs): + fp = open(*args, **kwargs) + fp.write = raise_OSError + return fp + + with support.swap_attr(io, "open", bad_writer): + # test again with failing write() + with self.assertRaises(IOError) as cm: + tempfile._get_default_tempdir() + self.assertEqual(cm.exception.errno, 
errno.ENOENT) + self.assertEqual(os.listdir(our_temp_directory), []) + finally: + shutil.rmtree(our_temp_directory) + +test_classes.append(TestGetDefaultTempdir) class test__get_candidate_names(TC): @@ -299,7 +345,7 @@ if not has_spawnl: return # ugh, can't use SkipTest. - if test_support.verbose: + if support.verbose: v="v" else: v="q" @@ -913,7 +959,7 @@ test_classes.append(test_TemporaryFile) def test_main(): - test_support.run_unittest(*test_classes) + support.run_unittest(*test_classes) if __name__ == "__main__": test_main() diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -974,6 +974,7 @@ Paul Swartz Thenault Sylvain P?ter Szab? +Amir Szekely Arfrever Frehtes Taifersar Arahesis Geoff Talvola William Tanksley diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -202,6 +202,9 @@ Library ------- +- Issue #16800: tempfile.gettempdir() no longer left temporary files when + the disk is full. Original patch by Amir Szekely. + - Issue #13555: cPickle now supports files larger than 2 GiB. - Issue #17052: unittest discovery should use self.testLoader. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 23:42:27 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 23:42:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2ODAw?= =?utf-8?q?=3A_tempfile=2Egettempdir=28=29_no_longer_left_temporary_files_?= =?utf-8?q?when?= Message-ID: <3Z5Jll6QM7zMt7@mail.python.org> http://hg.python.org/cpython/rev/377123f10820 changeset: 82182:377123f10820 branch: 3.2 parent: 82172:3893ab574c55 user: Serhiy Storchaka date: Wed Feb 13 00:35:30 2013 +0200 summary: Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. 
files: Lib/tempfile.py | 13 ++++--- Lib/test/test_tempfile.py | 44 ++++++++++++++++++++++++++- Misc/ACKS | 1 + Misc/NEWS | 3 + 4 files changed, 55 insertions(+), 6 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -175,11 +175,14 @@ filename = _os.path.join(dir, name) try: fd = _os.open(filename, _bin_openflags, 0o600) - fp = _io.open(fd, 'wb') - fp.write(b'blat') - fp.close() - _os.unlink(filename) - del fp, fd + try: + try: + fp = _io.open(fd, 'wb', buffering=0, closefd=False) + fp.write(b'blat') + finally: + _os.close(fd) + finally: + _os.unlink(filename) return dir except (OSError, IOError) as e: if e.args[0] != _errno.EEXIST: diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -1,5 +1,7 @@ # tempfile.py unit tests. import tempfile +import errno +import io import os import signal import sys @@ -211,8 +213,48 @@ test_classes.append(test__candidate_tempdir_list) +# We test _get_default_tempdir some more by testing gettempdir. -# We test _get_default_tempdir by testing gettempdir. 
+class TestGetDefaultTempdir(TC): + """Test _get_default_tempdir().""" + + def test_no_files_left_behind(self): + # use a private empty directory + with tempfile.TemporaryDirectory() as our_temp_directory: + # force _get_default_tempdir() to consider our empty directory + def our_candidate_list(): + return [our_temp_directory] + + with support.swap_attr(tempfile, "_candidate_tempdir_list", + our_candidate_list): + # verify our directory is empty after _get_default_tempdir() + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) + + def raise_OSError(*args, **kwargs): + raise OSError(-1) + + with support.swap_attr(io, "open", raise_OSError): + # test again with failing io.open() + with self.assertRaises(IOError) as cm: + tempfile._get_default_tempdir() + self.assertEqual(cm.exception.args[0], errno.ENOENT) + self.assertEqual(os.listdir(our_temp_directory), []) + + open = io.open + def bad_writer(*args, **kwargs): + fp = open(*args, **kwargs) + fp.write = raise_OSError + return fp + + with support.swap_attr(io, "open", bad_writer): + # test again with failing write() + with self.assertRaises(IOError) as cm: + tempfile._get_default_tempdir() + self.assertEqual(cm.exception.errno, errno.ENOENT) + self.assertEqual(os.listdir(our_temp_directory), []) + +test_classes.append(TestGetDefaultTempdir) class test__get_candidate_names(TC): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1051,6 +1051,7 @@ Paul Swartz Thenault Sylvain P?ter Szab? +Amir Szekely Arfrever Frehtes Taifersar Arahesis Neil Tallim Geoff Talvola diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -221,6 +221,9 @@ Library ------- +- Issue #16800: tempfile.gettempdir() no longer left temporary files when + the disk is full. Original patch by Amir Szekely. + - Issue #16564: Fixed regression relative to Python2 in the operation of email.encoders.encode_7or8bit when used with binary data. 
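The nested try/finally structure in the tempfile patch above is what guarantees the probe file is removed even when the write fails. The same pattern restated as a standalone sketch (probe_dir is a hypothetical helper written for illustration, not part of tempfile):

```python
import io
import os
import tempfile

def probe_dir(dirname):
    """Return True if dirname is writable, leaving no file behind.

    Mirrors the cleanup structure of the patched
    tempfile._get_default_tempdir(): the fd is always closed and the
    probe file always unlinked, even if the write raises.
    """
    filename = os.path.join(dirname, "probe-file")
    fd = os.open(filename, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        try:
            with io.open(fd, "wb", closefd=False) as fp:
                fp.write(b"blat")
        finally:
            os.close(fd)  # close before unlink (matters on Windows)
    finally:
        os.unlink(filename)  # runs whether or not the write succeeded
    return True

d = tempfile.mkdtemp()
try:
    assert probe_dir(d) is True
    assert os.listdir(d) == []  # nothing left behind
finally:
    os.rmdir(d)
```

Closing the descriptor before unlinking matters on Windows, where an open file cannot be removed.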
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 23:42:29 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 23:42:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2316800=3A_tempfile=2Egettempdir=28=29_no_longer_left_t?= =?utf-8?q?emporary_files_when?= Message-ID: <3Z5Jln50zPzQ6H@mail.python.org> http://hg.python.org/cpython/rev/6f432bb11b28 changeset: 82183:6f432bb11b28 branch: 3.3 parent: 82174:574410153e73 parent: 82182:377123f10820 user: Serhiy Storchaka date: Wed Feb 13 00:37:29 2013 +0200 summary: Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. files: Lib/tempfile.py | 13 +++++--- Lib/test/test_tempfile.py | 40 ++++++++++++++++++++++++++- Misc/ACKS | 1 + Misc/NEWS | 3 ++ 4 files changed, 51 insertions(+), 6 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -172,11 +172,14 @@ filename = _os.path.join(dir, name) try: fd = _os.open(filename, _bin_openflags, 0o600) - fp = _io.open(fd, 'wb') - fp.write(b'blat') - fp.close() - _os.unlink(filename) - del fp, fd + try: + try: + fp = _io.open(fd, 'wb', buffering=0, closefd=False) + fp.write(b'blat') + finally: + _os.close(fd) + finally: + _os.unlink(filename) return dir except FileExistsError: pass diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -1,6 +1,7 @@ # tempfile.py unit tests. import tempfile import errno +import io import os import signal import sys @@ -198,7 +199,44 @@ # paths in this list. -# We test _get_default_tempdir by testing gettempdir. +# We test _get_default_tempdir some more by testing gettempdir. 
+ +class TestGetDefaultTempdir(BaseTestCase): + """Test _get_default_tempdir().""" + + def test_no_files_left_behind(self): + # use a private empty directory + with tempfile.TemporaryDirectory() as our_temp_directory: + # force _get_default_tempdir() to consider our empty directory + def our_candidate_list(): + return [our_temp_directory] + + with support.swap_attr(tempfile, "_candidate_tempdir_list", + our_candidate_list): + # verify our directory is empty after _get_default_tempdir() + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) + + def raise_OSError(*args, **kwargs): + raise OSError() + + with support.swap_attr(io, "open", raise_OSError): + # test again with failing io.open() + with self.assertRaises(FileNotFoundError): + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) + + open = io.open + def bad_writer(*args, **kwargs): + fp = open(*args, **kwargs) + fp.write = raise_OSError + return fp + + with support.swap_attr(io, "open", bad_writer): + # test again with failing write() + with self.assertRaises(FileNotFoundError): + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) class TestGetCandidateNames(BaseTestCase): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1163,6 +1163,7 @@ Paul Swartz Thenault Sylvain P?ter Szab? +Amir Szekely Arfrever Frehtes Taifersar Arahesis Neil Tallim Geoff Talvola diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -175,6 +175,9 @@ Library ------- +- Issue #16800: tempfile.gettempdir() no longer left temporary files when + the disk is full. Original patch by Amir Szekely. + - Issue #16564: Fixed regression relative to Python2 in the operation of email.encoders.encode_7or8bit when used with binary data. 
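The new tests drive the failure paths by monkey-patching io.open with test_support.swap_attr(), whose implementation appears in the 2.7 diff earlier in this digest. A self-contained restatement of that helper, plus the way the tests use it:

```python
import contextlib
import io

@contextlib.contextmanager
def swap_attr(obj, attr, new_val):
    """Temporarily replace obj.attr with new_val, restoring it (or
    deleting it, if it did not exist) when the block exits -- the same
    contract as test.support.swap_attr()."""
    if hasattr(obj, attr):
        real_val = getattr(obj, attr)
        setattr(obj, attr, new_val)
        try:
            yield
        finally:
            setattr(obj, attr, real_val)
    else:
        setattr(obj, attr, new_val)
        try:
            yield
        finally:
            delattr(obj, attr)

def raise_OSError(*args, **kwargs):
    raise OSError()

# io.open fails only while the with block is active.
failed = False
with swap_attr(io, "open", raise_OSError):
    try:
        io.open("does-not-matter", "rb")
    except OSError:
        failed = True
assert failed
assert io.open is not raise_OSError  # original restored on exit
```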
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 12 23:42:31 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 12 Feb 2013 23:42:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316800=3A_tempfile=2Egettempdir=28=29_no_longer_?= =?utf-8?q?left_temporary_files_when?= Message-ID: <3Z5Jlq2JPDzQ7h@mail.python.org> http://hg.python.org/cpython/rev/b66a5b41d82f changeset: 82184:b66a5b41d82f parent: 82179:7727be7613f9 parent: 82183:6f432bb11b28 user: Serhiy Storchaka date: Wed Feb 13 00:38:48 2013 +0200 summary: Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. files: Lib/tempfile.py | 13 +++++--- Lib/test/test_tempfile.py | 40 ++++++++++++++++++++++++++- Misc/ACKS | 1 + Misc/NEWS | 3 ++ 4 files changed, 51 insertions(+), 6 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -174,11 +174,14 @@ filename = _os.path.join(dir, name) try: fd = _os.open(filename, _bin_openflags, 0o600) - fp = _io.open(fd, 'wb') - fp.write(b'blat') - fp.close() - _os.unlink(filename) - del fp, fd + try: + try: + fp = _io.open(fd, 'wb', buffering=0, closefd=False) + fp.write(b'blat') + finally: + _os.close(fd) + finally: + _os.unlink(filename) return dir except FileExistsError: pass diff --git a/Lib/test/test_tempfile.py b/Lib/test/test_tempfile.py --- a/Lib/test/test_tempfile.py +++ b/Lib/test/test_tempfile.py @@ -1,6 +1,7 @@ # tempfile.py unit tests. import tempfile import errno +import io import os import signal import sys @@ -198,7 +199,44 @@ # paths in this list. -# We test _get_default_tempdir by testing gettempdir. +# We test _get_default_tempdir some more by testing gettempdir. 
+ +class TestGetDefaultTempdir(BaseTestCase): + """Test _get_default_tempdir().""" + + def test_no_files_left_behind(self): + # use a private empty directory + with tempfile.TemporaryDirectory() as our_temp_directory: + # force _get_default_tempdir() to consider our empty directory + def our_candidate_list(): + return [our_temp_directory] + + with support.swap_attr(tempfile, "_candidate_tempdir_list", + our_candidate_list): + # verify our directory is empty after _get_default_tempdir() + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) + + def raise_OSError(*args, **kwargs): + raise OSError() + + with support.swap_attr(io, "open", raise_OSError): + # test again with failing io.open() + with self.assertRaises(FileNotFoundError): + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) + + open = io.open + def bad_writer(*args, **kwargs): + fp = open(*args, **kwargs) + fp.write = raise_OSError + return fp + + with support.swap_attr(io, "open", bad_writer): + # test again with failing write() + with self.assertRaises(FileNotFoundError): + tempfile._get_default_tempdir() + self.assertEqual(os.listdir(our_temp_directory), []) class TestGetCandidateNames(BaseTestCase): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1175,6 +1175,7 @@ Paul Swartz Thenault Sylvain P?ter Szab? +Amir Szekely Arfrever Frehtes Taifersar Arahesis Neil Tallim Geoff Talvola diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -250,6 +250,9 @@ Library ------- +- Issue #16800: tempfile.gettempdir() no longer left temporary files when + the disk is full. Original patch by Amir Szekely. + - Issue #16564: Fixed regression relative to Python2 in the operation of email.encoders.encode_7or8bit when used with binary data. 
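One detail of the patched _get_default_tempdir() worth calling out is the closefd=False argument: the io wrapper borrows the descriptor rather than owning it, so only the explicit os.close(fd) in the finally block actually closes it. A small demonstration of that behavior:

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # closefd=False: closing the buffered wrapper must not close fd.
    with io.open(fd, "wb", closefd=False) as fp:
        fp.write(b"blat")
    # fd is still valid -- it can keep being used after the wrapper
    # has been closed and its buffer flushed.
    os.lseek(fd, 0, os.SEEK_SET)
    assert os.read(fd, 4) == b"blat"
finally:
    os.close(fd)
    os.unlink(path)
```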
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 00:02:27 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 00:02:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_for_issue_?= =?utf-8?q?=2316800=3A_Use_buffered_write_to_handle_EINTR=2E?= Message-ID: <3Z5KBq57mRzPp5@mail.python.org> http://hg.python.org/cpython/rev/a43f67e95ef0 changeset: 82185:a43f67e95ef0 branch: 2.7 parent: 82181:b368fc93dca8 user: Serhiy Storchaka date: Wed Feb 13 00:59:11 2013 +0200 summary: Fix for issue #16800: Use buffered write to handle EINTR. files: Lib/tempfile.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -197,8 +197,8 @@ fd = _os.open(filename, flags, 0o600) try: try: - fp = _io.open(fd, 'wb', buffering=0, closefd=False) - fp.write(b'blat') + with _io.open(fd, 'wb', closefd=False) as fp: + fp.write(b'blat') finally: _os.close(fd) finally: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 00:02:29 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 00:02:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Fix_for_issue_?= =?utf-8?q?=2316800=3A_Use_buffered_write_to_handle_EINTR=2E?= Message-ID: <3Z5KBs0fNxzPnQ@mail.python.org> http://hg.python.org/cpython/rev/4622206db91b changeset: 82186:4622206db91b branch: 3.2 parent: 82182:377123f10820 user: Serhiy Storchaka date: Wed Feb 13 00:59:26 2013 +0200 summary: Fix for issue #16800: Use buffered write to handle EINTR. 
files: Lib/tempfile.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -177,8 +177,8 @@ fd = _os.open(filename, _bin_openflags, 0o600) try: try: - fp = _io.open(fd, 'wb', buffering=0, closefd=False) - fp.write(b'blat') + with _io.open(fd, 'wb', closefd=False) as fp: + fp.write(b'blat') finally: _os.close(fd) finally: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 00:02:30 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 00:02:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fix_for_issue_=2316800=3A_Use_buffered_write_to_handle_EINTR?= =?utf-8?q?=2E?= Message-ID: <3Z5KBt3d0HzMvQ@mail.python.org> http://hg.python.org/cpython/rev/2fb03fe354e3 changeset: 82187:2fb03fe354e3 branch: 3.3 parent: 82183:6f432bb11b28 parent: 82186:4622206db91b user: Serhiy Storchaka date: Wed Feb 13 00:59:53 2013 +0200 summary: Fix for issue #16800: Use buffered write to handle EINTR. 
files: Lib/tempfile.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -174,8 +174,8 @@ fd = _os.open(filename, _bin_openflags, 0o600) try: try: - fp = _io.open(fd, 'wb', buffering=0, closefd=False) - fp.write(b'blat') + with _io.open(fd, 'wb', closefd=False) as fp: + fp.write(b'blat') finally: _os.close(fd) finally: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 00:02:31 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 00:02:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Fix_for_issue_=2316800=3A_Use_buffered_write_to_handle_E?= =?utf-8?q?INTR=2E?= Message-ID: <3Z5KBv6NyVzPwq@mail.python.org> http://hg.python.org/cpython/rev/fec33725f319 changeset: 82188:fec33725f319 parent: 82184:b66a5b41d82f parent: 82187:2fb03fe354e3 user: Serhiy Storchaka date: Wed Feb 13 01:00:17 2013 +0200 summary: Fix for issue #16800: Use buffered write to handle EINTR. 
files: Lib/tempfile.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -176,8 +176,8 @@ fd = _os.open(filename, _bin_openflags, 0o600) try: try: - fp = _io.open(fd, 'wb', buffering=0, closefd=False) - fp.write(b'blat') + with _io.open(fd, 'wb', closefd=False) as fp: + fp.write(b'blat') finally: _os.close(fd) finally: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 04:31:05 2013 From: python-checkins at python.org (daniel.holth) Date: Wed, 13 Feb 2013 04:31:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_pep-0427=3A_Require_metadata_?= =?utf-8?q?1=2E1_or_greater?= Message-ID: <3Z5R8n6g0SzNBG@mail.python.org> http://hg.python.org/peps/rev/c11b02eef533 changeset: 4736:c11b02eef533 user: Daniel Holth date: Tue Feb 12 22:30:53 2013 -0500 summary: pep-0427: Require metadata 1.1 or greater files: pep-0427.txt | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -163,8 +163,8 @@ ``b'#!python'`` in order to enjoy script wrapper generation and ``#!python`` rewriting at install time. They may have any or no extension. -#. ``{distribution}-{version}.dist-info/METADATA`` is Metadata version 1.2 - (PEP 345) or greater format metadata. +#. ``{distribution}-{version}.dist-info/METADATA`` is Metadata version 1.1 + (PEP 314, PEP 345, PEP 426) or greater format metadata. #. 
``{distribution}-{version}.dist-info/WHEEL`` is metadata about the archive itself:: -- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Wed Feb 13 06:02:19 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Wed, 13 Feb 2013 06:02:19 +0100 Subject: [Python-checkins] Daily reference leaks (fec33725f319): sum=7 Message-ID: results for fec33725f319 on branch "default" -------------------------------------------- test_support leaked [1, 0, 0] references, sum=1 test_support leaked [1, 2, 1] memory blocks, sum=4 test_concurrent_futures leaked [-2, 3, 1] memory blocks, sum=2 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogq9h1jF', '-x'] From python-checkins at python.org Wed Feb 13 11:15:19 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:15:19 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzUzMDg6?= =?utf-8?q?_Raise_ValueError_when_marshalling_too_large_object_=28a_sequen?= =?utf-8?q?ce?= Message-ID: <3Z5c7C0T8KzRCW@mail.python.org> http://hg.python.org/cpython/rev/385d982ce641 changeset: 82189:385d982ce641 branch: 2.7 parent: 82185:a43f67e95ef0 user: Serhiy Storchaka date: Wed Feb 13 12:07:43 2013 +0200 summary: Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. 
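For context on the commit above: marshal's string and container headers store their length in a signed 32-bit little-endian field written by `w_long()`, so a size of 2**31 or more simply cannot be encoded, and the new `W_SIZE` macro raises `ValueError` up front instead of emitting a truncated, illegal header. A small sketch that inspects the serialized header of a bytes object (the byte-level layout here is inferred from the diff, not a documented stable format):

```python
import marshal
import struct

blob = marshal.dumps(b'spam')

# Layout sketch (inferred from the diff, not a documented stable format):
# one type byte -- 's' for a bytes object, possibly OR'ed with the 0x80
# reference flag -- then a *signed* 32-bit little-endian length, then
# the raw payload.
type_code = blob[0] & 0x7F
size = struct.unpack('<i', blob[1:5])[0]

assert type_code == ord('s')
assert size == 4
assert blob[-4:] == b'spam'

# A signed 32-bit field tops out at 2**31 - 1; a sequence with
# size >= 2**31 is unrepresentable, which is why W_SIZE raises
# ValueError rather than silently wrapping the size.
assert struct.pack('<i', 2**31 - 1) == b'\xff\xff\xff\x7f'
```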
files: Lib/test/test_marshal.py | 51 ++++++++++++- Lib/test/test_support.py | 13 +- Misc/NEWS | 3 + Python/marshal.c | 112 +++++++++++++------------- 4 files changed, 116 insertions(+), 63 deletions(-) diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py --- a/Lib/test/test_marshal.py +++ b/Lib/test/test_marshal.py @@ -269,6 +269,53 @@ invalid_string = 'l\x02\x00\x00\x00\x00\x00\x00\x00' self.assertRaises(ValueError, marshal.loads, invalid_string) +LARGE_SIZE = 2**31 +character_size = 4 if sys.maxunicode > 0xFFFF else 2 +pointer_size = 8 if sys.maxsize > 0xFFFFFFFF else 4 + + at unittest.skipIf(LARGE_SIZE > sys.maxsize, "test cannot run on 32-bit systems") +class LargeValuesTestCase(unittest.TestCase): + def check_unmarshallable(self, data): + f = open(test_support.TESTFN, 'wb') + self.addCleanup(test_support.unlink, test_support.TESTFN) + with f: + self.assertRaises(ValueError, marshal.dump, data, f) + + @test_support.precisionbigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytes(self, size): + self.check_unmarshallable(b'x' * size) + + @test_support.precisionbigmemtest(size=LARGE_SIZE, + memuse=character_size, dry_run=False) + def test_str(self, size): + self.check_unmarshallable('x' * size) + + @test_support.precisionbigmemtest(size=LARGE_SIZE, + memuse=pointer_size, dry_run=False) + def test_tuple(self, size): + self.check_unmarshallable((None,) * size) + + @test_support.precisionbigmemtest(size=LARGE_SIZE, + memuse=pointer_size, dry_run=False) + def test_list(self, size): + self.check_unmarshallable([None] * size) + + @test_support.precisionbigmemtest(size=LARGE_SIZE, + memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_set(self, size): + self.check_unmarshallable(set(range(size))) + + @test_support.precisionbigmemtest(size=LARGE_SIZE, + memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_frozenset(self, size): + self.check_unmarshallable(frozenset(range(size))) + + 
@test_support.precisionbigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytearray(self, size): + self.check_unmarshallable(bytearray(size)) + def test_main(): test_support.run_unittest(IntTestCase, @@ -277,7 +324,9 @@ CodeTestCase, ContainerTestCase, ExceptionTestCase, - BugsTestCase) + BugsTestCase, + LargeValuesTestCase, + ) if __name__ == "__main__": test_main() diff --git a/Lib/test/test_support.py b/Lib/test/test_support.py --- a/Lib/test/test_support.py +++ b/Lib/test/test_support.py @@ -1062,7 +1062,7 @@ return wrapper return decorator -def precisionbigmemtest(size, memuse, overhead=5*_1M): +def precisionbigmemtest(size, memuse, overhead=5*_1M, dry_run=True): def decorator(f): def wrapper(self): if not real_max_memuse: @@ -1070,11 +1070,12 @@ else: maxsize = size - if real_max_memuse and real_max_memuse < maxsize * memuse: - if verbose: - sys.stderr.write("Skipping %s because of memory " - "constraint\n" % (f.__name__,)) - return + if ((real_max_memuse or not dry_run) + and real_max_memuse < maxsize * memuse): + if verbose: + sys.stderr.write("Skipping %s because of memory " + "constraint\n" % (f.__name__,)) + return return f(self, maxsize) wrapper.size = size diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -9,6 +9,9 @@ Core and Builtins ----------------- +- Issue #5308: Raise ValueError when marshalling too large object (a sequence + with size >= 2**31), instead of producing illegal marshal data. + - Issue #17043: The unicode-internal decoder no longer read past the end of input buffer. 
diff --git a/Python/marshal.c b/Python/marshal.c --- a/Python/marshal.c +++ b/Python/marshal.c @@ -88,7 +88,7 @@ } static void -w_string(char *s, int n, WFILE *p) +w_string(char *s, Py_ssize_t n, WFILE *p) { if (p->fp != NULL) { fwrite(s, 1, n, p->fp); @@ -126,6 +126,21 @@ } #endif +#define SIZE32_MAX 0x7FFFFFFF + +#if SIZEOF_SIZE_T > 4 +# define W_SIZE(n, p) do { \ + if ((n) > SIZE32_MAX) { \ + (p)->depth--; \ + (p)->error = WFERR_UNMARSHALLABLE; \ + return; \ + } \ + w_long((long)(n), p); \ + } while(0) +#else +# define W_SIZE w_long +#endif + /* We assume that Python longs are stored internally in base some power of 2**15; for the sake of portability we'll always read and write them in base exactly 2**15. */ @@ -159,6 +174,11 @@ d >>= PyLong_MARSHAL_SHIFT; l++; } while (d != 0); + if (l > SIZE32_MAX) { + p->depth--; + p->error = WFERR_UNMARSHALLABLE; + return; + } w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p); for (i=0; i < n-1; i++) { @@ -244,7 +264,7 @@ n = strlen(buf); w_byte(TYPE_FLOAT, p); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } @@ -277,7 +297,7 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); buf = PyOS_double_to_string(PyComplex_ImagAsDouble(v), 'g', 17, 0, NULL); @@ -287,7 +307,7 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } @@ -319,14 +339,8 @@ w_byte(TYPE_STRING, p); } n = PyString_GET_SIZE(v); - if (n > INT_MAX) { - /* huge strings are not supported */ - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(PyString_AS_STRING(v), (int)n, p); + W_SIZE(n, p); + w_string(PyString_AS_STRING(v), n, p); } #ifdef Py_USING_UNICODE else if (PyUnicode_CheckExact(v)) { @@ -339,20 +353,15 @@ } w_byte(TYPE_UNICODE, p); n = PyString_GET_SIZE(utf8); - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); 
- w_string(PyString_AS_STRING(utf8), (int)n, p); + W_SIZE(n, p); + w_string(PyString_AS_STRING(utf8), n, p); Py_DECREF(utf8); } #endif else if (PyTuple_CheckExact(v)) { w_byte(TYPE_TUPLE, p); n = PyTuple_Size(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyTuple_GET_ITEM(v, i), p); } @@ -360,7 +369,7 @@ else if (PyList_CheckExact(v)) { w_byte(TYPE_LIST, p); n = PyList_GET_SIZE(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyList_GET_ITEM(v, i), p); } @@ -390,7 +399,7 @@ p->error = WFERR_UNMARSHALLABLE; return; } - w_long((long)n, p); + W_SIZE(n, p); it = PyObject_GetIter(v); if (it == NULL) { p->depth--; @@ -432,13 +441,8 @@ PyBufferProcs *pb = v->ob_type->tp_as_buffer; w_byte(TYPE_STRING, p); n = (*pb->bf_getreadbuffer)(v, 0, (void **)&s); - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(s, (int)n, p); + W_SIZE(n, p); + w_string(s, n, p); } else { w_byte(TYPE_UNKNOWN, p); @@ -480,14 +484,14 @@ #define r_byte(p) ((p)->fp ? getc((p)->fp) : rs_byte(p)) -static int -r_string(char *s, int n, RFILE *p) +static Py_ssize_t +r_string(char *s, Py_ssize_t n, RFILE *p) { if (p->fp != NULL) /* The result fits into int because it must be <=n. 
*/ - return (int)fread(s, 1, n, p->fp); + return fread(s, 1, n, p->fp); if (p->end - p->ptr < n) - n = (int)(p->end - p->ptr); + n = p->end - p->ptr; memcpy(s, p->ptr, n); p->ptr += n; return n; @@ -563,14 +567,14 @@ r_PyLong(RFILE *p) { PyLongObject *ob; - int size, i, j, md, shorts_in_top_digit; - long n; + long n, size, i; + int j, md, shorts_in_top_digit; digit d; n = r_long(p); if (n == 0) return (PyObject *)_PyLong_New(0); - if (n < -INT_MAX || n > INT_MAX) { + if (n < -SIZE32_MAX || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (long size out of range)"); return NULL; @@ -691,7 +695,7 @@ char buf[256]; double dx; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); retval = NULL; @@ -732,7 +736,7 @@ char buf[256]; Py_complex c; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); retval = NULL; @@ -745,7 +749,7 @@ break; } n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); retval = NULL; @@ -795,7 +799,7 @@ case TYPE_INTERNED: case TYPE_STRING: n = r_long(p); - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)"); retval = NULL; break; @@ -805,7 +809,7 @@ retval = NULL; break; } - if (r_string(PyString_AS_STRING(v), (int)n, p) != n) { + if (r_string(PyString_AS_STRING(v), n, p) != n) { Py_DECREF(v); PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -840,7 +844,7 @@ char *buffer; n = r_long(p); - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (unicode size out of range)"); retval = NULL; 
break; @@ -850,7 +854,7 @@ retval = PyErr_NoMemory(); break; } - if (r_string(buffer, (int)n, p) != n) { + if (r_string(buffer, n, p) != n) { PyMem_DEL(buffer); PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -866,12 +870,12 @@ case TYPE_TUPLE: n = r_long(p); - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (tuple size out of range)"); retval = NULL; break; } - v = PyTuple_New((int)n); + v = PyTuple_New(n); if (v == NULL) { retval = NULL; break; @@ -886,19 +890,19 @@ v = NULL; break; } - PyTuple_SET_ITEM(v, (int)i, v2); + PyTuple_SET_ITEM(v, i, v2); } retval = v; break; case TYPE_LIST: n = r_long(p); - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (list size out of range)"); retval = NULL; break; } - v = PyList_New((int)n); + v = PyList_New(n); if (v == NULL) { retval = NULL; break; @@ -913,7 +917,7 @@ v = NULL; break; } - PyList_SET_ITEM(v, (int)i, v2); + PyList_SET_ITEM(v, i, v2); } retval = v; break; @@ -945,7 +949,7 @@ case TYPE_SET: case TYPE_FROZENSET: n = r_long(p); - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (set size out of range)"); retval = NULL; break; @@ -1134,12 +1138,8 @@ if (filesize > 0 && filesize <= REASONABLE_FILE_LIMIT) { char* pBuf = (char *)PyMem_MALLOC(filesize); if (pBuf != NULL) { - PyObject* v; - size_t n; - /* filesize must fit into an int, because it - is smaller than REASONABLE_FILE_LIMIT */ - n = fread(pBuf, 1, (int)filesize, fp); - v = PyMarshal_ReadObjectFromString(pBuf, n); + size_t n = fread(pBuf, 1, (size_t)filesize, fp); + PyObject* v = PyMarshal_ReadObjectFromString(pBuf, n); PyMem_FREE(pBuf); return v; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:15:20 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:15:20 +0100 
(CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzUzMDg6?= =?utf-8?q?_Raise_ValueError_when_marshalling_too_large_object_=28a_sequen?= =?utf-8?q?ce?= Message-ID: <3Z5c7D4ZHdzRGJ@mail.python.org> http://hg.python.org/cpython/rev/e0464fa28c85 changeset: 82190:e0464fa28c85 branch: 3.2 parent: 82186:4622206db91b user: Serhiy Storchaka date: Wed Feb 13 12:08:15 2013 +0200 summary: Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. files: Lib/test/test_marshal.py | 61 ++++++++++++- Misc/NEWS | 3 + Python/marshal.c | 118 +++++++++++++------------- 3 files changed, 116 insertions(+), 66 deletions(-) diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py --- a/Lib/test/test_marshal.py +++ b/Lib/test/test_marshal.py @@ -262,16 +262,63 @@ unicode_string = 'T' self.assertRaises(TypeError, marshal.loads, unicode_string) +LARGE_SIZE = 2**31 +character_size = 4 if sys.maxunicode > 0xFFFF else 2 +pointer_size = 8 if sys.maxsize > 0xFFFFFFFF else 4 + +class NullWriter: + def write(self, s): + pass + + at unittest.skipIf(LARGE_SIZE > sys.maxsize, "test cannot run on 32-bit systems") +class LargeValuesTestCase(unittest.TestCase): + def check_unmarshallable(self, data): + self.assertRaises(ValueError, marshal.dump, data, NullWriter()) + + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytes(self, size): + self.check_unmarshallable(b'x' * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=character_size, dry_run=False) + def test_str(self, size): + self.check_unmarshallable('x' * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=pointer_size, dry_run=False) + def test_tuple(self, size): + self.check_unmarshallable((None,) * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=pointer_size, dry_run=False) + def test_list(self, size): + self.check_unmarshallable([None] * size) + + @support.bigmemtest(size=LARGE_SIZE, + 
memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_set(self, size): + self.check_unmarshallable(set(range(size))) + + @support.bigmemtest(size=LARGE_SIZE, + memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_frozenset(self, size): + self.check_unmarshallable(frozenset(range(size))) + + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytearray(self, size): + self.check_unmarshallable(bytearray(size)) + def test_main(): support.run_unittest(IntTestCase, - FloatTestCase, - StringTestCase, - CodeTestCase, - ContainerTestCase, - ExceptionTestCase, - BufferTestCase, - BugsTestCase) + FloatTestCase, + StringTestCase, + CodeTestCase, + ContainerTestCase, + ExceptionTestCase, + BufferTestCase, + BugsTestCase, + LargeValuesTestCase, + ) if __name__ == "__main__": test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #5308: Raise ValueError when marshalling too large object (a sequence + with size >= 2**31), instead of producing illegal marshal data. + - Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError and a full traceback including line number. diff --git a/Python/marshal.c b/Python/marshal.c --- a/Python/marshal.c +++ b/Python/marshal.c @@ -92,7 +92,7 @@ } static void -w_string(char *s, int n, WFILE *p) +w_string(char *s, Py_ssize_t n, WFILE *p) { if (p->fp != NULL) { fwrite(s, 1, n, p->fp); @@ -130,6 +130,21 @@ } #endif +#define SIZE32_MAX 0x7FFFFFFF + +#if SIZEOF_SIZE_T > 4 +# define W_SIZE(n, p) do { \ + if ((n) > SIZE32_MAX) { \ + (p)->depth--; \ + (p)->error = WFERR_UNMARSHALLABLE; \ + return; \ + } \ + w_long((long)(n), p); \ + } while(0) +#else +# define W_SIZE w_long +#endif + /* We assume that Python longs are stored internally in base some power of 2**15; for the sake of portability we'll always read and write them in base exactly 2**15. 
*/ @@ -163,6 +178,11 @@ d >>= PyLong_MARSHAL_SHIFT; l++; } while (d != 0); + if (l > SIZE32_MAX) { + p->depth--; + p->error = WFERR_UNMARSHALLABLE; + return; + } w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p); for (i=0; i < n-1; i++) { @@ -251,7 +271,7 @@ n = strlen(buf); w_byte(TYPE_FLOAT, p); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } @@ -283,7 +303,7 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); buf = PyOS_double_to_string(PyComplex_ImagAsDouble(v), 'g', 17, 0, NULL); @@ -293,21 +313,15 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } else if (PyBytes_CheckExact(v)) { w_byte(TYPE_STRING, p); n = PyBytes_GET_SIZE(v); - if (n > INT_MAX) { - /* huge strings are not supported */ - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(PyBytes_AS_STRING(v), (int)n, p); + W_SIZE(n, p); + w_string(PyBytes_AS_STRING(v), n, p); } else if (PyUnicode_CheckExact(v)) { PyObject *utf8; @@ -321,19 +335,14 @@ } w_byte(TYPE_UNICODE, p); n = PyBytes_GET_SIZE(utf8); - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(PyBytes_AS_STRING(utf8), (int)n, p); + W_SIZE(n, p); + w_string(PyBytes_AS_STRING(utf8), n, p); Py_DECREF(utf8); } else if (PyTuple_CheckExact(v)) { w_byte(TYPE_TUPLE, p); n = PyTuple_Size(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyTuple_GET_ITEM(v, i), p); } @@ -341,7 +350,7 @@ else if (PyList_CheckExact(v)) { w_byte(TYPE_LIST, p); n = PyList_GET_SIZE(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyList_GET_ITEM(v, i), p); } @@ -371,7 +380,7 @@ p->error = WFERR_UNMARSHALLABLE; return; } - w_long((long)n, p); + W_SIZE(n, p); it = PyObject_GetIter(v); if (it == NULL) { p->depth--; @@ -421,13 +430,8 @@ w_byte(TYPE_STRING, p); n = 
view.len; s = view.buf; - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(s, (int)n, p); + W_SIZE(n, p); + w_string(s, n, p); PyBuffer_Release(&view); } else { @@ -467,25 +471,25 @@ #define rs_byte(p) (((p)->ptr < (p)->end) ? (unsigned char)*(p)->ptr++ : EOF) -static int -r_string(char *s, int n, RFILE *p) +static Py_ssize_t +r_string(char *s, Py_ssize_t n, RFILE *p) { char *ptr; - int read, left; + Py_ssize_t read, left; if (!p->readable) { if (p->fp != NULL) /* The result fits into int because it must be <=n. */ - read = (int) fread(s, 1, n, p->fp); + read = fread(s, 1, n, p->fp); else { - left = (int)(p->end - p->ptr); + left = p->end - p->ptr; read = (left < n) ? left : n; memcpy(s, p->ptr, read); p->ptr += read; } } else { - PyObject *data = PyObject_CallMethod(p->readable, "read", "i", n); + PyObject *data = PyObject_CallMethod(p->readable, "read", "n", n); read = 0; if (data != NULL) { if (!PyBytes_Check(data)) { @@ -515,7 +519,7 @@ { int c = EOF; unsigned char ch; - int n; + Py_ssize_t n; if (!p->readable) c = p->fp ? 
getc(p->fp) : rs_byte(p); @@ -599,8 +603,8 @@ r_PyLong(RFILE *p) { PyLongObject *ob; - int size, i, j, md, shorts_in_top_digit; - long n; + long n, size, i; + int j, md, shorts_in_top_digit; digit d; n = r_long(p); @@ -608,7 +612,7 @@ return NULL; if (n == 0) return (PyObject *)_PyLong_New(0); - if (n < -INT_MAX || n > INT_MAX) { + if (n < -SIZE32_MAX || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (long size out of range)"); return NULL; @@ -739,7 +743,7 @@ double dx; retval = NULL; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -777,7 +781,7 @@ Py_complex c; retval = NULL; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -787,7 +791,7 @@ if (c.real == -1.0 && PyErr_Occurred()) break; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -836,7 +840,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)"); retval = NULL; break; @@ -846,7 +850,7 @@ retval = NULL; break; } - if (r_string(PyBytes_AS_STRING(v), (int)n, p) != n) { + if (r_string(PyBytes_AS_STRING(v), n, p) != n) { Py_DECREF(v); PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -865,7 +869,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (unicode size out of range)"); retval = NULL; break; @@ -875,7 +879,7 @@ retval = PyErr_NoMemory(); break; } - if (r_string(buffer, (int)n, p) != n) { + if (r_string(buffer, n, p) != n) { PyMem_DEL(buffer); 
PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -894,12 +898,12 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (tuple size out of range)"); retval = NULL; break; } - v = PyTuple_New((int)n); + v = PyTuple_New(n); if (v == NULL) { retval = NULL; break; @@ -914,7 +918,7 @@ v = NULL; break; } - PyTuple_SET_ITEM(v, (int)i, v2); + PyTuple_SET_ITEM(v, i, v2); } retval = v; break; @@ -925,12 +929,12 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (list size out of range)"); retval = NULL; break; } - v = PyList_New((int)n); + v = PyList_New(n); if (v == NULL) { retval = NULL; break; @@ -945,7 +949,7 @@ v = NULL; break; } - PyList_SET_ITEM(v, (int)i, v2); + PyList_SET_ITEM(v, i, v2); } retval = v; break; @@ -981,7 +985,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (set size out of range)"); retval = NULL; break; @@ -1177,12 +1181,8 @@ if (filesize > 0 && filesize <= REASONABLE_FILE_LIMIT) { char* pBuf = (char *)PyMem_MALLOC(filesize); if (pBuf != NULL) { - PyObject* v; - size_t n; - /* filesize must fit into an int, because it - is smaller than REASONABLE_FILE_LIMIT */ - n = fread(pBuf, 1, (int)filesize, fp); - v = PyMarshal_ReadObjectFromString(pBuf, n); + size_t n = fread(pBuf, 1, (size_t)filesize, fp); + PyObject* v = PyMarshal_ReadObjectFromString(pBuf, n); PyMem_FREE(pBuf); return v; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:15:22 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:15:22 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=235308=3A_Raise_ValueError_when_marshalling_too_large_o?= 
=?utf-8?q?bject_=28a_sequence?= Message-ID: <3Z5c7G1YrNzRJp@mail.python.org> http://hg.python.org/cpython/rev/b48e1cd2d3be changeset: 82191:b48e1cd2d3be branch: 3.3 parent: 82187:2fb03fe354e3 parent: 82190:e0464fa28c85 user: Serhiy Storchaka date: Wed Feb 13 12:11:03 2013 +0200 summary: Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. files: Lib/test/test_marshal.py | 61 ++++++++++++- Misc/NEWS | 3 + Python/marshal.c | 118 +++++++++++++------------- 3 files changed, 116 insertions(+), 66 deletions(-) diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py --- a/Lib/test/test_marshal.py +++ b/Lib/test/test_marshal.py @@ -279,16 +279,63 @@ unicode_string = 'T' self.assertRaises(TypeError, marshal.loads, unicode_string) +LARGE_SIZE = 2**31 +character_size = 4 if sys.maxunicode > 0xFFFF else 2 +pointer_size = 8 if sys.maxsize > 0xFFFFFFFF else 4 + +class NullWriter: + def write(self, s): + pass + + at unittest.skipIf(LARGE_SIZE > sys.maxsize, "test cannot run on 32-bit systems") +class LargeValuesTestCase(unittest.TestCase): + def check_unmarshallable(self, data): + self.assertRaises(ValueError, marshal.dump, data, NullWriter()) + + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytes(self, size): + self.check_unmarshallable(b'x' * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=character_size, dry_run=False) + def test_str(self, size): + self.check_unmarshallable('x' * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=pointer_size, dry_run=False) + def test_tuple(self, size): + self.check_unmarshallable((None,) * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=pointer_size, dry_run=False) + def test_list(self, size): + self.check_unmarshallable([None] * size) + + @support.bigmemtest(size=LARGE_SIZE, + memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_set(self, size): + 
self.check_unmarshallable(set(range(size))) + + @support.bigmemtest(size=LARGE_SIZE, + memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_frozenset(self, size): + self.check_unmarshallable(frozenset(range(size))) + + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytearray(self, size): + self.check_unmarshallable(bytearray(size)) + def test_main(): support.run_unittest(IntTestCase, - FloatTestCase, - StringTestCase, - CodeTestCase, - ContainerTestCase, - ExceptionTestCase, - BufferTestCase, - BugsTestCase) + FloatTestCase, + StringTestCase, + CodeTestCase, + ContainerTestCase, + ExceptionTestCase, + BufferTestCase, + BugsTestCase, + LargeValuesTestCase, + ) if __name__ == "__main__": test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #5308: Raise ValueError when marshalling too large object (a sequence + with size >= 2**31), instead of producing illegal marshal data. + - Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError and a full traceback including line number. diff --git a/Python/marshal.c b/Python/marshal.c --- a/Python/marshal.c +++ b/Python/marshal.c @@ -95,7 +95,7 @@ } static void -w_string(char *s, int n, WFILE *p) +w_string(char *s, Py_ssize_t n, WFILE *p) { if (p->fp != NULL) { fwrite(s, 1, n, p->fp); @@ -124,6 +124,21 @@ w_byte((char)((x>>24) & 0xff), p); } +#define SIZE32_MAX 0x7FFFFFFF + +#if SIZEOF_SIZE_T > 4 +# define W_SIZE(n, p) do { \ + if ((n) > SIZE32_MAX) { \ + (p)->depth--; \ + (p)->error = WFERR_UNMARSHALLABLE; \ + return; \ + } \ + w_long((long)(n), p); \ + } while(0) +#else +# define W_SIZE w_long +#endif + /* We assume that Python longs are stored internally in base some power of 2**15; for the sake of portability we'll always read and write them in base exactly 2**15. 
*/ @@ -157,6 +172,11 @@ d >>= PyLong_MARSHAL_SHIFT; l++; } while (d != 0); + if (l > SIZE32_MAX) { + p->depth--; + p->error = WFERR_UNMARSHALLABLE; + return; + } w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p); for (i=0; i < n-1; i++) { @@ -245,7 +265,7 @@ n = strlen(buf); w_byte(TYPE_FLOAT, p); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } @@ -277,7 +297,7 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); buf = PyOS_double_to_string(PyComplex_ImagAsDouble(v), 'g', 17, 0, NULL); @@ -287,21 +307,15 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } else if (PyBytes_CheckExact(v)) { w_byte(TYPE_STRING, p); n = PyBytes_GET_SIZE(v); - if (n > INT_MAX) { - /* huge strings are not supported */ - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(PyBytes_AS_STRING(v), (int)n, p); + W_SIZE(n, p); + w_string(PyBytes_AS_STRING(v), n, p); } else if (PyUnicode_CheckExact(v)) { PyObject *utf8; @@ -313,19 +327,14 @@ } w_byte(TYPE_UNICODE, p); n = PyBytes_GET_SIZE(utf8); - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(PyBytes_AS_STRING(utf8), (int)n, p); + W_SIZE(n, p); + w_string(PyBytes_AS_STRING(utf8), n, p); Py_DECREF(utf8); } else if (PyTuple_CheckExact(v)) { w_byte(TYPE_TUPLE, p); n = PyTuple_Size(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyTuple_GET_ITEM(v, i), p); } @@ -333,7 +342,7 @@ else if (PyList_CheckExact(v)) { w_byte(TYPE_LIST, p); n = PyList_GET_SIZE(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyList_GET_ITEM(v, i), p); } @@ -363,7 +372,7 @@ p->error = WFERR_UNMARSHALLABLE; return; } - w_long((long)n, p); + W_SIZE(n, p); it = PyObject_GetIter(v); if (it == NULL) { p->depth--; @@ -413,13 +422,8 @@ w_byte(TYPE_STRING, p); n = 
view.len; s = view.buf; - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(s, (int)n, p); + W_SIZE(n, p); + w_string(s, n, p); PyBuffer_Release(&view); } else { @@ -456,18 +460,18 @@ #define rs_byte(p) (((p)->ptr < (p)->end) ? (unsigned char)*(p)->ptr++ : EOF) -static int -r_string(char *s, int n, RFILE *p) +static Py_ssize_t +r_string(char *s, Py_ssize_t n, RFILE *p) { char *ptr; - int read, left; + Py_ssize_t read, left; if (!p->readable) { if (p->fp != NULL) /* The result fits into int because it must be <=n. */ - read = (int) fread(s, 1, n, p->fp); + read = fread(s, 1, n, p->fp); else { - left = (int)(p->end - p->ptr); + left = p->end - p->ptr; read = (left < n) ? left : n; memcpy(s, p->ptr, read); p->ptr += read; @@ -476,7 +480,7 @@ else { _Py_IDENTIFIER(read); - PyObject *data = _PyObject_CallMethodId(p->readable, &PyId_read, "i", n); + PyObject *data = _PyObject_CallMethodId(p->readable, &PyId_read, "n", n); read = 0; if (data != NULL) { if (!PyBytes_Check(data)) { @@ -506,7 +510,7 @@ { int c = EOF; unsigned char ch; - int n; + Py_ssize_t n; if (!p->readable) c = p->fp ? 
getc(p->fp) : rs_byte(p); @@ -590,8 +594,8 @@ r_PyLong(RFILE *p) { PyLongObject *ob; - int size, i, j, md, shorts_in_top_digit; - long n; + long n, size, i; + int j, md, shorts_in_top_digit; digit d; n = r_long(p); @@ -599,7 +603,7 @@ return NULL; if (n == 0) return (PyObject *)_PyLong_New(0); - if (n < -INT_MAX || n > INT_MAX) { + if (n < -SIZE32_MAX || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (long size out of range)"); return NULL; @@ -730,7 +734,7 @@ double dx; retval = NULL; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -768,7 +772,7 @@ Py_complex c; retval = NULL; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -778,7 +782,7 @@ if (c.real == -1.0 && PyErr_Occurred()) break; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -827,7 +831,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)"); retval = NULL; break; @@ -837,7 +841,7 @@ retval = NULL; break; } - if (r_string(PyBytes_AS_STRING(v), (int)n, p) != n) { + if (r_string(PyBytes_AS_STRING(v), n, p) != n) { Py_DECREF(v); PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -856,7 +860,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (unicode size out of range)"); retval = NULL; break; @@ -866,7 +870,7 @@ retval = PyErr_NoMemory(); break; } - if (r_string(buffer, (int)n, p) != n) { + if (r_string(buffer, n, p) != n) { PyMem_DEL(buffer); 
PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -885,12 +889,12 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (tuple size out of range)"); retval = NULL; break; } - v = PyTuple_New((int)n); + v = PyTuple_New(n); if (v == NULL) { retval = NULL; break; @@ -905,7 +909,7 @@ v = NULL; break; } - PyTuple_SET_ITEM(v, (int)i, v2); + PyTuple_SET_ITEM(v, i, v2); } retval = v; break; @@ -916,12 +920,12 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (list size out of range)"); retval = NULL; break; } - v = PyList_New((int)n); + v = PyList_New(n); if (v == NULL) { retval = NULL; break; @@ -936,7 +940,7 @@ v = NULL; break; } - PyList_SET_ITEM(v, (int)i, v2); + PyList_SET_ITEM(v, i, v2); } retval = v; break; @@ -972,7 +976,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (set size out of range)"); retval = NULL; break; @@ -1180,12 +1184,8 @@ if (filesize > 0 && filesize <= REASONABLE_FILE_LIMIT) { char* pBuf = (char *)PyMem_MALLOC(filesize); if (pBuf != NULL) { - PyObject* v; - size_t n; - /* filesize must fit into an int, because it - is smaller than REASONABLE_FILE_LIMIT */ - n = fread(pBuf, 1, (int)filesize, fp); - v = PyMarshal_ReadObjectFromString(pBuf, n); + size_t n = fread(pBuf, 1, (size_t)filesize, fp); + PyObject* v = PyMarshal_ReadObjectFromString(pBuf, n); PyMem_FREE(pBuf); return v; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:15:23 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:15:23 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=235308=3A_Raise_ValueError_when_marshalling_too_l?= 
=?utf-8?q?arge_object_=28a_sequence?= Message-ID: <3Z5c7H5svczRM6@mail.python.org> http://hg.python.org/cpython/rev/ea36478a36ee changeset: 82192:ea36478a36ee parent: 82188:fec33725f319 parent: 82191:b48e1cd2d3be user: Serhiy Storchaka date: Wed Feb 13 12:12:11 2013 +0200 summary: Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. files: Lib/test/test_marshal.py | 61 ++++++++++++- Misc/NEWS | 3 + Python/marshal.c | 118 +++++++++++++------------- 3 files changed, 116 insertions(+), 66 deletions(-) diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py --- a/Lib/test/test_marshal.py +++ b/Lib/test/test_marshal.py @@ -279,16 +279,63 @@ unicode_string = 'T' self.assertRaises(TypeError, marshal.loads, unicode_string) +LARGE_SIZE = 2**31 +character_size = 4 if sys.maxunicode > 0xFFFF else 2 +pointer_size = 8 if sys.maxsize > 0xFFFFFFFF else 4 + +class NullWriter: + def write(self, s): + pass + + at unittest.skipIf(LARGE_SIZE > sys.maxsize, "test cannot run on 32-bit systems") +class LargeValuesTestCase(unittest.TestCase): + def check_unmarshallable(self, data): + self.assertRaises(ValueError, marshal.dump, data, NullWriter()) + + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytes(self, size): + self.check_unmarshallable(b'x' * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=character_size, dry_run=False) + def test_str(self, size): + self.check_unmarshallable('x' * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=pointer_size, dry_run=False) + def test_tuple(self, size): + self.check_unmarshallable((None,) * size) + + @support.bigmemtest(size=LARGE_SIZE, memuse=pointer_size, dry_run=False) + def test_list(self, size): + self.check_unmarshallable([None] * size) + + @support.bigmemtest(size=LARGE_SIZE, + memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_set(self, size): + 
self.check_unmarshallable(set(range(size))) + + @support.bigmemtest(size=LARGE_SIZE, + memuse=pointer_size*12 + sys.getsizeof(LARGE_SIZE-1), + dry_run=False) + def test_frozenset(self, size): + self.check_unmarshallable(frozenset(range(size))) + + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) + def test_bytearray(self, size): + self.check_unmarshallable(bytearray(size)) + def test_main(): support.run_unittest(IntTestCase, - FloatTestCase, - StringTestCase, - CodeTestCase, - ContainerTestCase, - ExceptionTestCase, - BufferTestCase, - BugsTestCase) + FloatTestCase, + StringTestCase, + CodeTestCase, + ContainerTestCase, + ExceptionTestCase, + BufferTestCase, + BugsTestCase, + LargeValuesTestCase, + ) if __name__ == "__main__": test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #5308: Raise ValueError when marshalling too large object (a sequence + with size >= 2**31), instead of producing illegal marshal data. + - Issue #12983: Bytes literals with invalid \x escape now raise a SyntaxError and a full traceback including line number. diff --git a/Python/marshal.c b/Python/marshal.c --- a/Python/marshal.c +++ b/Python/marshal.c @@ -95,7 +95,7 @@ } static void -w_string(char *s, int n, WFILE *p) +w_string(char *s, Py_ssize_t n, WFILE *p) { if (p->fp != NULL) { fwrite(s, 1, n, p->fp); @@ -124,6 +124,21 @@ w_byte((char)((x>>24) & 0xff), p); } +#define SIZE32_MAX 0x7FFFFFFF + +#if SIZEOF_SIZE_T > 4 +# define W_SIZE(n, p) do { \ + if ((n) > SIZE32_MAX) { \ + (p)->depth--; \ + (p)->error = WFERR_UNMARSHALLABLE; \ + return; \ + } \ + w_long((long)(n), p); \ + } while(0) +#else +# define W_SIZE w_long +#endif + /* We assume that Python longs are stored internally in base some power of 2**15; for the sake of portability we'll always read and write them in base exactly 2**15. 
*/ @@ -157,6 +172,11 @@ d >>= PyLong_MARSHAL_SHIFT; l++; } while (d != 0); + if (l > SIZE32_MAX) { + p->depth--; + p->error = WFERR_UNMARSHALLABLE; + return; + } w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p); for (i=0; i < n-1; i++) { @@ -245,7 +265,7 @@ n = strlen(buf); w_byte(TYPE_FLOAT, p); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } @@ -277,7 +297,7 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); buf = PyOS_double_to_string(PyComplex_ImagAsDouble(v), 'g', 17, 0, NULL); @@ -287,21 +307,15 @@ } n = strlen(buf); w_byte((int)n, p); - w_string(buf, (int)n, p); + w_string(buf, n, p); PyMem_Free(buf); } } else if (PyBytes_CheckExact(v)) { w_byte(TYPE_STRING, p); n = PyBytes_GET_SIZE(v); - if (n > INT_MAX) { - /* huge strings are not supported */ - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(PyBytes_AS_STRING(v), (int)n, p); + W_SIZE(n, p); + w_string(PyBytes_AS_STRING(v), n, p); } else if (PyUnicode_CheckExact(v)) { PyObject *utf8; @@ -313,19 +327,14 @@ } w_byte(TYPE_UNICODE, p); n = PyBytes_GET_SIZE(utf8); - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(PyBytes_AS_STRING(utf8), (int)n, p); + W_SIZE(n, p); + w_string(PyBytes_AS_STRING(utf8), n, p); Py_DECREF(utf8); } else if (PyTuple_CheckExact(v)) { w_byte(TYPE_TUPLE, p); n = PyTuple_Size(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyTuple_GET_ITEM(v, i), p); } @@ -333,7 +342,7 @@ else if (PyList_CheckExact(v)) { w_byte(TYPE_LIST, p); n = PyList_GET_SIZE(v); - w_long((long)n, p); + W_SIZE(n, p); for (i = 0; i < n; i++) { w_object(PyList_GET_ITEM(v, i), p); } @@ -363,7 +372,7 @@ p->error = WFERR_UNMARSHALLABLE; return; } - w_long((long)n, p); + W_SIZE(n, p); it = PyObject_GetIter(v); if (it == NULL) { p->depth--; @@ -413,13 +422,8 @@ w_byte(TYPE_STRING, p); n = 
view.len; s = view.buf; - if (n > INT_MAX) { - p->depth--; - p->error = WFERR_UNMARSHALLABLE; - return; - } - w_long((long)n, p); - w_string(s, (int)n, p); + W_SIZE(n, p); + w_string(s, n, p); PyBuffer_Release(&view); } else { @@ -456,18 +460,18 @@ #define rs_byte(p) (((p)->ptr < (p)->end) ? (unsigned char)*(p)->ptr++ : EOF) -static int -r_string(char *s, int n, RFILE *p) +static Py_ssize_t +r_string(char *s, Py_ssize_t n, RFILE *p) { char *ptr; - int read, left; + Py_ssize_t read, left; if (!p->readable) { if (p->fp != NULL) /* The result fits into int because it must be <=n. */ - read = (int) fread(s, 1, n, p->fp); + read = fread(s, 1, n, p->fp); else { - left = (int)(p->end - p->ptr); + left = p->end - p->ptr; read = (left < n) ? left : n; memcpy(s, p->ptr, read); p->ptr += read; @@ -476,7 +480,7 @@ else { _Py_IDENTIFIER(read); - PyObject *data = _PyObject_CallMethodId(p->readable, &PyId_read, "i", n); + PyObject *data = _PyObject_CallMethodId(p->readable, &PyId_read, "n", n); read = 0; if (data != NULL) { if (!PyBytes_Check(data)) { @@ -506,7 +510,7 @@ { int c = EOF; unsigned char ch; - int n; + Py_ssize_t n; if (!p->readable) c = p->fp ? 
getc(p->fp) : rs_byte(p); @@ -590,8 +594,8 @@ r_PyLong(RFILE *p) { PyLongObject *ob; - int size, i, j, md, shorts_in_top_digit; - long n; + long n, size, i; + int j, md, shorts_in_top_digit; digit d; n = r_long(p); @@ -599,7 +603,7 @@ return NULL; if (n == 0) return (PyObject *)_PyLong_New(0); - if (n < -INT_MAX || n > INT_MAX) { + if (n < -SIZE32_MAX || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (long size out of range)"); return NULL; @@ -730,7 +734,7 @@ double dx; retval = NULL; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -768,7 +772,7 @@ Py_complex c; retval = NULL; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -778,7 +782,7 @@ if (c.real == -1.0 && PyErr_Occurred()) break; n = r_byte(p); - if (n == EOF || r_string(buf, (int)n, p) != n) { + if (n == EOF || r_string(buf, n, p) != n) { PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); break; @@ -827,7 +831,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)"); retval = NULL; break; @@ -837,7 +841,7 @@ retval = NULL; break; } - if (r_string(PyBytes_AS_STRING(v), (int)n, p) != n) { + if (r_string(PyBytes_AS_STRING(v), n, p) != n) { Py_DECREF(v); PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -856,7 +860,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (unicode size out of range)"); retval = NULL; break; @@ -866,7 +870,7 @@ retval = PyErr_NoMemory(); break; } - if (r_string(buffer, (int)n, p) != n) { + if (r_string(buffer, n, p) != n) { PyMem_DEL(buffer); 
PyErr_SetString(PyExc_EOFError, "EOF read where object expected"); @@ -885,12 +889,12 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (tuple size out of range)"); retval = NULL; break; } - v = PyTuple_New((int)n); + v = PyTuple_New(n); if (v == NULL) { retval = NULL; break; @@ -905,7 +909,7 @@ v = NULL; break; } - PyTuple_SET_ITEM(v, (int)i, v2); + PyTuple_SET_ITEM(v, i, v2); } retval = v; break; @@ -916,12 +920,12 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (list size out of range)"); retval = NULL; break; } - v = PyList_New((int)n); + v = PyList_New(n); if (v == NULL) { retval = NULL; break; @@ -936,7 +940,7 @@ v = NULL; break; } - PyList_SET_ITEM(v, (int)i, v2); + PyList_SET_ITEM(v, i, v2); } retval = v; break; @@ -972,7 +976,7 @@ retval = NULL; break; } - if (n < 0 || n > INT_MAX) { + if (n < 0 || n > SIZE32_MAX) { PyErr_SetString(PyExc_ValueError, "bad marshal data (set size out of range)"); retval = NULL; break; @@ -1180,12 +1184,8 @@ if (filesize > 0 && filesize <= REASONABLE_FILE_LIMIT) { char* pBuf = (char *)PyMem_MALLOC(filesize); if (pBuf != NULL) { - PyObject* v; - size_t n; - /* filesize must fit into an int, because it - is smaller than REASONABLE_FILE_LIMIT */ - n = fread(pBuf, 1, (int)filesize, fp); - v = PyMarshal_ReadObjectFromString(pBuf, n); + size_t n = fread(pBuf, 1, (size_t)filesize, fp); + PyObject* v = PyMarshal_ReadObjectFromString(pBuf, n); PyMem_FREE(pBuf); return v; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:20:35 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:20:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2316996=3A_webbrows?= =?utf-8?q?er_module_now_uses_shutil=2Ewhich=28=29_to_find_a?= Message-ID: 
<3Z5cFH6PcRzRCW@mail.python.org> http://hg.python.org/cpython/rev/050c94f5f72c changeset: 82193:050c94f5f72c user: Serhiy Storchaka date: Wed Feb 13 12:19:40 2013 +0200 summary: Issue #16996: webbrowser module now uses shutil.which() to find a web-browser on the executable search path. files: Lib/webbrowser.py | 71 +++++++++------------------------- Misc/NEWS | 3 + 2 files changed, 23 insertions(+), 51 deletions(-) diff --git a/Lib/webbrowser.py b/Lib/webbrowser.py --- a/Lib/webbrowser.py +++ b/Lib/webbrowser.py @@ -5,6 +5,7 @@ import io import os import shlex +import shutil import sys import stat import subprocess @@ -83,7 +84,7 @@ """ cmd = browser.split()[0] - if not _iscommand(cmd): + if not shutil.which(cmd): return [None, None] name = os.path.basename(cmd) try: @@ -102,38 +103,6 @@ return [None, None] -if sys.platform[:3] == "win": - def _isexecutable(cmd): - cmd = cmd.lower() - if os.path.isfile(cmd) and cmd.endswith((".exe", ".bat")): - return True - for ext in ".exe", ".bat": - if os.path.isfile(cmd + ext): - return True - return False -else: - def _isexecutable(cmd): - if os.path.isfile(cmd): - mode = os.stat(cmd)[stat.ST_MODE] - if mode & stat.S_IXUSR or mode & stat.S_IXGRP or mode & stat.S_IXOTH: - return True - return False - -def _iscommand(cmd): - """Return True if cmd is executable or can be found on the executable - search path.""" - if _isexecutable(cmd): - return True - path = os.environ.get("PATH") - if not path: - return False - for d in path.split(os.pathsep): - exe = os.path.join(d, cmd) - if _isexecutable(exe): - return True - return False - - # General parent classes class BaseBrowser(object): @@ -453,58 +422,58 @@ def register_X_browsers(): # use xdg-open if around - if _iscommand("xdg-open"): + if shutil.which("xdg-open"): register("xdg-open", None, BackgroundBrowser("xdg-open")) # The default GNOME3 browser - if "GNOME_DESKTOP_SESSION_ID" in os.environ and _iscommand("gvfs-open"): + if "GNOME_DESKTOP_SESSION_ID" in os.environ and 
shutil.which("gvfs-open"): register("gvfs-open", None, BackgroundBrowser("gvfs-open")) # The default GNOME browser - if "GNOME_DESKTOP_SESSION_ID" in os.environ and _iscommand("gnome-open"): + if "GNOME_DESKTOP_SESSION_ID" in os.environ and shutil.which("gnome-open"): register("gnome-open", None, BackgroundBrowser("gnome-open")) # The default KDE browser - if "KDE_FULL_SESSION" in os.environ and _iscommand("kfmclient"): + if "KDE_FULL_SESSION" in os.environ and shutil.which("kfmclient"): register("kfmclient", Konqueror, Konqueror("kfmclient")) # The Mozilla/Netscape browsers for browser in ("mozilla-firefox", "firefox", "mozilla-firebird", "firebird", "seamonkey", "mozilla", "netscape"): - if _iscommand(browser): + if shutil.which(browser): register(browser, None, Mozilla(browser)) # Konqueror/kfm, the KDE browser. - if _iscommand("kfm"): + if shutil.which("kfm"): register("kfm", Konqueror, Konqueror("kfm")) - elif _iscommand("konqueror"): + elif shutil.which("konqueror"): register("konqueror", Konqueror, Konqueror("konqueror")) # Gnome's Galeon and Epiphany for browser in ("galeon", "epiphany"): - if _iscommand(browser): + if shutil.which(browser): register(browser, None, Galeon(browser)) # Skipstone, another Gtk/Mozilla based browser - if _iscommand("skipstone"): + if shutil.which("skipstone"): register("skipstone", None, BackgroundBrowser("skipstone")) # Google Chrome/Chromium browsers for browser in ("google-chrome", "chrome", "chromium", "chromium-browser"): - if _iscommand(browser): + if shutil.which(browser): register(browser, None, Chrome(browser)) # Opera, quite popular - if _iscommand("opera"): + if shutil.which("opera"): register("opera", None, Opera("opera")) # Next, Mosaic -- old but still in use. - if _iscommand("mosaic"): + if shutil.which("mosaic"): register("mosaic", None, BackgroundBrowser("mosaic")) # Grail, the Python browser. Does anybody still use it? 
- if _iscommand("grail"): + if shutil.which("grail"): register("grail", Grail, None) # Prefer X browsers if present @@ -514,15 +483,15 @@ # Also try console browsers if os.environ.get("TERM"): # The Links/elinks browsers - if _iscommand("links"): + if shutil.which("links"): register("links", None, GenericBrowser("links")) - if _iscommand("elinks"): + if shutil.which("elinks"): register("elinks", None, Elinks("elinks")) # The Lynx browser , - if _iscommand("lynx"): + if shutil.which("lynx"): register("lynx", None, GenericBrowser("lynx")) # The w3m browser - if _iscommand("w3m"): + if shutil.which("w3m"): register("w3m", None, GenericBrowser("w3m")) # @@ -552,7 +521,7 @@ "Internet Explorer\\IEXPLORE.EXE") for browser in ("firefox", "firebird", "seamonkey", "mozilla", "netscape", "opera", iexplore): - if _iscommand(browser): + if shutil.which(browser): register(browser, None, BackgroundBrowser(browser)) # diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -253,6 +253,9 @@ Library ------- +- Issue #16996: webbrowser module now uses shutil.which() to find a + web-browser on the executable search path. + - Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:27:57 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:27:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzExMzEx?= =?utf-8?q?=3A_StringIO=2Ereadline=280=29_now_returns_an_empty_string_as_a?= =?utf-8?q?ll_other?= Message-ID: <3Z5cPn4jvpzQGN@mail.python.org> http://hg.python.org/cpython/rev/7513bd184a01 changeset: 82194:7513bd184a01 branch: 2.7 parent: 82189:385d982ce641 user: Serhiy Storchaka date: Wed Feb 13 12:26:58 2013 +0200 summary: Issue #11311: StringIO.readline(0) now returns an empty string as all other file-like objects. 
files: Lib/StringIO.py | 2 +- Lib/test/test_StringIO.py | 2 ++ Misc/NEWS | 3 +++ 3 files changed, 6 insertions(+), 1 deletions(-) diff --git a/Lib/StringIO.py b/Lib/StringIO.py --- a/Lib/StringIO.py +++ b/Lib/StringIO.py @@ -158,7 +158,7 @@ newpos = self.len else: newpos = i+1 - if length is not None and length > 0: + if length is not None and length >= 0: if self.pos + length < newpos: newpos = self.pos + length r = self.buf[self.pos:newpos] diff --git a/Lib/test/test_StringIO.py b/Lib/test/test_StringIO.py --- a/Lib/test/test_StringIO.py +++ b/Lib/test/test_StringIO.py @@ -28,6 +28,8 @@ eq = self.assertEqual self.assertRaises(TypeError, self._fp.seek) eq(self._fp.read(10), self._line[:10]) + eq(self._fp.read(0), '') + eq(self._fp.readline(0), '') eq(self._fp.readline(), self._line[10:] + '\n') eq(len(self._fp.readlines(60)), 2) self._fp.seek(0) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -205,6 +205,9 @@ Library ------- +- Issue #11311: StringIO.readline(0) now returns an empty string as all other + file-like objects. + - Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:34:27 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:34:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_tests_for_?= =?utf-8?q?issue_=235308=2E?= Message-ID: <3Z5cYH1XhXzQGN@mail.python.org> http://hg.python.org/cpython/rev/72e75ea25d00 changeset: 82195:72e75ea25d00 branch: 2.7 user: Serhiy Storchaka date: Wed Feb 13 12:31:19 2013 +0200 summary: Fix tests for issue #5308. 
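The webbrowser change above replaces the hand-rolled `_isexecutable()`/`_iscommand()` PATH walk with the stdlib's `shutil.which()`. A minimal sketch of the probing style `register_X_browsers()` now uses — the candidate list here is illustrative, not the module's full list, and `first_available` is a hypothetical helper, not webbrowser API:

```python
import shutil

def first_available(candidates):
    """Return the first command from *candidates* found on the executable
    search path, or None.

    shutil.which() does the per-directory PATH scan (including the
    Windows .exe/.bat extension handling) that _iscommand() used to
    reimplement by hand.
    """
    for cmd in candidates:
        if shutil.which(cmd):
            return cmd
    return None
```

Unlike `_iscommand()`, `shutil.which()` also returns the resolved path, so callers that need the full executable location get it for free.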
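The 2.7 StringIO fix above changes the guard from `length > 0` to `length >= 0`, so an explicit `readline(0)` now returns an empty string instead of ignoring the limit. Python 3's `io.StringIO` already has these semantics; a quick sketch of the expected behavior:

```python
from io import StringIO

f = StringIO("first line\nsecond line\n")
assert f.readline(0) == ""        # an explicit limit of 0 reads nothing
assert f.readline(5) == "first"   # a positive limit truncates the line
assert f.readline() == " line\n"  # no limit reads through the newline
```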
files: Lib/test/test_marshal.py | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py --- a/Lib/test/test_marshal.py +++ b/Lib/test/test_marshal.py @@ -282,13 +282,13 @@ self.assertRaises(ValueError, marshal.dump, data, f) @test_support.precisionbigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) - def test_bytes(self, size): - self.check_unmarshallable(b'x' * size) + def test_string(self, size): + self.check_unmarshallable('x' * size) @test_support.precisionbigmemtest(size=LARGE_SIZE, memuse=character_size, dry_run=False) - def test_str(self, size): - self.check_unmarshallable('x' * size) + def test_unicode(self, size): + self.check_unmarshallable(u'x' * size) @test_support.precisionbigmemtest(size=LARGE_SIZE, memuse=pointer_size, dry_run=False) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:34:28 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:34:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Cleanup_a_test?= =?utf-8?q?_for_issue_=235308=2E?= Message-ID: <3Z5cYJ4CDSzQGN@mail.python.org> http://hg.python.org/cpython/rev/0407e5e5915e changeset: 82196:0407e5e5915e branch: 3.3 parent: 82191:b48e1cd2d3be user: Serhiy Storchaka date: Wed Feb 13 12:32:24 2013 +0200 summary: Cleanup a test for issue #5308. 
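The tests above feed oversized objects to `marshal.dump()` through a sink that discards all output and assert that `ValueError` is raised before any illegal data is produced. A small-scale sketch of the same pattern — `NullWriter` mirrors the test helper, `marshallable` is an illustrative wrapper (the sizes here are tiny, so dumping succeeds):

```python
import marshal

class NullWriter:
    """File-like sink that discards everything written to it."""
    def write(self, data):
        pass

def marshallable(obj):
    """True if marshal.dump() accepts obj; False if it raises ValueError,
    as it now does for containers with 2**31 or more items."""
    try:
        marshal.dump(obj, NullWriter())
    except ValueError:
        return False
    return True
```

Dumping to a null sink lets the big-memory tests exercise the size check without also allocating gigabytes of serialized output.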
files: Lib/test/test_marshal.py | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py --- a/Lib/test/test_marshal.py +++ b/Lib/test/test_marshal.py @@ -280,7 +280,6 @@ self.assertRaises(TypeError, marshal.loads, unicode_string) LARGE_SIZE = 2**31 -character_size = 4 if sys.maxunicode > 0xFFFF else 2 pointer_size = 8 if sys.maxsize > 0xFFFFFFFF else 4 class NullWriter: @@ -296,7 +295,7 @@ def test_bytes(self, size): self.check_unmarshallable(b'x' * size) - @support.bigmemtest(size=LARGE_SIZE, memuse=character_size, dry_run=False) + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) def test_str(self, size): self.check_unmarshallable('x' * size) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 11:34:29 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 13 Feb 2013 11:34:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Cleanup_a_test_for_issue_=235308=2E?= Message-ID: <3Z5cYK6tPvzRJp@mail.python.org> http://hg.python.org/cpython/rev/e45f2fcf202c changeset: 82197:e45f2fcf202c parent: 82193:050c94f5f72c parent: 82196:0407e5e5915e user: Serhiy Storchaka date: Wed Feb 13 12:32:47 2013 +0200 summary: Cleanup a test for issue #5308. 
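The cleanup above drops the `character_size` constant and uses `memuse=1` for `test_str`: with PEP 393 flexible string storage (Python 3.3+), an ASCII-only string costs one byte per character regardless of `sys.maxunicode`. `sys.getsizeof()` can confirm the per-character cost; `bytes_per_char` is an illustrative helper, not part of the test suite:

```python
import sys

def bytes_per_char(ch, n=1001):
    """Approximate per-character storage of an n-char string of ch by
    differencing object sizes (both strings share the same struct
    overhead, so the difference is n-1 times the per-char item size)."""
    return (sys.getsizeof(ch * n) - sys.getsizeof(ch)) // (n - 1)
```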
files: Lib/test/test_marshal.py | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py --- a/Lib/test/test_marshal.py +++ b/Lib/test/test_marshal.py @@ -280,7 +280,6 @@ self.assertRaises(TypeError, marshal.loads, unicode_string) LARGE_SIZE = 2**31 -character_size = 4 if sys.maxunicode > 0xFFFF else 2 pointer_size = 8 if sys.maxsize > 0xFFFFFFFF else 4 class NullWriter: @@ -296,7 +295,7 @@ def test_bytes(self, size): self.check_unmarshallable(b'x' * size) - @support.bigmemtest(size=LARGE_SIZE, memuse=character_size, dry_run=False) + @support.bigmemtest(size=LARGE_SIZE, memuse=1, dry_run=False) def test_str(self, size): self.check_unmarshallable('x' * size) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 13:55:17 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 13:55:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2NzQz?= =?utf-8?q?=3A_Fix_mmap_overflow_check_on_32_bit_Windows?= Message-ID: <3Z5ggn1KzgzMvM@mail.python.org> http://hg.python.org/cpython/rev/b1bbe519770b changeset: 82198:b1bbe519770b branch: 2.7 parent: 82195:72e75ea25d00 user: Richard Oudkerk date: Wed Feb 13 12:05:14 2013 +0000 summary: Issue #16743: Fix mmap overflow check on 32 bit Windows files: Lib/test/test_mmap.py | 7 +++++++ Modules/mmapmodule.c | 22 +++++++++++----------- 2 files changed, 18 insertions(+), 11 deletions(-) diff --git a/Lib/test/test_mmap.py b/Lib/test/test_mmap.py --- a/Lib/test/test_mmap.py +++ b/Lib/test/test_mmap.py @@ -682,6 +682,13 @@ def test_large_filesize(self): with self._make_test_file(0x17FFFFFFF, b" ") as f: + if sys.maxsize < 0x180000000: + # On 32 bit platforms the file is larger than sys.maxsize so + # mapping the whole file should fail -- Issue #16743 + with self.assertRaises(OverflowError): + mmap.mmap(f.fileno(), 0x180000000, access=mmap.ACCESS_READ) + with 
self.assertRaises(ValueError): + mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) m = mmap.mmap(f.fileno(), 0x10000, access=mmap.ACCESS_READ) try: self.assertEqual(m.size(), 0x180000000) diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c --- a/Modules/mmapmodule.c +++ b/Modules/mmapmodule.c @@ -1188,7 +1188,6 @@ # endif if (fd != -1 && fstat(fd, &st) == 0 && S_ISREG(st.st_mode)) { if (map_size == 0) { - off_t calc_size; if (st.st_size == 0) { PyErr_SetString(PyExc_ValueError, "cannot mmap an empty file"); @@ -1199,13 +1198,12 @@ "mmap offset is greater than file size"); return NULL; } - calc_size = st.st_size - offset; - map_size = calc_size; - if (map_size != calc_size) { + if (st.st_size - offset > PY_SSIZE_T_MAX) { PyErr_SetString(PyExc_ValueError, "mmap length is too large"); - return NULL; - } + return NULL; + } + map_size = (Py_ssize_t) (st.st_size - offset); } else if (offset + (size_t)map_size > st.st_size) { PyErr_SetString(PyExc_ValueError, "mmap length is greater than file size"); @@ -1400,11 +1398,13 @@ Py_DECREF(m_obj); return NULL; } - if (offset - size > PY_SSIZE_T_MAX) - /* Map area too large to fit in memory */ - m_obj->size = (Py_ssize_t) -1; - else - m_obj->size = (Py_ssize_t) (size - offset); + if (size - offset > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_ValueError, + "mmap length is too large"); + Py_DECREF(m_obj); + return NULL; + } + m_obj->size = (Py_ssize_t) (size - offset); } else { m_obj->size = map_size; size = offset + map_size; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 13:55:18 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 13:55:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2NzQz?= =?utf-8?q?=3A_Fix_mmap_overflow_check_on_32_bit_Windows?= Message-ID: <3Z5ggp4Hn0zQRF@mail.python.org> http://hg.python.org/cpython/rev/c2c84d3ab393 changeset: 82199:c2c84d3ab393 branch: 3.2 parent: 82190:e0464fa28c85 user: 
Richard Oudkerk date: Wed Feb 13 12:18:03 2013 +0000 summary: Issue #16743: Fix mmap overflow check on 32 bit Windows files: Lib/test/test_mmap.py | 7 +++++++ Modules/mmapmodule.c | 22 +++++++++++----------- 2 files changed, 18 insertions(+), 11 deletions(-) diff --git a/Lib/test/test_mmap.py b/Lib/test/test_mmap.py --- a/Lib/test/test_mmap.py +++ b/Lib/test/test_mmap.py @@ -693,6 +693,13 @@ def test_large_filesize(self): with self._make_test_file(0x17FFFFFFF, b" ") as f: + if sys.maxsize < 0x180000000: + # On 32 bit platforms the file is larger than sys.maxsize so + # mapping the whole file should fail -- Issue #16743 + with self.assertRaises(OverflowError): + mmap.mmap(f.fileno(), 0x180000000, access=mmap.ACCESS_READ) + with self.assertRaises(ValueError): + mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) with mmap.mmap(f.fileno(), 0x10000, access=mmap.ACCESS_READ) as m: self.assertEqual(m.size(), 0x180000000) diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c --- a/Modules/mmapmodule.c +++ b/Modules/mmapmodule.c @@ -1140,7 +1140,6 @@ # endif if (fd != -1 && fstat(fd, &st) == 0 && S_ISREG(st.st_mode)) { if (map_size == 0) { - off_t calc_size; if (st.st_size == 0) { PyErr_SetString(PyExc_ValueError, "cannot mmap an empty file"); @@ -1151,13 +1150,12 @@ "mmap offset is greater than file size"); return NULL; } - calc_size = st.st_size - offset; - map_size = calc_size; - if (map_size != calc_size) { + if (st.st_size - offset > PY_SSIZE_T_MAX) { PyErr_SetString(PyExc_ValueError, "mmap length is too large"); - return NULL; - } + return NULL; + } + map_size = (Py_ssize_t) (st.st_size - offset); } else if (offset + (size_t)map_size > st.st_size) { PyErr_SetString(PyExc_ValueError, "mmap length is greater than file size"); @@ -1354,11 +1352,13 @@ Py_DECREF(m_obj); return NULL; } - if (offset - size > PY_SSIZE_T_MAX) - /* Map area too large to fit in memory */ - m_obj->size = (Py_ssize_t) -1; - else - m_obj->size = (Py_ssize_t) (size - offset); + if (size - offset > 
PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_ValueError, + "mmap length is too large"); + Py_DECREF(m_obj); + return NULL; + } + m_obj->size = (Py_ssize_t) (size - offset); } else { m_obj->size = map_size; size = offset + map_size; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 13:55:20 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 13:55:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge?= Message-ID: <3Z5ggr1TDWzQGJ@mail.python.org> http://hg.python.org/cpython/rev/0748cc03b83e changeset: 82200:0748cc03b83e branch: 3.3 parent: 82196:0407e5e5915e parent: 82199:c2c84d3ab393 user: Richard Oudkerk date: Wed Feb 13 12:32:32 2013 +0000 summary: Merge files: Lib/test/test_mmap.py | 7 +++++++ Modules/mmapmodule.c | 22 +++++++++++----------- 2 files changed, 18 insertions(+), 11 deletions(-) diff --git a/Lib/test/test_mmap.py b/Lib/test/test_mmap.py --- a/Lib/test/test_mmap.py +++ b/Lib/test/test_mmap.py @@ -721,6 +721,13 @@ def test_large_filesize(self): with self._make_test_file(0x17FFFFFFF, b" ") as f: + if sys.maxsize < 0x180000000: + # On 32 bit platforms the file is larger than sys.maxsize so + # mapping the whole file should fail -- Issue #16743 + with self.assertRaises(OverflowError): + mmap.mmap(f.fileno(), 0x180000000, access=mmap.ACCESS_READ) + with self.assertRaises(ValueError): + mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) with mmap.mmap(f.fileno(), 0x10000, access=mmap.ACCESS_READ) as m: self.assertEqual(m.size(), 0x180000000) diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c --- a/Modules/mmapmodule.c +++ b/Modules/mmapmodule.c @@ -1162,7 +1162,6 @@ # endif if (fd != -1 && fstat(fd, &st) == 0 && S_ISREG(st.st_mode)) { if (map_size == 0) { - off_t calc_size; if (st.st_size == 0) { PyErr_SetString(PyExc_ValueError, "cannot mmap an empty file"); @@ -1173,13 +1172,12 @@ "mmap offset is greater than file size"); 
return NULL; } - calc_size = st.st_size - offset; - map_size = calc_size; - if (map_size != calc_size) { + if (st.st_size - offset > PY_SSIZE_T_MAX) { PyErr_SetString(PyExc_ValueError, "mmap length is too large"); - return NULL; - } + return NULL; + } + map_size = (Py_ssize_t) (st.st_size - offset); } else if (offset + (size_t)map_size > st.st_size) { PyErr_SetString(PyExc_ValueError, "mmap length is greater than file size"); @@ -1376,11 +1374,13 @@ Py_DECREF(m_obj); return NULL; } - if (offset - size > PY_SSIZE_T_MAX) - /* Map area too large to fit in memory */ - m_obj->size = (Py_ssize_t) -1; - else - m_obj->size = (Py_ssize_t) (size - offset); + if (size - offset > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_ValueError, + "mmap length is too large"); + Py_DECREF(m_obj); + return NULL; + } + m_obj->size = (Py_ssize_t) (size - offset); } else { m_obj->size = map_size; size = offset + map_size; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 13:55:21 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 13:55:21 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge?= Message-ID: <3Z5ggs45ZTzNMh@mail.python.org> http://hg.python.org/cpython/rev/c286c96ef42d changeset: 82201:c286c96ef42d parent: 82197:e45f2fcf202c parent: 82200:0748cc03b83e user: Richard Oudkerk date: Wed Feb 13 12:33:53 2013 +0000 summary: Merge files: Lib/test/test_mmap.py | 7 +++++++ Modules/mmapmodule.c | 22 +++++++++++----------- 2 files changed, 18 insertions(+), 11 deletions(-) diff --git a/Lib/test/test_mmap.py b/Lib/test/test_mmap.py --- a/Lib/test/test_mmap.py +++ b/Lib/test/test_mmap.py @@ -721,6 +721,13 @@ def test_large_filesize(self): with self._make_test_file(0x17FFFFFFF, b" ") as f: + if sys.maxsize < 0x180000000: + # On 32 bit platforms the file is larger than sys.maxsize so + # mapping the whole file should fail -- Issue #16743 + with 
self.assertRaises(OverflowError): + mmap.mmap(f.fileno(), 0x180000000, access=mmap.ACCESS_READ) + with self.assertRaises(ValueError): + mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) with mmap.mmap(f.fileno(), 0x10000, access=mmap.ACCESS_READ) as m: self.assertEqual(m.size(), 0x180000000) diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c --- a/Modules/mmapmodule.c +++ b/Modules/mmapmodule.c @@ -1162,7 +1162,6 @@ # endif if (fd != -1 && fstat(fd, &st) == 0 && S_ISREG(st.st_mode)) { if (map_size == 0) { - off_t calc_size; if (st.st_size == 0) { PyErr_SetString(PyExc_ValueError, "cannot mmap an empty file"); @@ -1173,13 +1172,12 @@ "mmap offset is greater than file size"); return NULL; } - calc_size = st.st_size - offset; - map_size = calc_size; - if (map_size != calc_size) { + if (st.st_size - offset > PY_SSIZE_T_MAX) { PyErr_SetString(PyExc_ValueError, "mmap length is too large"); - return NULL; - } + return NULL; + } + map_size = (Py_ssize_t) (st.st_size - offset); } else if (offset + (size_t)map_size > st.st_size) { PyErr_SetString(PyExc_ValueError, "mmap length is greater than file size"); @@ -1376,11 +1374,13 @@ Py_DECREF(m_obj); return NULL; } - if (offset - size > PY_SSIZE_T_MAX) - /* Map area too large to fit in memory */ - m_obj->size = (Py_ssize_t) -1; - else - m_obj->size = (Py_ssize_t) (size - offset); + if (size - offset > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_ValueError, + "mmap length is too large"); + Py_DECREF(m_obj); + return NULL; + } + m_obj->size = (Py_ssize_t) (size - offset); } else { m_obj->size = map_size; size = offset + map_size; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 16:27:14 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 16:27:14 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Add_Misc/NEWS_?= =?utf-8?q?entry_for_Issue_=2316743?= Message-ID: <3Z5l362016zQLb@mail.python.org> 
http://hg.python.org/cpython/rev/82db097cd2e0 changeset: 82202:82db097cd2e0 branch: 2.7 parent: 82198:b1bbe519770b user: Richard Oudkerk date: Wed Feb 13 15:17:47 2013 +0000 summary: Add Misc/NEWS entry for Issue #16743 files: Misc/NEWS | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -205,6 +205,8 @@ Library ------- +- Issue #16743: Fix mmap overflow check on 32 bit Windows. + - Issue #11311: StringIO.readline(0) now returns an empty string as all other file-like objects. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 16:27:15 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 16:27:15 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Add_Misc/NEWS_?= =?utf-8?q?entry_for_Issue_=2316743?= Message-ID: <3Z5l374nJMzQ16@mail.python.org> http://hg.python.org/cpython/rev/efe489f87881 changeset: 82203:efe489f87881 branch: 3.2 parent: 82199:c2c84d3ab393 user: Richard Oudkerk date: Wed Feb 13 15:19:36 2013 +0000 summary: Add Misc/NEWS entry for Issue #16743 files: Misc/NEWS | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -224,6 +224,8 @@ Library ------- +- Issue #16743: Fix mmap overflow check on 32 bit Windows. + - Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 16:27:17 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 16:27:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge?= Message-ID: <3Z5l390H73zQM5@mail.python.org> http://hg.python.org/cpython/rev/85e646f7fce3 changeset: 82204:85e646f7fce3 branch: 3.3 parent: 82200:0748cc03b83e parent: 82203:efe489f87881 user: Richard Oudkerk date: Wed Feb 13 15:21:23 2013 +0000 summary: Merge files: Misc/NEWS | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -178,6 +178,8 @@ Library ------- +- Issue #16743: Fix mmap overflow check on 32 bit Windows. + - Issue #16800: tempfile.gettempdir() no longer left temporary files when the disk is full. Original patch by Amir Szekely. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 13 16:27:18 2013 From: python-checkins at python.org (richard.oudkerk) Date: Wed, 13 Feb 2013 16:27:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge?= Message-ID: <3Z5l3B380MzPZG@mail.python.org> http://hg.python.org/cpython/rev/659ef9d360ae changeset: 82205:659ef9d360ae parent: 82201:c286c96ef42d parent: 82204:85e646f7fce3 user: Richard Oudkerk date: Wed Feb 13 15:25:21 2013 +0000 summary: Merge files: Misc/NEWS | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -253,6 +253,8 @@ Library ------- +- Issue #16743: Fix mmap overflow check on 32 bit Windows. + - Issue #16996: webbrowser module now uses shutil.which() to find a web-browser on the executable search path. 
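[Editorial note: the #16743 fix above replaces an overflow-prone size computation with an explicit bound check against PY_SSIZE_T_MAX. A rough Python rendering of the corrected checks, using a hypothetical helper (not a real mmap API) with sys.maxsize standing in for PY_SSIZE_T_MAX:]

```python
import sys

def checked_map_length(file_size, offset, map_size=0):
    """Sketch of the corrected mmapmodule.c logic (illustrative only).

    map_size == 0 means "map to the end of the file"; the resulting
    length must fit in a Py_ssize_t, so a 32-bit build rejects files
    larger than sys.maxsize with ValueError rather than wrapping around.
    """
    if map_size == 0:
        if offset > file_size:
            raise ValueError("mmap offset is greater than file size")
        if file_size - offset > sys.maxsize:
            raise ValueError("mmap length is too large")
        return file_size - offset
    if offset + map_size > file_size:
        raise ValueError("mmap length is greater than file size")
    return map_size
```

[On a 32-bit build, checked_map_length(0x180000000, 0) raises the same "mmap length is too large" ValueError the new test_large_filesize case expects.]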
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 14 02:13:25 2013 From: python-checkins at python.org (victor.stinner) Date: Thu, 14 Feb 2013 02:13:25 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_433=3A_typo?= Message-ID: <3Z603T59JQzRWZ@mail.python.org> http://hg.python.org/peps/rev/162f244394d7 changeset: 4737:162f244394d7 user: Victor Stinner date: Thu Feb 14 02:12:36 2013 +0100 summary: PEP 433: typo files: pep-0433.txt | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/pep-0433.txt b/pep-0433.txt --- a/pep-0433.txt +++ b/pep-0433.txt @@ -38,7 +38,7 @@ close-on-exec flag is cleared and if ``CreateProcess()`` is called with the *bInheritHandles* parameter set to ``TRUE`` (when ``subprocess.Popen`` is created with ``close_fds=False`` for example). -Windows does now have "close-on-exec" flag but an inheritance flag which +Windows does not have "close-on-exec" flag but an inheritance flag which is just the opposite value. For example, setting close-on-exec flag means clearing the ``HANDLE_FLAG_INHERIT`` flag of an handle. -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Thu Feb 14 02:23:16 2013 From: python-checkins at python.org (victor.stinner) Date: Thu, 14 Feb 2013 02:23:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_433=3A_more_typos?= Message-ID: <3Z60Gr1pKSzRd2@mail.python.org> http://hg.python.org/peps/rev/7c1d23e2ad11 changeset: 4738:7c1d23e2ad11 user: Victor Stinner date: Thu Feb 14 02:22:29 2013 +0100 summary: PEP 433: more typos files: pep-0433.txt | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/pep-0433.txt b/pep-0433.txt --- a/pep-0433.txt +++ b/pep-0433.txt @@ -183,7 +183,7 @@ ======== Add a new optional *cloexec* parameter on functions creating file -descriptors and different ways to change default values of this +descriptors and different ways to change default value of this parameter. 
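[Editorial note: the os.set_cloexec() function that PEP 433 proposes above can be approximated today on POSIX with fcntl. This sketch is best-effort, matching the PEP's [best-effort] tag: it is not atomic, so a concurrent fork()+exec() in another thread can still inherit the descriptor between the two fcntl calls.]

```python
import fcntl
import os

def set_cloexec(fd, cloexec=True):
    # Best-effort emulation of PEP 433's proposed os.set_cloexec():
    # read the descriptor flags, then set or clear FD_CLOEXEC.
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    if cloexec:
        flags |= fcntl.FD_CLOEXEC
    else:
        flags &= ~fcntl.FD_CLOEXEC
    fcntl.fcntl(fd, fcntl.F_SETFD, flags)

r, w = os.pipe()          # pipe FDs are inheritable by default here
set_cloexec(r)
assert fcntl.fcntl(r, fcntl.F_GETFD) & fcntl.FD_CLOEXEC
```

[The atomic variants the PEP lists, such as fcntl(fd, F_DUPFD_CLOEXEC) or dup3() with O_CLOEXEC, avoid that race where the platform supports them.]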
Add new functions: @@ -257,11 +257,11 @@ Add a new optional parameter *cloexec* on functions creating file descriptors. The default value of the *cloexec* parameter is ``False``, -and this default cannot be changed. No file descriptor inheritance by +and this default cannot be changed. File descriptor inheritance enabled by default is also the default on POSIX and on Windows. This alternative is the most convervative option. -This option does solve issues listed in the `Rationale`_ +This option does not solve issues listed in the `Rationale`_ section, it only provides an helper to fix them. All functions creating file descriptors have to be modified to set *cloexec=True* in each module used by an application to fix all these issues. -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Thu Feb 14 04:21:04 2013 From: python-checkins at python.org (r.david.murray) Date: Thu, 14 Feb 2013 04:21:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_=2315220=3A_simplify_and_s?= =?utf-8?q?peed_up_feedparser=27s_line_splitting=2E?= Message-ID: <3Z62tm5JBGzRn5@mail.python.org> http://hg.python.org/cpython/rev/0f827775f7b7 changeset: 82206:0f827775f7b7 user: R David Murray date: Wed Feb 13 21:17:13 2013 -0500 summary: #15220: simplify and speed up feedparser's line splitting. Original patch submitted by QNX, modified for clarity by me (mostly comments). QNX reports a 30% speed up in average email parsing time. 
files: Lib/email/feedparser.py | 27 +++++++++------------------ Misc/NEWS | 3 +++ 2 files changed, 12 insertions(+), 18 deletions(-) diff --git a/Lib/email/feedparser.py b/Lib/email/feedparser.py --- a/Lib/email/feedparser.py +++ b/Lib/email/feedparser.py @@ -98,24 +98,15 @@ """Push some new data into this object.""" # Handle any previous leftovers data, self._partial = self._partial + data, '' - # Crack into lines, but preserve the newlines on the end of each - parts = NLCRE_crack.split(data) - # The *ahem* interesting behaviour of re.split when supplied grouping - # parentheses is that the last element of the resulting list is the - # data after the final RE. In the case of a NL/CR terminated string, - # this is the empty string. - self._partial = parts.pop() - #GAN 29Mar09 bugs 1555570, 1721862 Confusion at 8K boundary ending with \r: - # is there a \n to follow later? - if not self._partial and parts and parts[-1].endswith('\r'): - self._partial = parts.pop(-2)+parts.pop() - # parts is a list of strings, alternating between the line contents - # and the eol character(s). Gather up a list of lines after - # re-attaching the newlines. - lines = [] - for i in range(len(parts) // 2): - lines.append(parts[i*2] + parts[i*2+1]) - self.pushlines(lines) + # Crack into lines, but preserve the linesep characters on the end of each + parts = data.splitlines(True) + # If the last element of the list does not end in a newline, then treat + # it as a partial line. We only check for '\n' here because a line + # ending with '\r' might be a line that was split in the middle of a + # '\r\n' sequence (see bugs 1555570 and 1721862). + if parts and not parts[-1].endswith('\n'): + self._partial = parts.pop() + self.pushlines(parts) def pushlines(self, lines): # Reverse and insert at the front of the lines. 
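[Editorial note: the new splitting logic above reduces to str.splitlines(True) plus a check for a trailing partial line. A standalone sketch of that core, as a free function rather than the actual FeedParser method:]

```python
def split_push(partial, data):
    # Crack partial + data into complete lines, keeping the line endings
    # (str.splitlines(True) preserves them).  A trailing piece that does
    # not end in "\n" may be an incomplete line -- e.g. the "\r" half of
    # a "\r\n" pair split across two reads (bugs 1555570 and 1721862) --
    # so it is carried over to the next call instead of being emitted.
    parts = (partial + data).splitlines(True)
    leftover = ''
    if parts and not parts[-1].endswith('\n'):
        leftover = parts.pop()
    return parts, leftover
```

[Feeding 'a\r\nb\r' yields ['a\r\n'] with 'b\r' held back; a subsequent '\n' completes that line, which is exactly the 8K-boundary case the old code special-cased.]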
diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -253,6 +253,9 @@ Library ------- +- Issue #15220: email.feedparser's line splitting algorithm is now simpler and + faster. + - Issue #16743: Fix mmap overflow check on 32 bit Windows. - Issue #16996: webbrowser module now uses shutil.which() to find a -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Thu Feb 14 06:01:19 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Thu, 14 Feb 2013 06:01:19 +0100 Subject: [Python-checkins] Daily reference leaks (659ef9d360ae): sum=14 Message-ID: results for 659ef9d360ae on branch "default" -------------------------------------------- test_support leaked [1, 0, 0] references, sum=1 test_support leaked [1, 2, 1] memory blocks, sum=4 test_dbm leaked [0, 0, 2] references, sum=2 test_dbm leaked [0, 0, 2] memory blocks, sum=2 test_httplib leaked [1, 0, 0] references, sum=1 test_httplib leaked [1, 2, 1] memory blocks, sum=4 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogla05DV', '-x'] From solipsis at pitrou.net Fri Feb 15 06:03:59 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Fri, 15 Feb 2013 06:03:59 +0100 Subject: [Python-checkins] Daily reference leaks (0f827775f7b7): sum=0 Message-ID: results for 0f827775f7b7 on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogN4tOlr', '-x'] From python-checkins at python.org Fri Feb 15 13:42:03 2013 From: python-checkins at python.org (daniel.holth) Date: Fri, 15 Feb 2013 13:42:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_minor_wheel_edits?= Message-ID: <3Z6vHb5qB7zSgQ@mail.python.org> http://hg.python.org/peps/rev/4105e7bdb917 changeset: 4739:4105e7bdb917 user: Daniel Holth date: Fri Feb 15 07:41:57 2013 -0500 summary: minor wheel edits files: pep-0427.txt | 15 
+++++++-------- 1 files changed, 7 insertions(+), 8 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -120,10 +120,10 @@ PEP-386 compliant version, e.g. 1.0. build tag - Optional build number. Must start with a digit. A tie breaker if - two wheels have the same version. Sort as None if unspecified, - else sort the initial digits as a number, and the remainder - lexicographically. + Optional build number. Must start with a digit. A tie breaker + if two wheels have the same version. Sort as the empty string + if unspecified, else sort the initial digits as a number, and the + remainder lexicographically. language implementation and version tag E.g. 'py27', 'py2', 'py3'. @@ -164,9 +164,9 @@ ``#!python`` rewriting at install time. They may have any or no extension. #. ``{distribution}-{version}.dist-info/METADATA`` is Metadata version 1.1 - (PEP 314, PEP 345, PEP 426) or greater format metadata. + or greater (PEP 314, PEP 345, PEP 426) format metadata. #. ``{distribution}-{version}.dist-info/WHEEL`` is metadata about the archive - itself:: + itself, in the same basic key: value format:: Wheel-Version: 0.1 Generator: bdist_wheel 0.7 @@ -329,8 +329,7 @@ existing public key infrastructure with wheel. Signed packages are only a basic building block in a secure package - update system and many kinds of attacks are possible even when - packages are signed. Wheel only provides the building block. + update system. Wheel only provides the building block. 
Appendix ======== -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 15 15:51:15 2013 From: python-checkins at python.org (daniel.holth) Date: Fri, 15 Feb 2013 15:51:15 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_wheel=3A_add_escaping?= Message-ID: <3Z6y8g1TQ0zQMt@mail.python.org> http://hg.python.org/peps/rev/a577c9e0fa5f changeset: 4740:a577c9e0fa5f user: Daniel Holth date: Fri Feb 15 09:50:50 2013 -0500 summary: wheel: add escaping files: pep-0427.txt | 17 ++++++++++++++--- 1 files changed, 14 insertions(+), 3 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -117,7 +117,7 @@ Distribution name, e.g. 'django', 'pyramid'. version - PEP-386 compliant version, e.g. 1.0. + Distribution version, e.g. 1.0. build tag Optional build number. Must start with a digit. A tie breaker @@ -143,6 +143,17 @@ called "compatibility tags." The compatibility tags express the package's basic interpreter requirements and are detailed in PEP 425. +Escaping and Unicode +'''''''''''''''''''' + +Each component of the filename is escaped by replacing runs of +non-alphanumeric characters with an underscore ``_``:: + + re.sub("[^\w\d.]+", "_", distribution, re.UNICODE) + +The filename is Unicode. It will be some time before the tools are +updated to support non-ASCII filenames, but they are supported in this +specification. File contents ''''''''''''' @@ -158,7 +169,7 @@ #. ``{distribution}-{version}.data/`` contains one subdirectory for each non-empty install scheme key not already covered, where the subdirectory name is an index into a dictionary of install paths - (e.g. ``data``, ``scripts``, ``include``, ``purelib`, ``platlib``). + (e.g. ``data``, ``scripts``, ``include``, ``purelib``, ``platlib``). #. Python scripts must appear in ``scripts`` and begin with exactly ``b'#!python'`` in order to enjoy script wrapper generation and ``#!python`` rewriting at install time. 
They may have any or no @@ -166,7 +177,7 @@ #. ``{distribution}-{version}.dist-info/METADATA`` is Metadata version 1.1 or greater (PEP 314, PEP 345, PEP 426) format metadata. #. ``{distribution}-{version}.dist-info/WHEEL`` is metadata about the archive - itself, in the same basic key: value format:: + itself in the same basic key: value format:: Wheel-Version: 0.1 Generator: bdist_wheel 0.7 -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 15 18:19:37 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 18:19:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzE3MTYzOiB0ZXN0?= =?utf-8?q?=5Ffile_now_works_with_unittest_test_discovery=2E__Patch_by_Zac?= =?utf-8?q?hary?= Message-ID: <3Z71Rs3DM1zRQ7@mail.python.org> http://hg.python.org/cpython/rev/9b3c5085b4a4 changeset: 82207:9b3c5085b4a4 branch: 3.3 parent: 82204:85e646f7fce3 user: Ezio Melotti date: Fri Feb 15 19:17:53 2013 +0200 summary: #17163: test_file now works with unittest test discovery. Patch by Zachary Ware. 
files: Lib/test/test_file.py | 24 ++++++++++-------------- Misc/NEWS | 3 +++ 2 files changed, 13 insertions(+), 14 deletions(-) diff --git a/Lib/test/test_file.py b/Lib/test/test_file.py --- a/Lib/test/test_file.py +++ b/Lib/test/test_file.py @@ -10,7 +10,7 @@ from test.support import TESTFN, run_unittest from collections import UserList -class AutoFileTests(unittest.TestCase): +class AutoFileTests: # file tests for which a test file is automatically set up def setUp(self): @@ -128,14 +128,14 @@ def testReadWhenWriting(self): self.assertRaises(IOError, self.f.read) -class CAutoFileTests(AutoFileTests): +class CAutoFileTests(AutoFileTests, unittest.TestCase): open = io.open -class PyAutoFileTests(AutoFileTests): +class PyAutoFileTests(AutoFileTests, unittest.TestCase): open = staticmethod(pyio.open) -class OtherFileTests(unittest.TestCase): +class OtherFileTests: def testModeStrings(self): # check invalid mode strings @@ -322,22 +322,18 @@ finally: os.unlink(TESTFN) -class COtherFileTests(OtherFileTests): +class COtherFileTests(OtherFileTests, unittest.TestCase): open = io.open -class PyOtherFileTests(OtherFileTests): +class PyOtherFileTests(OtherFileTests, unittest.TestCase): open = staticmethod(pyio.open) -def test_main(): +def tearDownModule(): # Historically, these tests have been sloppy about removing TESTFN. # So get rid of it no matter what. - try: - run_unittest(CAutoFileTests, PyAutoFileTests, - COtherFileTests, PyOtherFileTests) - finally: - if os.path.exists(TESTFN): - os.unlink(TESTFN) + if os.path.exists(TESTFN): + os.unlink(TESTFN) if __name__ == '__main__': - test_main() + unittest.main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -605,6 +605,9 @@ - Issue #15539: Added regression tests for Tools/scripts/pindent.py. +- Issue #17163: test_file now works with unittest test discovery. + Patch by Zachary Ware. + - Issue #16925: test_configparser now works with unittest test discovery. Patch by Zachary Ware. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 18:19:38 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 18:19:38 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MTYzOiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3Z71Rv15mNzT1m@mail.python.org> http://hg.python.org/cpython/rev/f289e40b3d70 changeset: 82208:f289e40b3d70 parent: 82206:0f827775f7b7 parent: 82207:9b3c5085b4a4 user: Ezio Melotti date: Fri Feb 15 19:19:18 2013 +0200 summary: #17163: merge with 3.3. files: Lib/test/test_file.py | 24 ++++++++++-------------- Misc/NEWS | 3 +++ 2 files changed, 13 insertions(+), 14 deletions(-) diff --git a/Lib/test/test_file.py b/Lib/test/test_file.py --- a/Lib/test/test_file.py +++ b/Lib/test/test_file.py @@ -10,7 +10,7 @@ from test.support import TESTFN, run_unittest from collections import UserList -class AutoFileTests(unittest.TestCase): +class AutoFileTests: # file tests for which a test file is automatically set up def setUp(self): @@ -128,14 +128,14 @@ def testReadWhenWriting(self): self.assertRaises(OSError, self.f.read) -class CAutoFileTests(AutoFileTests): +class CAutoFileTests(AutoFileTests, unittest.TestCase): open = io.open -class PyAutoFileTests(AutoFileTests): +class PyAutoFileTests(AutoFileTests, unittest.TestCase): open = staticmethod(pyio.open) -class OtherFileTests(unittest.TestCase): +class OtherFileTests: def testModeStrings(self): # check invalid mode strings @@ -322,22 +322,18 @@ finally: os.unlink(TESTFN) -class COtherFileTests(OtherFileTests): +class COtherFileTests(OtherFileTests, unittest.TestCase): open = io.open -class PyOtherFileTests(OtherFileTests): +class PyOtherFileTests(OtherFileTests, unittest.TestCase): open = staticmethod(pyio.open) -def test_main(): +def tearDownModule(): # Historically, these tests have been sloppy about removing TESTFN. # So get rid of it no matter what. 
- try: - run_unittest(CAutoFileTests, PyAutoFileTests, - COtherFileTests, PyOtherFileTests) - finally: - if os.path.exists(TESTFN): - os.unlink(TESTFN) + if os.path.exists(TESTFN): + os.unlink(TESTFN) if __name__ == '__main__': - test_main() + unittest.main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -841,6 +841,9 @@ - Issue #16836: Enable IPv6 support even if IPv6 is disabled on the build host. +- Issue #17163: test_file now works with unittest test discovery. + Patch by Zachary Ware. + - Issue #16925: test_configparser now works with unittest test discovery. Patch by Zachary Ware. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 20:22:39 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 20:22:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzE3MTQzOiBmaXgg?= =?utf-8?q?a_missing_import_in_the_trace_module=2E__Initial_patch_by_Berke?= =?utf-8?q?r?= Message-ID: <3Z749q6NLDz7LkP@mail.python.org> http://hg.python.org/cpython/rev/3f8b5fcbf07e changeset: 82209:3f8b5fcbf07e branch: 3.3 parent: 82207:9b3c5085b4a4 user: Ezio Melotti date: Fri Feb 15 21:20:50 2013 +0200 summary: #17143: fix a missing import in the trace module. Initial patch by Berker Peksag. 
files: Lib/test/test_trace.py | 45 ++++++++++++++++++++++++++++++ Lib/trace.py | 1 + Misc/NEWS | 3 ++ 3 files changed, 49 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_trace.py b/Lib/test/test_trace.py --- a/Lib/test/test_trace.py +++ b/Lib/test/test_trace.py @@ -1,7 +1,9 @@ import os +import io import sys from test.support import (run_unittest, TESTFN, rmtree, unlink, captured_stdout) +import tempfile import unittest import trace @@ -361,6 +363,49 @@ self.assertTrue(ignore.names(jn('bar', 'baz.py'), 'baz')) +class TestDeprecatedMethods(unittest.TestCase): + + def test_deprecated_usage(self): + sio = io.StringIO() + with self.assertWarns(DeprecationWarning): + trace.usage(sio) + self.assertIn('Usage:', sio.getvalue()) + + def test_deprecated_Ignore(self): + with self.assertWarns(DeprecationWarning): + trace.Ignore() + + def test_deprecated_modname(self): + with self.assertWarns(DeprecationWarning): + self.assertEqual("spam", trace.modname("spam")) + + def test_deprecated_fullmodname(self): + with self.assertWarns(DeprecationWarning): + self.assertEqual("spam", trace.fullmodname("spam")) + + def test_deprecated_find_lines_from_code(self): + with self.assertWarns(DeprecationWarning): + def foo(): + pass + trace.find_lines_from_code(foo.__code__, ["eggs"]) + + def test_deprecated_find_lines(self): + with self.assertWarns(DeprecationWarning): + def foo(): + pass + trace.find_lines(foo.__code__, ["eggs"]) + + def test_deprecated_find_strings(self): + with self.assertWarns(DeprecationWarning): + with tempfile.NamedTemporaryFile() as fd: + trace.find_strings(fd.name) + + def test_deprecated_find_executable_linenos(self): + with self.assertWarns(DeprecationWarning): + with tempfile.NamedTemporaryFile() as fd: + trace.find_executable_linenos(fd.name) + + def test_main(): run_unittest(__name__) diff --git a/Lib/trace.py b/Lib/trace.py --- a/Lib/trace.py +++ b/Lib/trace.py @@ -58,6 +58,7 @@ import gc import dis import pickle +from warnings import warn as _warn try: 
from time import monotonic as _time except ImportError: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -178,6 +178,9 @@ Library ------- +- Issue #17143: Fix a missing import in the trace module. Initial patch by + Berker Peksag. + - Issue #16743: Fix mmap overflow check on 32 bit Windows. - Issue #16800: tempfile.gettempdir() no longer left temporary files when -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 20:22:41 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 20:22:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MTQzOiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3Z749s2s4Jz7Lkf@mail.python.org> http://hg.python.org/cpython/rev/46e9f668aea9 changeset: 82210:46e9f668aea9 parent: 82208:f289e40b3d70 parent: 82209:3f8b5fcbf07e user: Ezio Melotti date: Fri Feb 15 21:22:22 2013 +0200 summary: #17143: merge with 3.3. files: Lib/test/test_trace.py | 45 ++++++++++++++++++++++++++++++ Lib/trace.py | 1 + Misc/NEWS | 3 ++ 3 files changed, 49 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_trace.py b/Lib/test/test_trace.py --- a/Lib/test/test_trace.py +++ b/Lib/test/test_trace.py @@ -1,7 +1,9 @@ import os +import io import sys from test.support import (run_unittest, TESTFN, rmtree, unlink, captured_stdout) +import tempfile import unittest import trace @@ -361,6 +363,49 @@ self.assertTrue(ignore.names(jn('bar', 'baz.py'), 'baz')) +class TestDeprecatedMethods(unittest.TestCase): + + def test_deprecated_usage(self): + sio = io.StringIO() + with self.assertWarns(DeprecationWarning): + trace.usage(sio) + self.assertIn('Usage:', sio.getvalue()) + + def test_deprecated_Ignore(self): + with self.assertWarns(DeprecationWarning): + trace.Ignore() + + def test_deprecated_modname(self): + with self.assertWarns(DeprecationWarning): + self.assertEqual("spam", trace.modname("spam")) + + def test_deprecated_fullmodname(self): + 
with self.assertWarns(DeprecationWarning): + self.assertEqual("spam", trace.fullmodname("spam")) + + def test_deprecated_find_lines_from_code(self): + with self.assertWarns(DeprecationWarning): + def foo(): + pass + trace.find_lines_from_code(foo.__code__, ["eggs"]) + + def test_deprecated_find_lines(self): + with self.assertWarns(DeprecationWarning): + def foo(): + pass + trace.find_lines(foo.__code__, ["eggs"]) + + def test_deprecated_find_strings(self): + with self.assertWarns(DeprecationWarning): + with tempfile.NamedTemporaryFile() as fd: + trace.find_strings(fd.name) + + def test_deprecated_find_executable_linenos(self): + with self.assertWarns(DeprecationWarning): + with tempfile.NamedTemporaryFile() as fd: + trace.find_executable_linenos(fd.name) + + def test_main(): run_unittest(__name__) diff --git a/Lib/trace.py b/Lib/trace.py --- a/Lib/trace.py +++ b/Lib/trace.py @@ -58,6 +58,7 @@ import gc import dis import pickle +from warnings import warn as _warn try: from time import monotonic as _time except ImportError: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -253,6 +253,9 @@ Library ------- +- Issue #17143: Fix a missing import in the trace module. Initial patch by + Berker Peksag. + - Issue #15220: email.feedparser's line splitting algorithm is now simpler and faster. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 21:04:39 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 21:04:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_=2317175=3A_remove_outdated_p?= =?utf-8?q?aragraph_about_issue_=238040_from_PEP_430=2E__Patch_by?= Message-ID: <3Z756H6V3Yz7LkL@mail.python.org> http://hg.python.org/peps/rev/f306777e0b6d changeset: 4741:f306777e0b6d user: Ezio Melotti date: Fri Feb 15 22:04:23 2013 +0200 summary: #17175: remove outdated paragraph about issue #8040 from PEP 430. Patch by Tshepang Lekhonkhobe. 
files: pep-0430.txt | 11 ++--------- 1 files changed, 2 insertions(+), 9 deletions(-) diff --git a/pep-0430.txt b/pep-0430.txt --- a/pep-0430.txt +++ b/pep-0430.txt @@ -33,10 +33,6 @@ the default version displayed at the docs.python.org root URL to providing the Python 3 documentation. -While efforts are under way [3_] to improve the general version switching -support in the online documentation, this PEP is technically independent -of those improvements. - Key Concerns ============ @@ -81,7 +77,7 @@ Proposal ======== -This PEP (based on an idea originally put forward back in May [4_]) is to +This PEP (based on an idea originally put forward back in May [3_]) is to *not migrate* the Python 2 specific deep links at all, and instead adopt a scheme where all URLs presented to users on docs.python.org are qualified appropriately with the relevant release series. @@ -215,10 +211,7 @@ .. [2] October 2012 discussion (http://mail.python.org/pipermail/python-ideas/2012-October/017406.html) -.. [3] Issue for easier access to other version of the same docs page - (http://bugs.python.org/issue8040) - -.. [4] Using a "/latest/" path prefix +.. [3] Using a "/latest/" path prefix (http://mail.python.org/pipermail/python-dev/2012-May/119567.html) -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 15 21:30:39 2013 From: python-checkins at python.org (antoine.pitrou) Date: Fri, 15 Feb 2013 21:30:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MjA4?= =?utf-8?q?=3A_add_a_note_about_the_termination_behaviour_of_daemon_thread?= =?utf-8?q?s=2E?= Message-ID: <3Z75hH2BGszQ4R@mail.python.org> http://hg.python.org/cpython/rev/e63c4bc81d9f changeset: 82211:e63c4bc81d9f branch: 2.7 parent: 82202:82db097cd2e0 user: Antoine Pitrou date: Fri Feb 15 21:27:18 2013 +0100 summary: Issue #17208: add a note about the termination behaviour of daemon threads. 
files: Doc/library/threading.rst | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/Doc/library/threading.rst b/Doc/library/threading.rst --- a/Doc/library/threading.rst +++ b/Doc/library/threading.rst @@ -247,6 +247,12 @@ initial value is inherited from the creating thread. The flag can be set through the :attr:`daemon` property. +.. note:: + Daemon threads are abruptly stopped at shutdown. Their resources (such + as open files, database transactions, etc.) may not be released properly. + If you want your threads to stop gracefully, make them non-daemonic and + use a suitable signalling mechanism such as an :class:`Event`. + There is a "main thread" object; this corresponds to the initial thread of control in the Python program. It is not a daemon thread. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 21:35:52 2013 From: python-checkins at python.org (antoine.pitrou) Date: Fri, 15 Feb 2013 21:35:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MjA4?= =?utf-8?q?=3A_add_a_note_about_the_termination_behaviour_of_daemon_thread?= =?utf-8?q?s=2E?= Message-ID: <3Z75pJ42Q9zQT1@mail.python.org> http://hg.python.org/cpython/rev/8753a3be4a3c changeset: 82212:8753a3be4a3c branch: 3.2 parent: 82203:efe489f87881 user: Antoine Pitrou date: Fri Feb 15 21:27:18 2013 +0100 summary: Issue #17208: add a note about the termination behaviour of daemon threads. files: Doc/library/threading.rst | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/Doc/library/threading.rst b/Doc/library/threading.rst --- a/Doc/library/threading.rst +++ b/Doc/library/threading.rst @@ -244,6 +244,12 @@ The initial value is inherited from the creating thread. The flag can be set through the :attr:`~Thread.daemon` property. +.. note:: + Daemon threads are abruptly stopped at shutdown. Their resources (such + as open files, database transactions, etc.) may not be released properly. 
+ If you want your threads to stop gracefully, make them non-daemonic and + use a suitable signalling mechanism such as an :class:`Event`. + There is a "main thread" object; this corresponds to the initial thread of control in the Python program. It is not a daemon thread. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 21:35:53 2013 From: python-checkins at python.org (antoine.pitrou) Date: Fri, 15 Feb 2013 21:35:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317208=3A_add_a_note_about_the_termination_behaviour_o?= =?utf-8?q?f_daemon_threads=2E?= Message-ID: <3Z75pK6gFHzSmJ@mail.python.org> http://hg.python.org/cpython/rev/917ae89e59ce changeset: 82213:917ae89e59ce branch: 3.3 parent: 82209:3f8b5fcbf07e parent: 82212:8753a3be4a3c user: Antoine Pitrou date: Fri Feb 15 21:31:33 2013 +0100 summary: Issue #17208: add a note about the termination behaviour of daemon threads. files: Doc/library/threading.rst | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/Doc/library/threading.rst b/Doc/library/threading.rst --- a/Doc/library/threading.rst +++ b/Doc/library/threading.rst @@ -174,6 +174,12 @@ through the :attr:`~Thread.daemon` property or the *daemon* constructor argument. +.. note:: + Daemon threads are abruptly stopped at shutdown. Their resources (such + as open files, database transactions, etc.) may not be released properly. + If you want your threads to stop gracefully, make them non-daemonic and + use a suitable signalling mechanism such as an :class:`Event`. + There is a "main thread" object; this corresponds to the initial thread of control in the Python program. It is not a daemon thread. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 21:35:55 2013 From: python-checkins at python.org (antoine.pitrou) Date: Fri, 15 Feb 2013 21:35:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317208=3A_add_a_note_about_the_termination_behav?= =?utf-8?q?iour_of_daemon_threads=2E?= Message-ID: <3Z75pM2BrczSZc@mail.python.org> http://hg.python.org/cpython/rev/8b85f10b5341 changeset: 82214:8b85f10b5341 parent: 82210:46e9f668aea9 parent: 82213:917ae89e59ce user: Antoine Pitrou date: Fri Feb 15 21:32:30 2013 +0100 summary: Issue #17208: add a note about the termination behaviour of daemon threads. files: Doc/library/threading.rst | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/Doc/library/threading.rst b/Doc/library/threading.rst --- a/Doc/library/threading.rst +++ b/Doc/library/threading.rst @@ -174,6 +174,12 @@ through the :attr:`~Thread.daemon` property or the *daemon* constructor argument. +.. note:: + Daemon threads are abruptly stopped at shutdown. Their resources (such + as open files, database transactions, etc.) may not be released properly. + If you want your threads to stop gracefully, make them non-daemonic and + use a suitable signalling mechanism such as an :class:`Event`. + There is a "main thread" object; this corresponds to the initial thread of control in the Python program. It is not a daemon thread. 
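The note merged above across 2.7, 3.2, 3.3 and default recommends non-daemonic threads plus a signalling mechanism such as an Event. A minimal sketch of that pattern (the `worker`/`stop` names are illustrative, not from the patch):

```python
import threading

stop = threading.Event()

def worker():
    # Do one bounded unit of work per iteration; Event.wait() doubles as
    # an interruptible sleep and returns True once stop has been set.
    while not stop.wait(timeout=0.01):
        pass  # one unit of work would go here

t = threading.Thread(target=worker)  # non-daemonic by default
t.start()
stop.set()   # ask the thread to finish its current unit and exit
t.join()     # open files, transactions, etc. are released before shutdown
```

Because the thread is not a daemon, interpreter shutdown waits for it; the Event is what keeps that wait short and the exit graceful.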
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 22:38:37 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 22:38:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MTc4OiB1cGRh?= =?utf-8?q?te_any=28=29/all=28=29_docstrings_to_document_their_behavior_wi?= =?utf-8?q?th_empty?= Message-ID: <3Z77Bj6XWQz7Lkf@mail.python.org> http://hg.python.org/cpython/rev/0f7eec78569c changeset: 82215:0f7eec78569c branch: 2.7 parent: 82211:e63c4bc81d9f user: Ezio Melotti date: Fri Feb 15 23:35:14 2013 +0200 summary: #17178: update any()/all() docstrings to document their behavior with empty iterables. Patch by Ankur Ankan. files: Misc/ACKS | 1 + Python/bltinmodule.c | 6 ++++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -33,6 +33,7 @@ Erik Andersén Oliver Andrich Ross Andrus +Ankur Ankan Heidi Annexstad Éric Araujo Jeffrey Armstrong diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c --- a/Python/bltinmodule.c +++ b/Python/bltinmodule.c @@ -120,7 +120,8 @@ PyDoc_STRVAR(all_doc, "all(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for all values x in the iterable."); +Return True if bool(x) is True for all values x in the iterable.\n\ +If the iterable is empty, return True."); static PyObject * builtin_any(PyObject *self, PyObject *v) @@ -162,7 +163,8 @@ PyDoc_STRVAR(any_doc, "any(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for any x in the iterable."); +Return True if bool(x) is True for any x in the iterable.\n\ +If the iterable is empty, return False."); static PyObject * builtin_apply(PyObject *self, PyObject *args) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 22:38:39 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 22:38:39 +0100 (CET) Subject: [Python-checkins] 
=?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MTc4OiB1cGRh?= =?utf-8?q?te_any=28=29/all=28=29_docstrings_to_document_their_behavior_wi?= =?utf-8?q?th_empty?= Message-ID: <3Z77Bl26qFz7Llx@mail.python.org> http://hg.python.org/cpython/rev/1d4849f9e37d changeset: 82216:1d4849f9e37d branch: 3.2 parent: 82212:8753a3be4a3c user: Ezio Melotti date: Fri Feb 15 23:35:14 2013 +0200 summary: #17178: update any()/all() docstrings to document their behavior with empty iterables. Patch by Ankur Ankan. files: Misc/ACKS | 1 + Python/bltinmodule.c | 6 ++++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -37,6 +37,7 @@ Oliver Andrich Ross Andrus Jérémy Anger +Ankur Ankan Jon Anglin Heidi Annexstad Éric Araujo diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c --- a/Python/bltinmodule.c +++ b/Python/bltinmodule.c @@ -262,7 +262,8 @@ PyDoc_STRVAR(all_doc, "all(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for all values x in the iterable."); +Return True if bool(x) is True for all values x in the iterable.\n\ +If the iterable is empty, return True."); static PyObject * builtin_any(PyObject *self, PyObject *v) @@ -304,7 +305,8 @@ PyDoc_STRVAR(any_doc, "any(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for any x in the iterable."); +Return True if bool(x) is True for any x in the iterable.\n\ +If the iterable is empty, return False."); static PyObject * builtin_ascii(PyObject *self, PyObject *v) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 22:38:40 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 22:38:40 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317178=3A_merge_with_3=2E2=2E?= Message-ID: <3Z77Bm4pdkz7LmK@mail.python.org> http://hg.python.org/cpython/rev/34cfe145b286 changeset: 82217:34cfe145b286 branch: 3.3 parent: 82213:917ae89e59ce parent: 82216:1d4849f9e37d 
user: Ezio Melotti date: Fri Feb 15 23:38:05 2013 +0200 summary: #17178: merge with 3.2. files: Misc/ACKS | 1 + Python/bltinmodule.c | 6 ++++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -41,6 +41,7 @@ Ross Andrus Juancarlo Añez Jérémy Anger +Ankur Ankan Jon Anglin Heidi Annexstad Éric Araujo diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c --- a/Python/bltinmodule.c +++ b/Python/bltinmodule.c @@ -263,7 +263,8 @@ PyDoc_STRVAR(all_doc, "all(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for all values x in the iterable."); +Return True if bool(x) is True for all values x in the iterable.\n\ +If the iterable is empty, return True."); static PyObject * builtin_any(PyObject *self, PyObject *v) @@ -305,7 +306,8 @@ PyDoc_STRVAR(any_doc, "any(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for any x in the iterable."); +Return True if bool(x) is True for any x in the iterable.\n\ +If the iterable is empty, return False."); static PyObject * builtin_ascii(PyObject *self, PyObject *v) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 15 22:38:42 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 15 Feb 2013 22:38:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MTc4OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3Z77Bp0WRwz7LmN@mail.python.org> http://hg.python.org/cpython/rev/168efd87e051 changeset: 82218:168efd87e051 parent: 82214:8b85f10b5341 parent: 82217:34cfe145b286 user: Ezio Melotti date: Fri Feb 15 23:38:23 2013 +0200 summary: #17178: merge with 3.3. 
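The docstring change being merged here documents the vacuous-truth convention, which is easy to confirm interactively:

```python
# all() over an empty iterable is True (no counterexample exists),
# while any() is False (no witness exists) -- exactly what the
# updated docstrings now spell out.
print(all([]))   # True
print(any([]))   # False

# Non-empty iterables behave as usual.
print(all([1, 0]))  # False: a falsy element defeats all()
print(any([1, 0]))  # True: a truthy element satisfies any()
```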
files: Misc/ACKS | 1 + Python/bltinmodule.c | 6 ++++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -41,6 +41,7 @@ Ross Andrus Juancarlo Añez Jérémy Anger +Ankur Ankan Jon Anglin Heidi Annexstad Éric Araujo diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c --- a/Python/bltinmodule.c +++ b/Python/bltinmodule.c @@ -263,7 +263,8 @@ PyDoc_STRVAR(all_doc, "all(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for all values x in the iterable."); +Return True if bool(x) is True for all values x in the iterable.\n\ +If the iterable is empty, return True."); static PyObject * builtin_any(PyObject *self, PyObject *v) @@ -305,7 +306,8 @@ PyDoc_STRVAR(any_doc, "any(iterable) -> bool\n\ \n\ -Return True if bool(x) is True for any x in the iterable."); +Return True if bool(x) is True for any x in the iterable.\n\ +If the iterable is empty, return False."); static PyObject * builtin_ascii(PyObject *self, PyObject *v) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 05:06:10 2013 From: python-checkins at python.org (daniel.holth) Date: Sat, 16 Feb 2013 05:06:10 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_edit_wheel_peps_for_posting?= Message-ID: <3Z7Hnt1Swrz7LmR@mail.python.org> http://hg.python.org/peps/rev/6648b89fa919 changeset: 4742:6648b89fa919 user: Daniel Holth date: Fri Feb 15 23:02:25 2013 -0500 summary: edit wheel peps for posting files: pep-0425.txt | 2 +- pep-0427.txt | 18 ++++++++++++------ 2 files changed, 13 insertions(+), 7 deletions(-) diff --git a/pep-0425.txt b/pep-0425.txt --- a/pep-0425.txt +++ b/pep-0425.txt @@ -9,7 +9,7 @@ Content-Type: text/x-rst Created: 27-Jul-2012 Python-Version: 3.4 -Post-History: 8-Aug-2012 +Post-History: 8-Aug-2012, 18-Oct-2012, 15-Feb-2013 Abstract diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -9,7 +9,7 @@ Type: Standards Track Content-Type: text/x-rst 
Created: 20-Sep-2012 -Post-History: +Post-History: 18-Oct-2012, 15-Feb-2013 Abstract @@ -182,12 +182,18 @@ Wheel-Version: 0.1 Generator: bdist_wheel 0.7 Root-Is-Purelib: true + Tag: py2-none-any + Tag: py3-none-any + Build: 1 -#. Wheel-Version is the version number of the Wheel specification. - Generator is the name and optionally the version of the software - that produced the archive. Root-Is-Purelib is true if the top level - directory of the archive should be installed into purelib; - otherwise the root should be installed into platlib. +#. ``Wheel-Version`` is the version number of the Wheel specification. + ``Generator`` is the name and optionally the version of the software + that produced the archive. ``Root-Is-Purelib`` is true if the top + level directory of the archive should be installed into purelib; + otherwise the root should be installed into platlib. ``Tag`` is the + wheel's expanded compatibility tags; in the example the filename would + contain ``py2.py3-none-any``. ``Build`` is the build number and is + omitted if there is no build number. #. A wheel installer should warn if Wheel-Version is greater than the version it supports, and must fail if Wheel-Version has a greater major version than the version it supports. 
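The WHEEL key: value format and the version-check rule described above can be sketched with a tiny parser. This is an illustration only, not the reference installer; `parse_wheel` and `check_version` are hypothetical names, and the sample text mirrors the example from the diff:

```python
SUPPORTED = (1, 0)  # highest Wheel-Version this hypothetical installer handles

WHEEL_TEXT = """\
Wheel-Version: 1.0
Generator: bdist_wheel 1.0
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any
Build: 1
"""

def parse_wheel(text):
    # Repeated keys (Tag in the example) accumulate into lists.
    meta = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(":")
        meta.setdefault(key.strip(), []).append(value.strip())
    return meta

def check_version(meta):
    major, minor = map(int, meta["Wheel-Version"][0].split("."))
    if major > SUPPORTED[0]:
        # "must fail" on a greater major version
        raise ValueError("incompatible Wheel-Version")
    if (major, minor) > SUPPORTED:
        print("warning: wheel is newer than this installer")  # "should warn"
    return major, minor

meta = parse_wheel(WHEEL_TEXT)
check_version(meta)
```

The split between a hard failure (major bump) and a mere warning (minor bump) is what lets the format evolve without stranding older installers.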
-- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Sat Feb 16 06:02:28 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Sat, 16 Feb 2013 06:02:28 +0100 Subject: [Python-checkins] Daily reference leaks (168efd87e051): sum=2 Message-ID: results for 168efd87e051 on branch "default" -------------------------------------------- test_concurrent_futures leaked [0, 0, -2] memory blocks, sum=-2 test_httplib leaked [-1, 1, 0] references, sum=0 test_httplib leaked [0, 2, 2] memory blocks, sum=4 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflognclHFq', '-x'] From python-checkins at python.org Sat Feb 16 12:25:08 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 16 Feb 2013 12:25:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Accept_PEP_427_=28wheel_forma?= =?utf-8?q?t=29?= Message-ID: <3Z7TXN3P3gzRH6@mail.python.org> http://hg.python.org/peps/rev/d272d7a97e0c changeset: 4743:d272d7a97e0c user: Nick Coghlan date: Sat Feb 16 21:14:38 2013 +1000 summary: Accept PEP 427 (wheel format) files: pep-0427.txt | 45 +++++++++++++++++++++++---------------- 1 files changed, 26 insertions(+), 19 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -1,16 +1,16 @@ PEP: 427 -Title: The Wheel Binary Package Format 0.1 +Title: The Wheel Binary Package Format 1.0 Version: $Revision$ Last-Modified: $Date$ Author: Daniel Holth BDFL-Delegate: Nick Coghlan Discussions-To: -Status: Draft +Status: Accepted Type: Standards Track Content-Type: text/x-rst Created: 20-Sep-2012 Post-History: 18-Oct-2012, 15-Feb-2013 - +Resolution: http://mail.python.org/pipermail/python-dev/2013-February/124103.html Abstract ======== @@ -26,13 +26,11 @@ out onto their final paths at any later time. -Note -==== +PEP Acceptance +============== -This draft PEP describes version 0.1 of the "wheel" format. 
When the PEP -is accepted, the version will be changed to 1.0. (The major version -is used to indicate potentially backwards-incompatible changes to the -format.) +This PEP was accepted, and the defined wheel version updated to 1.0, by +Nick Coghlan on 16th February, 2013 [1]_ Rationale @@ -175,25 +173,26 @@ ``#!python`` rewriting at install time. They may have any or no extension. #. ``{distribution}-{version}.dist-info/METADATA`` is Metadata version 1.1 - or greater (PEP 314, PEP 345, PEP 426) format metadata. + or greater format metadata. #. ``{distribution}-{version}.dist-info/WHEEL`` is metadata about the archive itself in the same basic key: value format:: - Wheel-Version: 0.1 - Generator: bdist_wheel 0.7 + Wheel-Version: 1.0 + Generator: bdist_wheel 1.0 Root-Is-Purelib: true Tag: py2-none-any Tag: py3-none-any Build: 1 #. ``Wheel-Version`` is the version number of the Wheel specification. - ``Generator`` is the name and optionally the version of the software - that produced the archive. ``Root-Is-Purelib`` is true if the top - level directory of the archive should be installed into purelib; - otherwise the root should be installed into platlib. ``Tag`` is the - wheel's expanded compatibility tags; in the example the filename would - contain ``py2.py3-none-any``. ``Build`` is the build number and is - omitted if there is no build number. +#. ``Generator`` is the name and optionally the version of the software + that produced the archive. +#. ``Root-Is-Purelib`` is true if the top level directory of the archive + should be installed into purelib; otherwise the root should be installed + into platlib. +#. ``Tag`` is the wheel's expanded compatibility tags; in the example the + filename would contain ``py2.py3-none-any``. +#. ``Build`` is the build number and is omitted if there is no build number. #. 
A wheel installer should warn if Wheel-Version is greater than the version it supports, and must fail if Wheel-Version has a greater major version than the version it supports. @@ -348,6 +347,14 @@ Signed packages are only a basic building block in a secure package update system. Wheel only provides the building block. + +References +========== + +.. [1] PEP acceptance + (http://mail.python.org/pipermail/python-dev/2013-February/124103.html) + + Appendix ======== -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 16 15:59:42 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 15:59:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzEzMTY5?= =?utf-8?q?=3A_The_maximal_repetition_number_in_a_regular_expression_has_b?= =?utf-8?q?een?= Message-ID: <3Z7ZHy595QzQj1@mail.python.org> http://hg.python.org/cpython/rev/c1b3d25882ca changeset: 82219:c1b3d25882ca branch: 2.7 parent: 82215:0f7eec78569c user: Serhiy Storchaka date: Sat Feb 16 16:47:15 2013 +0200 summary: Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). 
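The new ceiling can be observed directly: repetition counts above the old 65535 limit now compile and match, while astronomically large counts are rejected at compile time. A small check mirroring the assertions in the patch's own test suite (on the version being patched the rejection is an `OverflowError`; `re.error` is caught as well to hedge across interpreter versions):

```python
import re

# Counts above the old 65535 ceiling now compile and match.
m = re.match(r".{65536}", "x" * 100000)
print(m.span())  # (0, 65536)

# 2**128 overflows the internal SRE code word, so compilation is rejected.
try:
    re.compile(r".{%d}" % 2**128)
except (OverflowError, re.error) as exc:
    print("rejected:", type(exc).__name__)
```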
files: Lib/sre_compile.py | 1 + Lib/sre_constants.py | 4 --- Lib/sre_parse.py | 9 ++++++- Lib/test/test_re.py | 33 +++++++++++++++++++++++++++++++- Misc/NEWS | 4 +++ Modules/_sre.c | 18 +++++++++++----- Modules/sre.h | 14 +++++++++++- 7 files changed, 68 insertions(+), 15 deletions(-) diff --git a/Lib/sre_compile.py b/Lib/sre_compile.py --- a/Lib/sre_compile.py +++ b/Lib/sre_compile.py @@ -13,6 +13,7 @@ import _sre, sys import sre_parse from sre_constants import * +from _sre import MAXREPEAT assert _sre.MAGIC == MAGIC, "SRE module mismatch" diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,10 +15,6 @@ MAGIC = 20031017 -# max code word in this release - -MAXREPEAT = 65535 - # SRE standard exception (access as sre.error) # should this really be here? diff --git a/Lib/sre_parse.py b/Lib/sre_parse.py --- a/Lib/sre_parse.py +++ b/Lib/sre_parse.py @@ -15,6 +15,7 @@ import sys from sre_constants import * +from _sre import MAXREPEAT SPECIAL_CHARS = ".\\[{()*+?^$|" REPEAT_CHARS = "*+?{" @@ -498,10 +499,14 @@ continue if lo: min = int(lo) + if min >= MAXREPEAT: + raise OverflowError("the repetition number is too large") if hi: max = int(hi) - if max < min: - raise error, "bad repeat interval" + if max >= MAXREPEAT: + raise OverflowError("the repetition number is too large") + if max < min: + raise error("bad repeat interval") else: raise error, "not supported" # figure out which item to repeat diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -1,5 +1,5 @@ from test.test_support import verbose, run_unittest, import_module -from test.test_support import precisionbigmemtest, _2G +from test.test_support import precisionbigmemtest, _2G, cpython_only import re from re import Scanner import sys @@ -847,6 +847,37 @@ self.assertEqual(n, size + 1) + def test_repeat_minmax_overflow(self): + # Issue #13169 + string = "x" * 100000 + 
self.assertEqual(re.match(r".{65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{,65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65535,}?", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{,65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{65536,}?", string).span(), (0, 65536)) + # 2**128 should be big enough to overflow both SRE_CODE and Py_ssize_t. + self.assertRaises(OverflowError, re.compile, r".{%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,%d}" % (2**129, 2**128)) + + @cpython_only + def test_repeat_minmax_overflow_maxrepeat(self): + try: + from _sre import MAXREPEAT + except ImportError: + self.skipTest('requires _sre.MAXREPEAT constant') + string = "x" * 100000 + self.assertIsNone(re.match(r".{%d}" % (MAXREPEAT - 1), string)) + self.assertEqual(re.match(r".{,%d}" % (MAXREPEAT - 1), string).span(), + (0, 100000)) + self.assertIsNone(re.match(r".{%d,}?" % (MAXREPEAT - 1), string)) + self.assertRaises(OverflowError, re.compile, r".{%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" % MAXREPEAT) + + def run_re_tests(): from test.re_tests import tests, SUCCEED, FAIL, SYNTAX_ERROR if verbose: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -205,6 +205,10 @@ Library ------- +- Issue #13169: The maximal repetition number in a regular expression has been + increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on + 64-bit). + - Issue #16743: Fix mmap overflow check on 32 bit Windows. 
- Issue #11311: StringIO.readline(0) now returns an empty string as all other diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -524,7 +524,7 @@ Py_ssize_t i; /* adjust end */ - if (maxcount < end - ptr && maxcount != 65535) + if (maxcount < end - ptr && maxcount != SRE_MAXREPEAT) end = ptr + maxcount; switch (pattern[0]) { @@ -1139,7 +1139,7 @@ } else { /* general case */ LASTMARK_SAVE(); - while ((Py_ssize_t)ctx->pattern[2] == 65535 + while ((Py_ssize_t)ctx->pattern[2] == SRE_MAXREPEAT || ctx->count <= (Py_ssize_t)ctx->pattern[2]) { state->ptr = ctx->ptr; DO_JUMP(JUMP_MIN_REPEAT_ONE,jump_min_repeat_one, @@ -1225,7 +1225,7 @@ } if ((ctx->count < ctx->u.rep->pattern[2] || - ctx->u.rep->pattern[2] == 65535) && + ctx->u.rep->pattern[2] == SRE_MAXREPEAT) && state->ptr != ctx->u.rep->last_ptr) { /* we may have enough matches, but if we can match another item, do so */ @@ -1303,7 +1303,7 @@ LASTMARK_RESTORE(); if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != 65535) + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) RETURN_FAILURE; ctx->u.rep->count = ctx->count; @@ -3042,7 +3042,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-4, groups)) FAIL; @@ -3061,7 +3061,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-3, groups)) FAIL; @@ -3938,6 +3938,12 @@ Py_DECREF(x); } + x = PyLong_FromUnsignedLong(SRE_MAXREPEAT); + if (x) { + PyDict_SetItemString(d, "MAXREPEAT", x); + Py_DECREF(x); + } + x = PyString_FromString(copyright); if (x) { PyDict_SetItemString(d, "copyright", x); diff --git a/Modules/sre.h b/Modules/sre.h --- a/Modules/sre.h +++ b/Modules/sre.h @@ -16,9 +16,19 @@ /* size of a code word (must be unsigned short or larger, and large enough to hold a UCS4 character) */ #ifdef Py_USING_UNICODE -#define SRE_CODE Py_UCS4 +# define SRE_CODE 
Py_UCS4 +# if SIZEOF_SIZE_T > 4 +# define SRE_MAXREPEAT (~(SRE_CODE)0) +# else +# define SRE_MAXREPEAT ((SRE_CODE)PY_SSIZE_T_MAX + 1u) +# endif #else -#define SRE_CODE unsigned long +# define SRE_CODE unsigned long +# if SIZEOF_SIZE_T > SIZEOF_LONG +# define SRE_MAXREPEAT (~(SRE_CODE)0) +# else +# define SRE_MAXREPEAT ((SRE_CODE)PY_SSIZE_T_MAX + 1u) +# endif #endif typedef struct { -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 15:59:44 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 15:59:44 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzEzMTY5?= =?utf-8?q?=3A_The_maximal_repetition_number_in_a_regular_expression_has_b?= =?utf-8?q?een?= Message-ID: <3Z7ZJ02QxqzQlp@mail.python.org> http://hg.python.org/cpython/rev/472a7c652cbd changeset: 82220:472a7c652cbd branch: 3.2 parent: 82216:1d4849f9e37d user: Serhiy Storchaka date: Sat Feb 16 16:47:47 2013 +0200 summary: Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). files: Lib/sre_compile.py | 1 + Lib/sre_constants.py | 4 --- Lib/sre_parse.py | 9 ++++++- Lib/test/test_re.py | 34 +++++++++++++++++++++++++++++++- Misc/NEWS | 4 +++ Modules/_sre.c | 18 +++++++++++----- Modules/sre.h | 5 ++++ 7 files changed, 62 insertions(+), 13 deletions(-) diff --git a/Lib/sre_compile.py b/Lib/sre_compile.py --- a/Lib/sre_compile.py +++ b/Lib/sre_compile.py @@ -13,6 +13,7 @@ import _sre, sys import sre_parse from sre_constants import * +from _sre import MAXREPEAT assert _sre.MAGIC == MAGIC, "SRE module mismatch" diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,10 +15,6 @@ MAGIC = 20031017 -# max code word in this release - -MAXREPEAT = 65535 - # SRE standard exception (access as sre.error) # should this really be here? 
diff --git a/Lib/sre_parse.py b/Lib/sre_parse.py --- a/Lib/sre_parse.py +++ b/Lib/sre_parse.py @@ -15,6 +15,7 @@ import sys from sre_constants import * +from _sre import MAXREPEAT SPECIAL_CHARS = ".\\[{()*+?^$|" REPEAT_CHARS = "*+?{" @@ -505,10 +506,14 @@ continue if lo: min = int(lo) + if min >= MAXREPEAT: + raise OverflowError("the repetition number is too large") if hi: max = int(hi) - if max < min: - raise error("bad repeat interval") + if max >= MAXREPEAT: + raise OverflowError("the repetition number is too large") + if max < min: + raise error("bad repeat interval") else: raise error("not supported") # figure out which item to repeat diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -1,4 +1,5 @@ -from test.support import verbose, run_unittest, gc_collect, bigmemtest, _2G +from test.support import verbose, run_unittest, gc_collect, bigmemtest, _2G, \ + cpython_only import io import re from re import Scanner @@ -883,6 +884,37 @@ self.assertEqual(n, size + 1) + def test_repeat_minmax_overflow(self): + # Issue #13169 + string = "x" * 100000 + self.assertEqual(re.match(r".{65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{,65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65535,}?", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{,65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{65536,}?", string).span(), (0, 65536)) + # 2**128 should be big enough to overflow both SRE_CODE and Py_ssize_t. + self.assertRaises(OverflowError, re.compile, r".{%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" 
% 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,%d}" % (2**129, 2**128)) + + @cpython_only + def test_repeat_minmax_overflow_maxrepeat(self): + try: + from _sre import MAXREPEAT + except ImportError: + self.skipTest('requires _sre.MAXREPEAT constant') + string = "x" * 100000 + self.assertIsNone(re.match(r".{%d}" % (MAXREPEAT - 1), string)) + self.assertEqual(re.match(r".{,%d}" % (MAXREPEAT - 1), string).span(), + (0, 100000)) + self.assertIsNone(re.match(r".{%d,}?" % (MAXREPEAT - 1), string)) + self.assertRaises(OverflowError, re.compile, r".{%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" % MAXREPEAT) + + def run_re_tests(): from test.re_tests import tests, SUCCEED, FAIL, SYNTAX_ERROR if verbose: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -224,6 +224,10 @@ Library ------- +- Issue #13169: The maximal repetition number in a regular expression has been + increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on + 64-bit). + - Issue #16743: Fix mmap overflow check on 32 bit Windows. 
- Issue #16800: tempfile.gettempdir() no longer left temporary files when diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -517,7 +517,7 @@ Py_ssize_t i; /* adjust end */ - if (maxcount < end - ptr && maxcount != 65535) + if (maxcount < end - ptr && maxcount != SRE_MAXREPEAT) end = ptr + maxcount; switch (pattern[0]) { @@ -1132,7 +1132,7 @@ } else { /* general case */ LASTMARK_SAVE(); - while ((Py_ssize_t)ctx->pattern[2] == 65535 + while ((Py_ssize_t)ctx->pattern[2] == SRE_MAXREPEAT || ctx->count <= (Py_ssize_t)ctx->pattern[2]) { state->ptr = ctx->ptr; DO_JUMP(JUMP_MIN_REPEAT_ONE,jump_min_repeat_one, @@ -1218,7 +1218,7 @@ } if ((ctx->count < ctx->u.rep->pattern[2] || - ctx->u.rep->pattern[2] == 65535) && + ctx->u.rep->pattern[2] == SRE_MAXREPEAT) && state->ptr != ctx->u.rep->last_ptr) { /* we may have enough matches, but if we can match another item, do so */ @@ -1296,7 +1296,7 @@ LASTMARK_RESTORE(); if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != 65535) + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) RETURN_FAILURE; ctx->u.rep->count = ctx->count; @@ -3072,7 +3072,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-4, groups)) FAIL; @@ -3091,7 +3091,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-3, groups)) FAIL; @@ -3979,6 +3979,12 @@ Py_DECREF(x); } + x = PyLong_FromUnsignedLong(SRE_MAXREPEAT); + if (x) { + PyDict_SetItemString(d, "MAXREPEAT", x); + Py_DECREF(x); + } + x = PyUnicode_FromString(copyright); if (x) { PyDict_SetItemString(d, "copyright", x); diff --git a/Modules/sre.h b/Modules/sre.h --- a/Modules/sre.h +++ b/Modules/sre.h @@ -16,6 +16,11 @@ /* size of a code word (must be unsigned short or larger, and large enough to hold a UCS4 character) */ #define SRE_CODE Py_UCS4 +#if SIZEOF_SIZE_T > 4 +# define SRE_MAXREPEAT 
(~(SRE_CODE)0) +#else +# define SRE_MAXREPEAT ((SRE_CODE)PY_SSIZE_T_MAX + 1u) +#endif typedef struct { PyObject_VAR_HEAD -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 15:59:45 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 15:59:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2313169=3A_The_maximal_repetition_number_in_a_regular_e?= =?utf-8?q?xpression_has_been?= Message-ID: <3Z7ZJ16WKHzRPy@mail.python.org> http://hg.python.org/cpython/rev/b78c321ee9a5 changeset: 82221:b78c321ee9a5 branch: 3.3 parent: 82217:34cfe145b286 parent: 82220:472a7c652cbd user: Serhiy Storchaka date: Sat Feb 16 16:54:33 2013 +0200 summary: Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). files: Lib/sre_compile.py | 1 + Lib/sre_constants.py | 4 --- Lib/sre_parse.py | 9 ++++++- Lib/test/test_re.py | 34 +++++++++++++++++++++++++++++++- Misc/NEWS | 4 +++ Modules/_sre.c | 18 +++++++++++----- Modules/sre.h | 5 ++++ 7 files changed, 62 insertions(+), 13 deletions(-) diff --git a/Lib/sre_compile.py b/Lib/sre_compile.py --- a/Lib/sre_compile.py +++ b/Lib/sre_compile.py @@ -13,6 +13,7 @@ import _sre, sys import sre_parse from sre_constants import * +from _sre import MAXREPEAT assert _sre.MAGIC == MAGIC, "SRE module mismatch" diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,10 +15,6 @@ MAGIC = 20031017 -# max code word in this release - -MAXREPEAT = 65535 - # SRE standard exception (access as sre.error) # should this really be here? 
diff --git a/Lib/sre_parse.py b/Lib/sre_parse.py --- a/Lib/sre_parse.py +++ b/Lib/sre_parse.py @@ -15,6 +15,7 @@ import sys from sre_constants import * +from _sre import MAXREPEAT SPECIAL_CHARS = ".\\[{()*+?^$|" REPEAT_CHARS = "*+?{" @@ -537,10 +538,14 @@ continue if lo: min = int(lo) + if min >= MAXREPEAT: + raise OverflowError("the repetition number is too large") if hi: max = int(hi) - if max < min: - raise error("bad repeat interval") + if max >= MAXREPEAT: + raise OverflowError("the repetition number is too large") + if max < min: + raise error("bad repeat interval") else: raise error("not supported") # figure out which item to repeat diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -1,4 +1,5 @@ -from test.support import verbose, run_unittest, gc_collect, bigmemtest, _2G +from test.support import verbose, run_unittest, gc_collect, bigmemtest, _2G, \ + cpython_only import io import re from re import Scanner @@ -980,6 +981,37 @@ self.assertEqual(re.findall(r"(?i)(a)\1", "aa \u0100"), ['a']) self.assertEqual(re.match(r"(?s).{1,3}", "\u0100\u0100").span(), (0, 2)) + def test_repeat_minmax_overflow(self): + # Issue #13169 + string = "x" * 100000 + self.assertEqual(re.match(r".{65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{,65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65535,}?", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{,65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{65536,}?", string).span(), (0, 65536)) + # 2**128 should be big enough to overflow both SRE_CODE and Py_ssize_t. + self.assertRaises(OverflowError, re.compile, r".{%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" 
% 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,%d}" % (2**129, 2**128)) + + @cpython_only + def test_repeat_minmax_overflow_maxrepeat(self): + try: + from _sre import MAXREPEAT + except ImportError: + self.skipTest('requires _sre.MAXREPEAT constant') + string = "x" * 100000 + self.assertIsNone(re.match(r".{%d}" % (MAXREPEAT - 1), string)) + self.assertEqual(re.match(r".{,%d}" % (MAXREPEAT - 1), string).span(), + (0, 100000)) + self.assertIsNone(re.match(r".{%d,}?" % (MAXREPEAT - 1), string)) + self.assertRaises(OverflowError, re.compile, r".{%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" % MAXREPEAT) + + def run_re_tests(): from test.re_tests import tests, SUCCEED, FAIL, SYNTAX_ERROR if verbose: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -178,6 +178,10 @@ Library ------- +- Issue #13169: The maximal repetition number in a regular expression has been + increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on + 64-bit). + - Issue #17143: Fix a missing import in the trace module. Initial patch by Berker Peksag. 
diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -492,7 +492,7 @@ Py_ssize_t i; /* adjust end */ - if (maxcount < (end - ptr) / state->charsize && maxcount != 65535) + if (maxcount < (end - ptr) / state->charsize && maxcount != SRE_MAXREPEAT) end = ptr + maxcount*state->charsize; switch (pattern[0]) { @@ -1109,7 +1109,7 @@ } else { /* general case */ LASTMARK_SAVE(); - while ((Py_ssize_t)ctx->pattern[2] == 65535 + while ((Py_ssize_t)ctx->pattern[2] == SRE_MAXREPEAT || ctx->count <= (Py_ssize_t)ctx->pattern[2]) { state->ptr = ctx->ptr; DO_JUMP(JUMP_MIN_REPEAT_ONE,jump_min_repeat_one, @@ -1195,7 +1195,7 @@ } if ((ctx->count < ctx->u.rep->pattern[2] || - ctx->u.rep->pattern[2] == 65535) && + ctx->u.rep->pattern[2] == SRE_MAXREPEAT) && state->ptr != ctx->u.rep->last_ptr) { /* we may have enough matches, but if we can match another item, do so */ @@ -1273,7 +1273,7 @@ LASTMARK_RESTORE(); if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != 65535) + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) RETURN_FAILURE; ctx->u.rep->count = ctx->count; @@ -3037,7 +3037,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-4, groups)) FAIL; @@ -3056,7 +3056,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-3, groups)) FAIL; @@ -3942,6 +3942,12 @@ Py_DECREF(x); } + x = PyLong_FromUnsignedLong(SRE_MAXREPEAT); + if (x) { + PyDict_SetItemString(d, "MAXREPEAT", x); + Py_DECREF(x); + } + x = PyUnicode_FromString(copyright); if (x) { PyDict_SetItemString(d, "copyright", x); diff --git a/Modules/sre.h b/Modules/sre.h --- a/Modules/sre.h +++ b/Modules/sre.h @@ -16,6 +16,11 @@ /* size of a code word (must be unsigned short or larger, and large enough to hold a UCS4 character) */ #define SRE_CODE Py_UCS4 +#if SIZEOF_SIZE_T > 4 +# define SRE_MAXREPEAT (~(SRE_CODE)0) +#else 
+# define SRE_MAXREPEAT ((SRE_CODE)PY_SSIZE_T_MAX + 1u) +#endif typedef struct { PyObject_VAR_HEAD -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 15:59:47 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 15:59:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2313169=3A_The_maximal_repetition_number_in_a_reg?= =?utf-8?q?ular_expression_has_been?= Message-ID: <3Z7ZJ33j4JzRQF@mail.python.org> http://hg.python.org/cpython/rev/ca0307905cd7 changeset: 82222:ca0307905cd7 parent: 82218:168efd87e051 parent: 82221:b78c321ee9a5 user: Serhiy Storchaka date: Sat Feb 16 16:55:54 2013 +0200 summary: Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). files: Lib/sre_compile.py | 1 + Lib/sre_constants.py | 4 --- Lib/sre_parse.py | 9 ++++++- Lib/test/test_re.py | 34 +++++++++++++++++++++++++++++++- Misc/NEWS | 4 +++ Modules/_sre.c | 18 +++++++++++----- Modules/sre.h | 5 ++++ 7 files changed, 62 insertions(+), 13 deletions(-) diff --git a/Lib/sre_compile.py b/Lib/sre_compile.py --- a/Lib/sre_compile.py +++ b/Lib/sre_compile.py @@ -13,6 +13,7 @@ import _sre, sys import sre_parse from sre_constants import * +from _sre import MAXREPEAT assert _sre.MAGIC == MAGIC, "SRE module mismatch" diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,10 +15,6 @@ MAGIC = 20031017 -# max code word in this release - -MAXREPEAT = 65535 - # SRE standard exception (access as sre.error) # should this really be here? 
diff --git a/Lib/sre_parse.py b/Lib/sre_parse.py --- a/Lib/sre_parse.py +++ b/Lib/sre_parse.py @@ -15,6 +15,7 @@ import sys from sre_constants import * +from _sre import MAXREPEAT SPECIAL_CHARS = ".\\[{()*+?^$|" REPEAT_CHARS = "*+?{" @@ -537,10 +538,14 @@ continue if lo: min = int(lo) + if min >= MAXREPEAT: + raise OverflowError("the repetition number is too large") if hi: max = int(hi) - if max < min: - raise error("bad repeat interval") + if max >= MAXREPEAT: + raise OverflowError("the repetition number is too large") + if max < min: + raise error("bad repeat interval") else: raise error("not supported") # figure out which item to repeat diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -1,4 +1,5 @@ -from test.support import verbose, run_unittest, gc_collect, bigmemtest, _2G +from test.support import verbose, run_unittest, gc_collect, bigmemtest, _2G, \ + cpython_only import io import re from re import Scanner @@ -980,6 +981,37 @@ self.assertEqual(re.findall(r"(?i)(a)\1", "aa \u0100"), ['a']) self.assertEqual(re.match(r"(?s).{1,3}", "\u0100\u0100").span(), (0, 2)) + def test_repeat_minmax_overflow(self): + # Issue #13169 + string = "x" * 100000 + self.assertEqual(re.match(r".{65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{,65535}", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65535,}?", string).span(), (0, 65535)) + self.assertEqual(re.match(r".{65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{,65536}", string).span(), (0, 65536)) + self.assertEqual(re.match(r".{65536,}?", string).span(), (0, 65536)) + # 2**128 should be big enough to overflow both SRE_CODE and Py_ssize_t. + self.assertRaises(OverflowError, re.compile, r".{%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" 
% 2**128) + self.assertRaises(OverflowError, re.compile, r".{%d,%d}" % (2**129, 2**128)) + + @cpython_only + def test_repeat_minmax_overflow_maxrepeat(self): + try: + from _sre import MAXREPEAT + except ImportError: + self.skipTest('requires _sre.MAXREPEAT constant') + string = "x" * 100000 + self.assertIsNone(re.match(r".{%d}" % (MAXREPEAT - 1), string)) + self.assertEqual(re.match(r".{,%d}" % (MAXREPEAT - 1), string).span(), + (0, 100000)) + self.assertIsNone(re.match(r".{%d,}?" % (MAXREPEAT - 1), string)) + self.assertRaises(OverflowError, re.compile, r".{%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{,%d}" % MAXREPEAT) + self.assertRaises(OverflowError, re.compile, r".{%d,}?" % MAXREPEAT) + + def run_re_tests(): from test.re_tests import tests, SUCCEED, FAIL, SYNTAX_ERROR if verbose: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -253,6 +253,10 @@ Library ------- +- Issue #13169: The maximal repetition number in a regular expression has been + increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on + 64-bit). + - Issue #17143: Fix a missing import in the trace module. Initial patch by Berker Peksag. 
diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -492,7 +492,7 @@ Py_ssize_t i; /* adjust end */ - if (maxcount < (end - ptr) / state->charsize && maxcount != 65535) + if (maxcount < (end - ptr) / state->charsize && maxcount != SRE_MAXREPEAT) end = ptr + maxcount*state->charsize; switch (pattern[0]) { @@ -1109,7 +1109,7 @@ } else { /* general case */ LASTMARK_SAVE(); - while ((Py_ssize_t)ctx->pattern[2] == 65535 + while ((Py_ssize_t)ctx->pattern[2] == SRE_MAXREPEAT || ctx->count <= (Py_ssize_t)ctx->pattern[2]) { state->ptr = ctx->ptr; DO_JUMP(JUMP_MIN_REPEAT_ONE,jump_min_repeat_one, @@ -1195,7 +1195,7 @@ } if ((ctx->count < ctx->u.rep->pattern[2] || - ctx->u.rep->pattern[2] == 65535) && + ctx->u.rep->pattern[2] == SRE_MAXREPEAT) && state->ptr != ctx->u.rep->last_ptr) { /* we may have enough matches, but if we can match another item, do so */ @@ -1273,7 +1273,7 @@ LASTMARK_RESTORE(); if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != 65535) + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) RETURN_FAILURE; ctx->u.rep->count = ctx->count; @@ -3037,7 +3037,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-4, groups)) FAIL; @@ -3056,7 +3056,7 @@ GET_ARG; max = arg; if (min > max) FAIL; - if (max > 65535) + if (max > SRE_MAXREPEAT) FAIL; if (!_validate_inner(code, code+skip-3, groups)) FAIL; @@ -3942,6 +3942,12 @@ Py_DECREF(x); } + x = PyLong_FromUnsignedLong(SRE_MAXREPEAT); + if (x) { + PyDict_SetItemString(d, "MAXREPEAT", x); + Py_DECREF(x); + } + x = PyUnicode_FromString(copyright); if (x) { PyDict_SetItemString(d, "copyright", x); diff --git a/Modules/sre.h b/Modules/sre.h --- a/Modules/sre.h +++ b/Modules/sre.h @@ -16,6 +16,11 @@ /* size of a code word (must be unsigned short or larger, and large enough to hold a UCS4 character) */ #define SRE_CODE Py_UCS4 +#if SIZEOF_SIZE_T > 4 +# define SRE_MAXREPEAT (~(SRE_CODE)0) +#else 
+# define SRE_MAXREPEAT ((SRE_CODE)PY_SSIZE_T_MAX + 1u) +#endif typedef struct { PyObject_VAR_HEAD -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 16:31:37 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 16:31:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MTkz?= =?utf-8?q?=3A_Use_binary_prefixes_=28KiB=2C_MiB=2C_GiB=29_for_memory_unit?= =?utf-8?q?s=2E?= Message-ID: <3Z7b0n2Lt3zQj1@mail.python.org> http://hg.python.org/cpython/rev/c1f846a99c85 changeset: 82223:c1f846a99c85 branch: 3.3 parent: 82221:b78c321ee9a5 user: Serhiy Storchaka date: Sat Feb 16 17:29:56 2013 +0200 summary: Issue #17193: Use binary prefixes (KiB, MiB, GiB) for memory units. files: Doc/howto/unicode.rst | 4 ++-- Doc/library/_thread.rst | 8 ++++---- Doc/library/lzma.rst | 6 +++--- Doc/library/os.rst | 2 +- Doc/library/posix.rst | 2 +- Doc/library/tarfile.rst | 4 ++-- Doc/library/threading.rst | 8 ++++---- Doc/library/zipfile.rst | 4 ++-- Modules/_pickle.c | 4 ++-- 9 files changed, 21 insertions(+), 21 deletions(-) diff --git a/Doc/howto/unicode.rst b/Doc/howto/unicode.rst --- a/Doc/howto/unicode.rst +++ b/Doc/howto/unicode.rst @@ -456,11 +456,11 @@ One problem is the multi-byte nature of encodings; one Unicode character can be represented by several bytes. If you want to read the file in arbitrary-sized -chunks (say, 1k or 4k), you need to write error-handling code to catch the case +chunks (say, 1024 or 4096 bytes), you need to write error-handling code to catch the case where only part of the bytes encoding a single Unicode character are read at the end of a chunk. One solution would be to read the entire file into memory and then perform the decoding, but that prevents you from working with files that -are extremely large; if you need to read a 2GB file, you need 2GB of RAM. +are extremely large; if you need to read a 2 GiB file, you need 2 GiB of RAM. 
(More, really, since for at least a moment you'd need to have both the encoded string and its Unicode version in memory.) diff --git a/Doc/library/_thread.rst b/Doc/library/_thread.rst --- a/Doc/library/_thread.rst +++ b/Doc/library/_thread.rst @@ -93,15 +93,15 @@ Return the thread stack size used when creating new threads. The optional *size* argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive - integer value of at least 32,768 (32kB). If changing the thread stack size is + integer value of at least 32,768 (32 KiB). If changing the thread stack size is unsupported, a :exc:`RuntimeError` is raised. If the specified stack size is - invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32kB + invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a - minimum stack size > 32kB or requiring allocation in multiples of the system + minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more - information (4kB pages are common; using multiples of 4096 for the stack size is + information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). Availability: Windows, systems with POSIX threads. diff --git a/Doc/library/lzma.rst b/Doc/library/lzma.rst --- a/Doc/library/lzma.rst +++ b/Doc/library/lzma.rst @@ -158,7 +158,7 @@ In addition to being more CPU-intensive, compression with higher presets also requires much more memory (and produces output that needs more memory to decompress). 
With preset ``9`` for example, the overhead for an - :class:`LZMACompressor` object can be as high as 800MiB. For this reason, + :class:`LZMACompressor` object can be as high as 800 MiB. For this reason, it is generally best to stick with the default preset. The *filters* argument (if provided) should be a filter chain specifier. @@ -302,8 +302,8 @@ * ``preset``: A compression preset to use as a source of default values for options that are not specified explicitly. - * ``dict_size``: Dictionary size in bytes. This should be between 4KiB and - 1.5GiB (inclusive). + * ``dict_size``: Dictionary size in bytes. This should be between 4 KiB and + 1.5 GiB (inclusive). * ``lc``: Number of literal context bits. * ``lp``: Number of literal position bits. The sum ``lc + lp`` must be at most 4. diff --git a/Doc/library/os.rst b/Doc/library/os.rst --- a/Doc/library/os.rst +++ b/Doc/library/os.rst @@ -2329,7 +2329,7 @@ .. data:: XATTR_SIZE_MAX The maximum size the value of an extended attribute can be. Currently, this - is 64 kilobytes on Linux. + is 64 KiB on Linux. .. data:: XATTR_CREATE diff --git a/Doc/library/posix.rst b/Doc/library/posix.rst --- a/Doc/library/posix.rst +++ b/Doc/library/posix.rst @@ -37,7 +37,7 @@ .. sectionauthor:: Steve Clift Several operating systems (including AIX, HP-UX, Irix and Solaris) provide -support for files that are larger than 2 GB from a C programming model where +support for files that are larger than 2 GiB from a C programming model where :c:type:`int` and :c:type:`long` are 32-bit values. This is typically accomplished by defining the relevant size and offset types as 64-bit values. Such files are sometimes referred to as :dfn:`large files`. diff --git a/Doc/library/tarfile.rst b/Doc/library/tarfile.rst --- a/Doc/library/tarfile.rst +++ b/Doc/library/tarfile.rst @@ -669,11 +669,11 @@ * The POSIX.1-1988 ustar format (:const:`USTAR_FORMAT`). It supports filenames up to a length of at best 256 characters and linknames up to 100 characters. 
The - maximum file size is 8 gigabytes. This is an old and limited but widely + maximum file size is 8 GiB. This is an old and limited but widely supported format. * The GNU tar format (:const:`GNU_FORMAT`). It supports long filenames and - linknames, files bigger than 8 gigabytes and sparse files. It is the de facto + linknames, files bigger than 8 GiB and sparse files. It is the de facto standard on GNU/Linux systems. :mod:`tarfile` fully supports the GNU tar extensions for long names, sparse file support is read-only. diff --git a/Doc/library/threading.rst b/Doc/library/threading.rst --- a/Doc/library/threading.rst +++ b/Doc/library/threading.rst @@ -80,15 +80,15 @@ Return the thread stack size used when creating new threads. The optional *size* argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive - integer value of at least 32,768 (32kB). If changing the thread stack size is + integer value of at least 32,768 (32 KiB). If changing the thread stack size is unsupported, a :exc:`RuntimeError` is raised. If the specified stack size is - invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32kB + invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a - minimum stack size > 32kB or requiring allocation in multiples of the system + minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more - information (4kB pages are common; using multiples of 4096 for the stack size is + information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). 
Availability: Windows, systems with POSIX threads. diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -18,7 +18,7 @@ This module does not currently handle multi-disk ZIP files. It can handle ZIP files that use the ZIP64 extensions -(that is ZIP files that are more than 4 GByte in size). It supports +(that is ZIP files that are more than 4 GiB in size). It supports decryption of encrypted files in ZIP archives, but it currently cannot create an encrypted file. Decryption is extremely slow as it is implemented in native Python rather than C. @@ -148,7 +148,7 @@ (:mod:`zlib`, :mod:`bz2` or :mod:`lzma`) is not available, :exc:`RuntimeError` is also raised. The default is :const:`ZIP_STORED`. If *allowZip64* is ``True`` zipfile will create ZIP files that use the ZIP64 extensions when - the zipfile is larger than 2 GB. If it is false (the default) :mod:`zipfile` + the zipfile is larger than 2 GiB. If it is false (the default) :mod:`zipfile` will raise an exception when the ZIP file would require ZIP64 extensions. 
ZIP64 extensions are disabled by default because the default :program:`zip` and :program:`unzip` commands on Unix (the InfoZIP utilities) don't support diff --git a/Modules/_pickle.c b/Modules/_pickle.c --- a/Modules/_pickle.c +++ b/Modules/_pickle.c @@ -1788,7 +1788,7 @@ } else { PyErr_SetString(PyExc_OverflowError, - "cannot serialize a bytes object larger than 4GB"); + "cannot serialize a bytes object larger than 4 GiB"); return -1; /* string too large */ } @@ -1888,7 +1888,7 @@ size = PyBytes_GET_SIZE(encoded); if (size > 0xffffffffL) { PyErr_SetString(PyExc_OverflowError, - "cannot serialize a string larger than 4GB"); + "cannot serialize a string larger than 4 GiB"); goto error; /* string too large */ } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 16:31:38 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 16:31:38 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317193=3A_Use_binary_prefixes_=28KiB=2C_MiB=2C_G?= =?utf-8?q?iB=29_for_memory_units=2E?= Message-ID: <3Z7b0p6TbpzQsw@mail.python.org> http://hg.python.org/cpython/rev/73a16d3c066a changeset: 82224:73a16d3c066a parent: 82222:ca0307905cd7 parent: 82223:c1f846a99c85 user: Serhiy Storchaka date: Sat Feb 16 17:30:31 2013 +0200 summary: Issue #17193: Use binary prefixes (KiB, MiB, GiB) for memory units. 
files: Doc/howto/unicode.rst | 4 ++-- Doc/library/_thread.rst | 8 ++++---- Doc/library/lzma.rst | 6 +++--- Doc/library/os.rst | 2 +- Doc/library/posix.rst | 2 +- Doc/library/tarfile.rst | 4 ++-- Doc/library/threading.rst | 8 ++++---- Doc/library/zipfile.rst | 4 ++-- Modules/_pickle.c | 4 ++-- 9 files changed, 21 insertions(+), 21 deletions(-) diff --git a/Doc/howto/unicode.rst b/Doc/howto/unicode.rst --- a/Doc/howto/unicode.rst +++ b/Doc/howto/unicode.rst @@ -456,11 +456,11 @@ One problem is the multi-byte nature of encodings; one Unicode character can be represented by several bytes. If you want to read the file in arbitrary-sized -chunks (say, 1k or 4k), you need to write error-handling code to catch the case +chunks (say, 1024 or 4096 bytes), you need to write error-handling code to catch the case where only part of the bytes encoding a single Unicode character are read at the end of a chunk. One solution would be to read the entire file into memory and then perform the decoding, but that prevents you from working with files that -are extremely large; if you need to read a 2GB file, you need 2GB of RAM. +are extremely large; if you need to read a 2 GiB file, you need 2 GiB of RAM. (More, really, since for at least a moment you'd need to have both the encoded string and its Unicode version in memory.) diff --git a/Doc/library/_thread.rst b/Doc/library/_thread.rst --- a/Doc/library/_thread.rst +++ b/Doc/library/_thread.rst @@ -93,15 +93,15 @@ Return the thread stack size used when creating new threads. The optional *size* argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive - integer value of at least 32,768 (32kB). If changing the thread stack size is + integer value of at least 32,768 (32 KiB). If changing the thread stack size is unsupported, a :exc:`RuntimeError` is raised. 
If the specified stack size is - invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32kB + invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a - minimum stack size > 32kB or requiring allocation in multiples of the system + minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more - information (4kB pages are common; using multiples of 4096 for the stack size is + information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). Availability: Windows, systems with POSIX threads. diff --git a/Doc/library/lzma.rst b/Doc/library/lzma.rst --- a/Doc/library/lzma.rst +++ b/Doc/library/lzma.rst @@ -158,7 +158,7 @@ In addition to being more CPU-intensive, compression with higher presets also requires much more memory (and produces output that needs more memory to decompress). With preset ``9`` for example, the overhead for an - :class:`LZMACompressor` object can be as high as 800MiB. For this reason, + :class:`LZMACompressor` object can be as high as 800 MiB. For this reason, it is generally best to stick with the default preset. The *filters* argument (if provided) should be a filter chain specifier. @@ -302,8 +302,8 @@ * ``preset``: A compression preset to use as a source of default values for options that are not specified explicitly. - * ``dict_size``: Dictionary size in bytes. This should be between 4KiB and - 1.5GiB (inclusive). + * ``dict_size``: Dictionary size in bytes. This should be between 4 KiB and + 1.5 GiB (inclusive). * ``lc``: Number of literal context bits. * ``lp``: Number of literal position bits. 
The sum ``lc + lp`` must be at most 4. diff --git a/Doc/library/os.rst b/Doc/library/os.rst --- a/Doc/library/os.rst +++ b/Doc/library/os.rst @@ -2329,7 +2329,7 @@ .. data:: XATTR_SIZE_MAX The maximum size the value of an extended attribute can be. Currently, this - is 64 kilobytes on Linux. + is 64 KiB on Linux. .. data:: XATTR_CREATE diff --git a/Doc/library/posix.rst b/Doc/library/posix.rst --- a/Doc/library/posix.rst +++ b/Doc/library/posix.rst @@ -37,7 +37,7 @@ .. sectionauthor:: Steve Clift Several operating systems (including AIX, HP-UX, Irix and Solaris) provide -support for files that are larger than 2 GB from a C programming model where +support for files that are larger than 2 GiB from a C programming model where :c:type:`int` and :c:type:`long` are 32-bit values. This is typically accomplished by defining the relevant size and offset types as 64-bit values. Such files are sometimes referred to as :dfn:`large files`. diff --git a/Doc/library/tarfile.rst b/Doc/library/tarfile.rst --- a/Doc/library/tarfile.rst +++ b/Doc/library/tarfile.rst @@ -669,11 +669,11 @@ * The POSIX.1-1988 ustar format (:const:`USTAR_FORMAT`). It supports filenames up to a length of at best 256 characters and linknames up to 100 characters. The - maximum file size is 8 gigabytes. This is an old and limited but widely + maximum file size is 8 GiB. This is an old and limited but widely supported format. * The GNU tar format (:const:`GNU_FORMAT`). It supports long filenames and - linknames, files bigger than 8 gigabytes and sparse files. It is the de facto + linknames, files bigger than 8 GiB and sparse files. It is the de facto standard on GNU/Linux systems. :mod:`tarfile` fully supports the GNU tar extensions for long names, sparse file support is read-only. diff --git a/Doc/library/threading.rst b/Doc/library/threading.rst --- a/Doc/library/threading.rst +++ b/Doc/library/threading.rst @@ -80,15 +80,15 @@ Return the thread stack size used when creating new threads. 
The optional *size* argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive - integer value of at least 32,768 (32kB). If changing the thread stack size is + integer value of at least 32,768 (32 KiB). If changing the thread stack size is unsupported, a :exc:`RuntimeError` is raised. If the specified stack size is - invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32kB + invalid, a :exc:`ValueError` is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a - minimum stack size > 32kB or requiring allocation in multiples of the system + minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more - information (4kB pages are common; using multiples of 4096 for the stack size is + information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). Availability: Windows, systems with POSIX threads. diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -18,7 +18,7 @@ This module does not currently handle multi-disk ZIP files. It can handle ZIP files that use the ZIP64 extensions -(that is ZIP files that are more than 4 GByte in size). It supports +(that is ZIP files that are more than 4 GiB in size). It supports decryption of encrypted files in ZIP archives, but it currently cannot create an encrypted file. Decryption is extremely slow as it is implemented in native Python rather than C. @@ -148,7 +148,7 @@ (:mod:`zlib`, :mod:`bz2` or :mod:`lzma`) is not available, :exc:`RuntimeError` is also raised. 
The default is :const:`ZIP_STORED`. If *allowZip64* is ``True`` zipfile will create ZIP files that use the ZIP64 extensions when - the zipfile is larger than 2 GB. If it is false (the default) :mod:`zipfile` + the zipfile is larger than 2 GiB. If it is false (the default) :mod:`zipfile` will raise an exception when the ZIP file would require ZIP64 extensions. ZIP64 extensions are disabled by default because the default :program:`zip` and :program:`unzip` commands on Unix (the InfoZIP utilities) don't support diff --git a/Modules/_pickle.c b/Modules/_pickle.c --- a/Modules/_pickle.c +++ b/Modules/_pickle.c @@ -1788,7 +1788,7 @@ } else { PyErr_SetString(PyExc_OverflowError, - "cannot serialize a bytes object larger than 4GB"); + "cannot serialize a bytes object larger than 4 GiB"); return -1; /* string too large */ } @@ -1888,7 +1888,7 @@ size = PyBytes_GET_SIZE(encoded); if (size > 0xffffffffL) { PyErr_SetString(PyExc_OverflowError, - "cannot serialize a string larger than 4GB"); + "cannot serialize a string larger than 4 GiB"); goto error; /* string too large */ } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 16:44:35 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 16:44:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=238745=3A_Small_spe?= =?utf-8?q?ed_up_zipimport_on_Windows=2E_Patch_by_Catalin_Iacob=2E?= Message-ID: <3Z7bHl1NV5zRR5@mail.python.org> http://hg.python.org/cpython/rev/088a14031998 changeset: 82225:088a14031998 user: Serhiy Storchaka date: Sat Feb 16 17:43:45 2013 +0200 summary: Issue #8745: Small speed up zipimport on Windows. Patch by Catalin Iacob. 
files: Lib/test/test_zipimport.py | 2 + Misc/NEWS | 2 + Modules/zipimport.c | 27 +++++++++++++++++-------- 3 files changed, 22 insertions(+), 9 deletions(-) diff --git a/Lib/test/test_zipimport.py b/Lib/test/test_zipimport.py --- a/Lib/test/test_zipimport.py +++ b/Lib/test/test_zipimport.py @@ -196,6 +196,7 @@ for name, (mtime, data) in files.items(): zinfo = ZipInfo(name, time.localtime(mtime)) zinfo.compress_type = self.compression + zinfo.comment = b"spam" z.writestr(zinfo, data) z.close() @@ -245,6 +246,7 @@ for name, (mtime, data) in files.items(): zinfo = ZipInfo(name, time.localtime(mtime)) zinfo.compress_type = self.compression + zinfo.comment = b"eggs" z.writestr(zinfo, data) z.close() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,8 @@ Core and Builtins ----------------- +- Issue #8745: Small speed up zipimport on Windows. Patch by Catalin Iacob. + - Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. diff --git a/Modules/zipimport.c b/Modules/zipimport.c --- a/Modules/zipimport.c +++ b/Modules/zipimport.c @@ -862,6 +862,7 @@ long l, count; Py_ssize_t i; char name[MAXPATHLEN + 5]; + char dummy[8]; /* Buffer to read unused header values into */ PyObject *nameobj = NULL; char *p, endof_central_dir[22]; Py_ssize_t arc_offset; /* Absolute offset to start of the zip-archive. 
*/ @@ -905,17 +906,23 @@ /* Start of Central Directory */ count = 0; + if (fseek(fp, header_offset, 0) == -1) + goto file_error; for (;;) { PyObject *t; int err; - if (fseek(fp, header_offset, 0) == -1) /* Start of file header */ - goto fseek_error; + /* Start of file header */ l = PyMarshal_ReadLongFromFile(fp); if (l != 0x02014B50) break; /* Bad: Central Dir File Header */ - if (fseek(fp, header_offset + 8, 0) == -1) - goto fseek_error; + + /* On Windows, calling fseek to skip over the fields we don't use is + slower than reading the data into a dummy buffer because fseek flushes + stdio's internal buffers. See issue #8745. */ + if (fread(dummy, 1, 4, fp) != 4) /* Skip unused fields, avoid fseek */ + goto file_error; + flags = (unsigned short)PyMarshal_ReadShortFromFile(fp); compress = PyMarshal_ReadShortFromFile(fp); time = PyMarshal_ReadShortFromFile(fp); @@ -924,11 +931,11 @@ data_size = PyMarshal_ReadLongFromFile(fp); file_size = PyMarshal_ReadLongFromFile(fp); name_size = PyMarshal_ReadShortFromFile(fp); - header_size = 46 + name_size + + header_size = name_size + PyMarshal_ReadShortFromFile(fp) + PyMarshal_ReadShortFromFile(fp); - if (fseek(fp, header_offset + 42, 0) == -1) - goto fseek_error; + if (fread(dummy, 1, 8, fp) != 8) /* Skip unused fields, avoid fseek */ + goto file_error; file_offset = PyMarshal_ReadLongFromFile(fp) + arc_offset; if (name_size > MAXPATHLEN) name_size = MAXPATHLEN; @@ -941,7 +948,9 @@ p++; } *p = 0; /* Add terminating null byte */ - header_offset += header_size; + for (; i < header_size; i++) /* Skip the rest of the header */ + if(getc(fp) == EOF) /* Avoid fseek */ + goto file_error; bootstrap = 0; if (flags & 0x0800) @@ -988,7 +997,7 @@ PySys_FormatStderr("# zipimport: found %ld names in %R\n", count, archive); return files; -fseek_error: +file_error: fclose(fp); Py_XDECREF(files); Py_XDECREF(nameobj); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 20:28:51 2013 From: 
python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 20:28:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzk2Njk6?= =?utf-8?q?_Protect_re_against_infinite_loops_on_zero-width_matching_in?= Message-ID: <3Z7hGW5XvGzQgl@mail.python.org> http://hg.python.org/cpython/rev/dc8a11c16021 changeset: 82226:dc8a11c16021 branch: 2.7 parent: 82219:c1b3d25882ca user: Serhiy Storchaka date: Sat Feb 16 21:23:01 2013 +0200 summary: Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. files: Lib/test/test_re.py | 9 +++++++++ Misc/NEWS | 3 +++ Modules/_sre.c | 9 +++++++-- 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -628,6 +628,15 @@ self.assertEqual(re.match('(x)*y', 50000*'x'+'y').group(1), 'x') self.assertEqual(re.match('(x)*?y', 50000*'x'+'y').group(1), 'x') + def test_unlimited_zero_width_repeat(self): + # Issue #9669 + self.assertIsNone(re.match(r'(?:a?)*y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}y', 'z')) + self.assertIsNone(re.match(r'(?:a?)*?y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+?y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}?y', 'z')) + def test_scanner(self): def s_ident(scanner, token): return token def s_operator(scanner, token): return "op%s" % token diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -205,6 +205,9 @@ Library ------- +- Issue #9669: Protect re against infinite loops on zero-width matching in + non-greedy repeat. Patch by Matthew Barnett. + - Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). 
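The hang this changeset guards against is easy to check from Python. The patterns below are the same ones the new test_unlimited_zero_width_repeat cases exercise: each repeat body `(?:a?)` can match zero characters, so without the last_ptr guard the engine could retry it forever at the same position. On a patched interpreter they all fail promptly:

```python
import re

# Zero-width repeat bodies in greedy and non-greedy form; 'z' contains
# no 'y', so every pattern must fail (not loop).
patterns = [r'(?:a?)*y', r'(?:a?)+y', r'(?:a?){2,}y',
            r'(?:a?)*?y', r'(?:a?)+?y', r'(?:a?){2,}?y']

for pat in patterns:
    assert re.match(pat, 'z') is None

# The guard only blocks iterations that fail to advance; a repeat that
# does consume text still matches normally.
assert re.match(r'(?:a?)*y', 'aay').group() == 'aay'
```

Note that the guard never changes which strings match — it only cuts off iterations whose text pointer did not move.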
diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -1302,13 +1302,18 @@ LASTMARK_RESTORE(); - if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) + if ((ctx->count >= ctx->u.rep->pattern[2] + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) || + state->ptr == ctx->u.rep->last_ptr) RETURN_FAILURE; ctx->u.rep->count = ctx->count; + /* zero-width match protection */ + DATA_PUSH(&ctx->u.rep->last_ptr); + ctx->u.rep->last_ptr = state->ptr; DO_JUMP(JUMP_MIN_UNTIL_3,jump_min_until_3, ctx->u.rep->pattern+3); + DATA_POP(&ctx->u.rep->last_ptr); if (ret) { RETURN_ON_ERROR(ret); RETURN_SUCCESS; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 20:28:54 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 20:28:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzk2Njk6?= =?utf-8?q?_Protect_re_against_infinite_loops_on_zero-width_matching_in?= Message-ID: <3Z7hGZ04JJzRPm@mail.python.org> http://hg.python.org/cpython/rev/d40afd489b6a changeset: 82227:d40afd489b6a branch: 3.2 parent: 82220:472a7c652cbd user: Serhiy Storchaka date: Sat Feb 16 21:23:53 2013 +0200 summary: Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. 
files: Lib/test/test_re.py | 9 +++++++++ Misc/NEWS | 3 +++ Modules/_sre.c | 9 +++++++-- 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -616,6 +616,15 @@ self.assertEqual(re.match('(x)*y', 50000*'x'+'y').group(1), 'x') self.assertEqual(re.match('(x)*?y', 50000*'x'+'y').group(1), 'x') + def test_unlimited_zero_width_repeat(self): + # Issue #9669 + self.assertIsNone(re.match(r'(?:a?)*y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}y', 'z')) + self.assertIsNone(re.match(r'(?:a?)*?y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+?y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}?y', 'z')) + def test_scanner(self): def s_ident(scanner, token): return token def s_operator(scanner, token): return "op%s" % token diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -224,6 +224,9 @@ Library ------- +- Issue #9669: Protect re against infinite loops on zero-width matching in + non-greedy repeat. Patch by Matthew Barnett. + - Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). 
diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -1295,13 +1295,18 @@ LASTMARK_RESTORE(); - if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) + if ((ctx->count >= ctx->u.rep->pattern[2] + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) || + state->ptr == ctx->u.rep->last_ptr) RETURN_FAILURE; ctx->u.rep->count = ctx->count; + /* zero-width match protection */ + DATA_PUSH(&ctx->u.rep->last_ptr); + ctx->u.rep->last_ptr = state->ptr; DO_JUMP(JUMP_MIN_UNTIL_3,jump_min_until_3, ctx->u.rep->pattern+3); + DATA_POP(&ctx->u.rep->last_ptr); if (ret) { RETURN_ON_ERROR(ret); RETURN_SUCCESS; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 20:28:55 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 20:28:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=239669=3A_Protect_re_against_infinite_loops_on_zero-wid?= =?utf-8?q?th_matching_in?= Message-ID: <3Z7hGb3JN7zRQT@mail.python.org> http://hg.python.org/cpython/rev/8f9b628593db changeset: 82228:8f9b628593db branch: 3.3 parent: 82223:c1f846a99c85 parent: 82227:d40afd489b6a user: Serhiy Storchaka date: Sat Feb 16 21:25:05 2013 +0200 summary: Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. 
files: Lib/test/test_re.py | 9 +++++++++ Misc/NEWS | 3 +++ Modules/_sre.c | 9 +++++++-- 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -681,6 +681,15 @@ self.assertEqual(re.match('(x)*y', 50000*'x'+'y').group(1), 'x') self.assertEqual(re.match('(x)*?y', 50000*'x'+'y').group(1), 'x') + def test_unlimited_zero_width_repeat(self): + # Issue #9669 + self.assertIsNone(re.match(r'(?:a?)*y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}y', 'z')) + self.assertIsNone(re.match(r'(?:a?)*?y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+?y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}?y', 'z')) + def test_scanner(self): def s_ident(scanner, token): return token def s_operator(scanner, token): return "op%s" % token diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -178,6 +178,9 @@ Library ------- +- Issue #9669: Protect re against infinite loops on zero-width matching in + non-greedy repeat. Patch by Matthew Barnett. + - Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). 
diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -1272,13 +1272,18 @@ LASTMARK_RESTORE(); - if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) + if ((ctx->count >= ctx->u.rep->pattern[2] + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) || + state->ptr == ctx->u.rep->last_ptr) RETURN_FAILURE; ctx->u.rep->count = ctx->count; + /* zero-width match protection */ + DATA_PUSH(&ctx->u.rep->last_ptr); + ctx->u.rep->last_ptr = state->ptr; DO_JUMP(JUMP_MIN_UNTIL_3,jump_min_until_3, ctx->u.rep->pattern+3); + DATA_POP(&ctx->u.rep->last_ptr); if (ret) { RETURN_ON_ERROR(ret); RETURN_SUCCESS; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 20:28:56 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 16 Feb 2013 20:28:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=239669=3A_Protect_re_against_infinite_loops_on_ze?= =?utf-8?q?ro-width_matching_in?= Message-ID: <3Z7hGc6VK8zRVk@mail.python.org> http://hg.python.org/cpython/rev/aa17a0dab86a changeset: 82229:aa17a0dab86a parent: 82225:088a14031998 parent: 82228:8f9b628593db user: Serhiy Storchaka date: Sat Feb 16 21:25:40 2013 +0200 summary: Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. 
files: Lib/test/test_re.py | 9 +++++++++ Misc/NEWS | 3 +++ Modules/_sre.c | 9 +++++++-- 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -681,6 +681,15 @@ self.assertEqual(re.match('(x)*y', 50000*'x'+'y').group(1), 'x') self.assertEqual(re.match('(x)*?y', 50000*'x'+'y').group(1), 'x') + def test_unlimited_zero_width_repeat(self): + # Issue #9669 + self.assertIsNone(re.match(r'(?:a?)*y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}y', 'z')) + self.assertIsNone(re.match(r'(?:a?)*?y', 'z')) + self.assertIsNone(re.match(r'(?:a?)+?y', 'z')) + self.assertIsNone(re.match(r'(?:a?){2,}?y', 'z')) + def test_scanner(self): def s_ident(scanner, token): return token def s_operator(scanner, token): return "op%s" % token diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -255,6 +255,9 @@ Library ------- +- Issue #9669: Protect re against infinite loops on zero-width matching in + non-greedy repeat. Patch by Matthew Barnett. + - Issue #13169: The maximal repetition number in a regular expression has been increased from 65534 to 2147483647 (on 32-bit platform) or 4294967294 (on 64-bit). 
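The C change saves the repeat's `last_ptr`, refuses a further iteration when the text pointer has not advanced, and restores the saved value when backtracking out (the DATA_PUSH/DATA_POP pair). The same idea can be sketched in pure Python with a toy non-greedy repeat of one optional literal; `match_min_repeat` and its arguments are illustrative names, not anything from `_sre.c`:

```python
def match_min_repeat(text, pos, body, tail, last_ptr=None):
    """Toy matcher for (?:body?)*? followed by the literal `tail`.

    `last_ptr` plays the role of ctx->u.rep->last_ptr: if a new
    iteration would start at the same position as the previous one
    (a zero-width match), fail instead of recursing forever.
    """
    # Non-greedy: try to finish with the tail first.
    if text.startswith(tail, pos):
        return pos + len(tail)
    # Zero-width match protection: the pointer did not advance.
    if pos == last_ptr:
        return None
    # One more iteration of the optional body (it may consume nothing).
    new_pos = pos + len(body) if text.startswith(body, pos) else pos
    return match_min_repeat(text, new_pos, body, tail, last_ptr=pos)
```

With the `pos == last_ptr` check removed, `match_min_repeat('z', 0, 'a', 'y')` recurses without bound — the Python analogue of the infinite loop fixed here.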
diff --git a/Modules/_sre.c b/Modules/_sre.c --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -1272,13 +1272,18 @@ LASTMARK_RESTORE(); - if (ctx->count >= ctx->u.rep->pattern[2] - && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) + if ((ctx->count >= ctx->u.rep->pattern[2] + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) || + state->ptr == ctx->u.rep->last_ptr) RETURN_FAILURE; ctx->u.rep->count = ctx->count; + /* zero-width match protection */ + DATA_PUSH(&ctx->u.rep->last_ptr); + ctx->u.rep->last_ptr = state->ptr; DO_JUMP(JUMP_MIN_UNTIL_3,jump_min_until_3, ctx->u.rep->pattern+3); + DATA_POP(&ctx->u.rep->last_ptr); if (ret) { RETURN_ON_ERROR(ret); RETURN_SUCCESS; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 21:45:07 2013 From: python-checkins at python.org (antoine.pitrou) Date: Sat, 16 Feb 2013 21:45:07 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogRml4IHRlc3Rfc3Ns?= =?utf-8?q?_by_replacing_expired_X509_certificate?= Message-ID: <3Z7jyW20QFzQfh@mail.python.org> http://hg.python.org/cpython/rev/22c0db7304d5 changeset: 82230:22c0db7304d5 branch: 2.7 parent: 82226:dc8a11c16021 user: Antoine Pitrou date: Sat Feb 16 21:39:28 2013 +0100 summary: Fix test_ssl by replacing expired X509 certificate files: Lib/test/keycert.pem | 59 +++++++++++++++---------------- Lib/test/test_ssl.py | 11 ++--- 2 files changed, 34 insertions(+), 36 deletions(-) diff --git a/Lib/test/keycert.pem b/Lib/test/keycert.pem --- a/Lib/test/keycert.pem +++ b/Lib/test/keycert.pem @@ -1,32 +1,31 @@ ------BEGIN RSA PRIVATE KEY----- -MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L -opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH -fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB -AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU -D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA -IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM 
-oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 -ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ -loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j -oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA -z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq -ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV -q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= ------END RSA PRIVATE KEY----- +-----BEGIN PRIVATE KEY----- +MIICdwIBADANBgkqhkiG9w0BAQEFAASCAmEwggJdAgEAAoGBANtb0+YrKuxevGpm +LrjaUhZSgz6zFAmuGFmKmUbdjmfv9zSmmdsQIksK++jK0Be9LeZy20j6ahOfuVa0 +ufEmPoP7Fy4hXegKZR9cCWcIe/A6H2xWF1IIJLRTLaU8ol/I7T+um5HD5AwAwNPP +USNU0Eegmvp+xxWu3NX2m1Veot85AgMBAAECgYA3ZdZ673X0oexFlq7AAmrutkHt +CL7LvwrpOiaBjhyTxTeSNWzvtQBkIU8DOI0bIazA4UreAFffwtvEuPmonDb3F+Iq +SMAu42XcGyVZEl+gHlTPU9XRX7nTOXVt+MlRRRxL6t9GkGfUAXI3XxJDXW3c0vBK +UL9xqD8cORXOfE06rQJBAP8mEX1ERkR64Ptsoe4281vjTlNfIbs7NMPkUnrn9N/Y +BLhjNIfQ3HFZG8BTMLfX7kCS9D593DW5tV4Z9BP/c6cCQQDcFzCcVArNh2JSywOQ +ZfTfRbJg/Z5Lt9Fkngv1meeGNPgIMLN8Sg679pAOOWmzdMO3V706rNPzSVMME7E5 +oPIfAkEA8pDddarP5tCvTTgUpmTFbakm0KoTZm2+FzHcnA4jRh+XNTjTOv98Y6Ik +eO5d1ZnKXseWvkZncQgxfdnMqqpj5wJAcNq/RVne1DbYlwWchT2Si65MYmmJ8t+F +0mcsULqjOnEMwf5e+ptq5LzwbyrHZYq5FNk7ocufPv/ZQrcSSC+cFwJBAKvOJByS +x56qyGeZLOQlWS2JS3KJo59XuLFGqcbgN9Om9xFa41Yb4N9NvplFivsvZdw3m1Q/ +SPIXQuT8RMPDVNQ= +-----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- -MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD -VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x -IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT -U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 -NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl -bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m -dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj -aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh 
-m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 -M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn -fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC -AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb -08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx -CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ -iHkC6gGdBJhogs4= +MIICVDCCAb2gAwIBAgIJANfHOBkZr8JOMA0GCSqGSIb3DQEBBQUAMF8xCzAJBgNV +BAYTAlhZMRcwFQYDVQQHEw5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9u +IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDAeFw0xMDEw +MDgyMzAxNTZaFw0yMDEwMDUyMzAxNTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH +Ew5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9uIFNvZnR3YXJlIEZvdW5k +YXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAw +gYkCgYEA21vT5isq7F68amYuuNpSFlKDPrMUCa4YWYqZRt2OZ+/3NKaZ2xAiSwr7 +6MrQF70t5nLbSPpqE5+5VrS58SY+g/sXLiFd6AplH1wJZwh78DofbFYXUggktFMt +pTyiX8jtP66bkcPkDADA089RI1TQR6Ca+n7HFa7c1fabVV6i3zkCAwEAAaMYMBYw +FAYDVR0RBA0wC4IJbG9jYWxob3N0MA0GCSqGSIb3DQEBBQUAA4GBAHPctQBEQ4wd +BJ6+JcpIraopLn8BGhbjNWj40mmRqWB/NAWF6M5ne7KpGAu7tLeG4hb1zLaldK8G +lxy2GPSRF6LFS48dpEj2HbMv2nvv6xxalDMJ9+DicWgAKTQ6bcX2j3GUkCR0g/T1 +CRlNBAAlvhKzO7Clpf9l0YKBEfraJByX -----END CERTIFICATE----- diff --git a/Lib/test/test_ssl.py b/Lib/test/test_ssl.py --- a/Lib/test/test_ssl.py +++ b/Lib/test/test_ssl.py @@ -107,13 +107,12 @@ if test_support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual(p['subject'], - ((('countryName', u'US'),), - (('stateOrProvinceName', u'Delaware'),), - (('localityName', u'Wilmington'),), - (('organizationName', u'Python Software Foundation'),), - (('organizationalUnitName', u'SSL'),), - (('commonName', u'somemachine.python.org'),)), + ((('countryName', 'XY'),), + (('localityName', 'Castle Anthrax'),), + (('organizationName', 'Python Software Foundation'),), + (('commonName', 'localhost'),)) ) + self.assertEqual(p['subjectAltName'], (('DNS', 'localhost'),)) # Issue 
#13034: the subjectAltName in some certificates # (notably projects.developer.nokia.com:443) wasn't parsed p = ssl._ssl._test_decode_cert(NOKIACERT) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 16 21:45:08 2013 From: python-checkins at python.org (antoine.pitrou) Date: Sat, 16 Feb 2013 21:45:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Remove_unused_?= =?utf-8?q?certificate_files?= Message-ID: <3Z7jyX4tkLzRMG@mail.python.org> http://hg.python.org/cpython/rev/23393309d7a6 changeset: 82231:23393309d7a6 branch: 2.7 user: Antoine Pitrou date: Sat Feb 16 21:40:16 2013 +0100 summary: Remove unused certificate files files: Lib/test/ssl_cert.pem | 14 -------------- Lib/test/ssl_key.pem | 9 --------- 2 files changed, 0 insertions(+), 23 deletions(-) diff --git a/Lib/test/ssl_cert.pem b/Lib/test/ssl_cert.pem deleted file mode 100644 --- a/Lib/test/ssl_cert.pem +++ /dev/null @@ -1,14 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD -VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv -bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy -dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X -DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw -EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l -dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT -EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp -MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw -L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN -BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX -9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= ------END CERTIFICATE----- diff --git a/Lib/test/ssl_key.pem b/Lib/test/ssl_key.pem deleted file mode 100644 --- a/Lib/test/ssl_key.pem +++ /dev/null @@ -1,9 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- 
-MIIBPAIBAAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNwL4lYKbpzzlmC5beaQXeQ -2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAAQJBALjkK+jc2+iihI98riEF -oudmkNziSRTYjnwjx8mCoAjPWviB3c742eO3FG4/soi1jD9A5alihEOXfUzloenr -8IECIQD3B5+0l+68BA/6d76iUNqAAV8djGTzvxnCxycnxPQydQIhAMXt4trUI3nc -a+U8YL2HPFA3gmhBsSICbq2OptOCnM7hAiEA6Xi3JIQECob8YwkRj29DU3/4WYD7 -WLPgsQpwo1GuSpECICGsnWH5oaeD9t9jbFoSfhJvv0IZmxdcLpRcpslpeWBBAiEA -6/5B8J0GHdJq89FHwEG/H2eVVUYu5y/aD6sgcm+0Avg= ------END RSA PRIVATE KEY----- -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 00:42:26 2013 From: python-checkins at python.org (eric.snow) Date: Sun, 17 Feb 2013 00:42:26 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2315022=3A_Add_pick?= =?utf-8?q?le_and_comparison_support_to_types=2ESimpleNamespace=2E?= Message-ID: <3Z7nv62fn4zRYG@mail.python.org> http://hg.python.org/cpython/rev/3b93ab8c9c20 changeset: 82232:3b93ab8c9c20 parent: 82229:aa17a0dab86a user: Eric Snow date: Sat Feb 16 16:32:39 2013 -0700 summary: Issue #15022: Add pickle and comparison support to types.SimpleNamespace. files: Doc/library/types.rst | 2 + Lib/test/test_types.py | 32 +++++++++++++++++++++---- Misc/NEWS | 2 + Objects/namespaceobject.c | 33 +++++++++++++++++++------- 4 files changed, 54 insertions(+), 15 deletions(-) diff --git a/Doc/library/types.rst b/Doc/library/types.rst --- a/Doc/library/types.rst +++ b/Doc/library/types.rst @@ -212,6 +212,8 @@ keys = sorted(self.__dict__) items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys) return "{}({})".format(type(self).__name__, ", ".join(items)) + def __eq__(self, other): + return self.__dict__ == other.__dict__ ``SimpleNamespace`` may be useful as a replacement for ``class NS: pass``. 
However, for a structured record type use :func:`~collections.namedtuple` diff --git a/Lib/test/test_types.py b/Lib/test/test_types.py --- a/Lib/test/test_types.py +++ b/Lib/test/test_types.py @@ -2,6 +2,7 @@ from test.support import run_unittest, run_with_locale import collections +import pickle import locale import sys import types @@ -1077,9 +1078,19 @@ ns2 = types.SimpleNamespace() ns2.x = "spam" ns2._y = 5 + name = "namespace" - self.assertEqual(repr(ns1), "namespace(w=3, x=1, y=2)") - self.assertEqual(repr(ns2), "namespace(_y=5, x='spam')") + self.assertEqual(repr(ns1), "{name}(w=3, x=1, y=2)".format(name=name)) + self.assertEqual(repr(ns2), "{name}(_y=5, x='spam')".format(name=name)) + + def test_equal(self): + ns1 = types.SimpleNamespace(x=1) + ns2 = types.SimpleNamespace() + ns2.x = 1 + + self.assertEqual(types.SimpleNamespace(), types.SimpleNamespace()) + self.assertEqual(ns1, ns2) + self.assertNotEqual(ns2, types.SimpleNamespace()) def test_nested(self): ns1 = types.SimpleNamespace(a=1, b=2) @@ -1117,11 +1128,12 @@ ns1.spam = ns1 ns2.spam = ns3 ns3.spam = ns2 + name = "namespace" + repr1 = "{name}(c='cookie', spam={name}(...))".format(name=name) + repr2 = "{name}(spam={name}(spam={name}(...), x=1))".format(name=name) - self.assertEqual(repr(ns1), - "namespace(c='cookie', spam=namespace(...))") - self.assertEqual(repr(ns2), - "namespace(spam=namespace(spam=namespace(...), x=1))") + self.assertEqual(repr(ns1), repr1) + self.assertEqual(repr(ns2), repr2) def test_as_dict(self): ns = types.SimpleNamespace(spam='spamspamspam') @@ -1144,6 +1156,14 @@ self.assertIs(type(spam), Spam) self.assertEqual(vars(spam), {'ham': 8, 'eggs': 9}) + def test_pickle(self): + ns = types.SimpleNamespace(breakfast="spam", lunch="spam") + + ns_pickled = pickle.dumps(ns) + ns_roundtrip = pickle.loads(ns_pickled) + + self.assertEqual(ns, ns_roundtrip) + def test_main(): run_unittest(TypesTests, MappingProxyTests, ClassCreationTests, diff --git a/Misc/NEWS b/Misc/NEWS --- 
a/Misc/NEWS +++ b/Misc/NEWS @@ -252,6 +252,8 @@ - Issue #15111: __import__ should propagate ImportError when raised as a side-effect of a module triggered from using fromlist. +- Issue #15022: Add pickle and comparison support to types.SimpleNamespace. + Library ------- diff --git a/Objects/namespaceobject.c b/Objects/namespaceobject.c --- a/Objects/namespaceobject.c +++ b/Objects/namespaceobject.c @@ -66,16 +66,20 @@ static PyObject * -namespace_repr(_PyNamespaceObject *ns) +namespace_repr(PyObject *ns) { int i, loop_error = 0; PyObject *pairs = NULL, *d = NULL, *keys = NULL, *keys_iter = NULL; PyObject *key; PyObject *separator, *pairsrepr, *repr = NULL; + const char * name; - i = Py_ReprEnter((PyObject *)ns); + name = (Py_TYPE(ns) == &_PyNamespace_Type) ? "namespace" + : ns->ob_type->tp_name; + + i = Py_ReprEnter(ns); if (i != 0) { - return i > 0 ? PyUnicode_FromString("namespace(...)") : NULL; + return i > 0 ? PyUnicode_FromFormat("%s(...)", name) : NULL; } pairs = PyList_New(0); @@ -127,8 +131,7 @@ if (pairsrepr == NULL) goto error; - repr = PyUnicode_FromFormat("%s(%S)", - ((PyObject *)ns)->ob_type->tp_name, pairsrepr); + repr = PyUnicode_FromFormat("%s(%S)", name, pairsrepr); Py_DECREF(pairsrepr); error: @@ -136,7 +139,7 @@ Py_XDECREF(d); Py_XDECREF(keys); Py_XDECREF(keys_iter); - Py_ReprLeave((PyObject *)ns); + Py_ReprLeave(ns); return repr; } @@ -158,14 +161,26 @@ } +static PyObject * +namespace_richcompare(PyObject *self, PyObject *other, int op) +{ + if (PyObject_IsInstance(self, (PyObject *)&_PyNamespace_Type) && + PyObject_IsInstance(other, (PyObject *)&_PyNamespace_Type)) + return PyObject_RichCompare(((_PyNamespaceObject *)self)->ns_dict, + ((_PyNamespaceObject *)other)->ns_dict, op); + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; +} + + PyDoc_STRVAR(namespace_doc, "A simple attribute-based namespace.\n\ \n\ -namespace(**kwargs)"); +SimpleNamespace(**kwargs)"); PyTypeObject _PyNamespace_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) - 
"namespace", /* tp_name */ + "types.SimpleNamespace", /* tp_name */ sizeof(_PyNamespaceObject), /* tp_size */ 0, /* tp_itemsize */ (destructor)namespace_dealloc, /* tp_dealloc */ @@ -188,7 +203,7 @@ namespace_doc, /* tp_doc */ (traverseproc)namespace_traverse, /* tp_traverse */ (inquiry)namespace_clear, /* tp_clear */ - 0, /* tp_richcompare */ + namespace_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 01:09:17 2013 From: python-checkins at python.org (antoine.pitrou) Date: Sun, 17 Feb 2013 01:09:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2317170=3A_speed_up?= =?utf-8?q?_PyArg=5FParseTuple=5BAndKeywords=5D_a_bit=2E?= Message-ID: <3Z7pV51pN3zNbt@mail.python.org> http://hg.python.org/cpython/rev/4e985a96a612 changeset: 82233:4e985a96a612 parent: 82229:aa17a0dab86a user: Antoine Pitrou date: Sun Feb 17 01:04:57 2013 +0100 summary: Issue #17170: speed up PyArg_ParseTuple[AndKeywords] a bit. 
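The SimpleNamespace comparison, pickle, and repr behavior added in changeset 82232 above can be exercised as follows on an interpreter that includes the patch:

```python
import pickle
import types

ns1 = types.SimpleNamespace(breakfast="spam", lunch="spam")
ns2 = types.SimpleNamespace(lunch="spam", breakfast="spam")

# Rich comparison now delegates to the underlying __dict__,
# so attribute order does not matter.
assert ns1 == ns2
assert ns1 != types.SimpleNamespace(breakfast="spam")

# __reduce__ makes round-tripping work under every pickle protocol.
for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
    assert pickle.loads(pickle.dumps(ns1, protocol)) == ns1

# The repr still uses the short "namespace" name for the base type.
assert repr(ns1) == "namespace(breakfast='spam', lunch='spam')"
```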
files: Python/getargs.c | 57 ++++++++++++++++++++--------------- 1 files changed, 32 insertions(+), 25 deletions(-) diff --git a/Python/getargs.c b/Python/getargs.c --- a/Python/getargs.c +++ b/Python/getargs.c @@ -46,10 +46,12 @@ } freelistentry_t; typedef struct { + freelistentry_t *entries; int first_available; - freelistentry_t *entries; + int entries_malloced; } freelist_t; +#define STATIC_FREELIST_ENTRIES 8 /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); @@ -187,7 +189,8 @@ freelist->entries[index].item); } } - PyMem_FREE(freelist->entries); + if (freelist->entries_malloced) + PyMem_FREE(freelist->entries); return retval; } @@ -197,6 +200,8 @@ { char msgbuf[256]; int levels[32]; + freelistentry_t static_entries[STATIC_FREELIST_ENTRIES]; + freelist_t freelist = {static_entries, 0, 0}; const char *fname = NULL; const char *message = NULL; int min = -1; @@ -206,7 +211,6 @@ const char *formatsave = format; Py_ssize_t i, len; char *msg; - freelist_t freelist = {0, NULL}; int compat = flags & FLAG_COMPAT; assert(compat || (args != (PyObject*)NULL)); @@ -240,15 +244,15 @@ message = format; endfmt = 1; break; + case '|': + if (level == 0) + min = max; + break; default: if (level == 0) { - if (c == 'O') - max++; - else if (Py_ISALPHA(Py_CHARMASK(c))) { + if (Py_ISALPHA(Py_CHARMASK(c))) if (c != 'e') /* skip encoded */ max++; - } else if (c == '|') - min = max; } break; } @@ -262,10 +266,13 @@ format = formatsave; - freelist.entries = PyMem_NEW(freelistentry_t, max); - if (freelist.entries == NULL) { - PyErr_NoMemory(); - return 0; + if (max > STATIC_FREELIST_ENTRIES) { + freelist.entries = PyMem_NEW(freelistentry_t, max); + if (freelist.entries == NULL) { + PyErr_NoMemory(); + return 0; + } + freelist.entries_malloced = 1; } if (compat) { @@ -1421,7 +1428,8 @@ int max = INT_MAX; int i, len, nargs, nkeywords; PyObject *current_arg; - freelist_t freelist = {0, NULL}; + freelistentry_t static_entries[STATIC_FREELIST_ENTRIES]; + freelist_t 
freelist = {static_entries, 0, 0}; assert(args != NULL && PyTuple_Check(args)); assert(keywords == NULL || PyDict_Check(keywords)); @@ -1445,10 +1453,13 @@ for (len=0; kwlist[len]; len++) continue; - freelist.entries = PyMem_NEW(freelistentry_t, len); - if (freelist.entries == NULL) { - PyErr_NoMemory(); - return 0; + if (len > STATIC_FREELIST_ENTRIES) { + freelist.entries = PyMem_NEW(freelistentry_t, len); + if (freelist.entries == NULL) { + PyErr_NoMemory(); + return 0; + } + freelist.entries_malloced = 1; } nargs = PyTuple_GET_SIZE(args); @@ -1574,20 +1585,16 @@ Py_ssize_t pos = 0; while (PyDict_Next(keywords, &pos, &key, &value)) { int match = 0; - char *ks; if (!PyUnicode_Check(key)) { PyErr_SetString(PyExc_TypeError, "keywords must be strings"); return cleanreturn(0, &freelist); } /* check that _PyUnicode_AsString() result is not NULL */ - ks = _PyUnicode_AsString(key); - if (ks != NULL) { - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } + for (i = 0; i < len; i++) { + if (!PyUnicode_CompareWithASCIIString(key, kwlist[i])) { + match = 1; + break; } } if (!match) { -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 01:09:18 2013 From: python-checkins at python.org (antoine.pitrou) Date: Sun, 17 Feb 2013 01:09:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_default_-=3E_default?= =?utf-8?q?=29=3A_Merge?= Message-ID: <3Z7pV65hbwzRNM@mail.python.org> http://hg.python.org/cpython/rev/05e8d82b19a6 changeset: 82234:05e8d82b19a6 parent: 82233:4e985a96a612 parent: 82232:3b93ab8c9c20 user: Antoine Pitrou date: Sun Feb 17 01:05:46 2013 +0100 summary: Merge files: Doc/library/types.rst | 2 + Lib/test/test_types.py | 32 +++++++++++++++++++++---- Misc/NEWS | 2 + Objects/namespaceobject.c | 33 +++++++++++++++++++------- 4 files changed, 54 insertions(+), 15 deletions(-) diff --git a/Doc/library/types.rst b/Doc/library/types.rst --- a/Doc/library/types.rst +++ 
b/Doc/library/types.rst @@ -212,6 +212,8 @@ keys = sorted(self.__dict__) items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys) return "{}({})".format(type(self).__name__, ", ".join(items)) + def __eq__(self, other): + return self.__dict__ == other.__dict__ ``SimpleNamespace`` may be useful as a replacement for ``class NS: pass``. However, for a structured record type use :func:`~collections.namedtuple` diff --git a/Lib/test/test_types.py b/Lib/test/test_types.py --- a/Lib/test/test_types.py +++ b/Lib/test/test_types.py @@ -2,6 +2,7 @@ from test.support import run_unittest, run_with_locale import collections +import pickle import locale import sys import types @@ -1077,9 +1078,19 @@ ns2 = types.SimpleNamespace() ns2.x = "spam" ns2._y = 5 + name = "namespace" - self.assertEqual(repr(ns1), "namespace(w=3, x=1, y=2)") - self.assertEqual(repr(ns2), "namespace(_y=5, x='spam')") + self.assertEqual(repr(ns1), "{name}(w=3, x=1, y=2)".format(name=name)) + self.assertEqual(repr(ns2), "{name}(_y=5, x='spam')".format(name=name)) + + def test_equal(self): + ns1 = types.SimpleNamespace(x=1) + ns2 = types.SimpleNamespace() + ns2.x = 1 + + self.assertEqual(types.SimpleNamespace(), types.SimpleNamespace()) + self.assertEqual(ns1, ns2) + self.assertNotEqual(ns2, types.SimpleNamespace()) def test_nested(self): ns1 = types.SimpleNamespace(a=1, b=2) @@ -1117,11 +1128,12 @@ ns1.spam = ns1 ns2.spam = ns3 ns3.spam = ns2 + name = "namespace" + repr1 = "{name}(c='cookie', spam={name}(...))".format(name=name) + repr2 = "{name}(spam={name}(spam={name}(...), x=1))".format(name=name) - self.assertEqual(repr(ns1), - "namespace(c='cookie', spam=namespace(...))") - self.assertEqual(repr(ns2), - "namespace(spam=namespace(spam=namespace(...), x=1))") + self.assertEqual(repr(ns1), repr1) + self.assertEqual(repr(ns2), repr2) def test_as_dict(self): ns = types.SimpleNamespace(spam='spamspamspam') @@ -1144,6 +1156,14 @@ self.assertIs(type(spam), Spam) self.assertEqual(vars(spam), {'ham': 8, 
'eggs': 9}) + def test_pickle(self): + ns = types.SimpleNamespace(breakfast="spam", lunch="spam") + + ns_pickled = pickle.dumps(ns) + ns_roundtrip = pickle.loads(ns_pickled) + + self.assertEqual(ns, ns_roundtrip) + def test_main(): run_unittest(TypesTests, MappingProxyTests, ClassCreationTests, diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -252,6 +252,8 @@ - Issue #15111: __import__ should propagate ImportError when raised as a side-effect of a module triggered from using fromlist. +- Issue #15022: Add pickle and comparison support to types.SimpleNamespace. + Library ------- diff --git a/Objects/namespaceobject.c b/Objects/namespaceobject.c --- a/Objects/namespaceobject.c +++ b/Objects/namespaceobject.c @@ -66,16 +66,20 @@ static PyObject * -namespace_repr(_PyNamespaceObject *ns) +namespace_repr(PyObject *ns) { int i, loop_error = 0; PyObject *pairs = NULL, *d = NULL, *keys = NULL, *keys_iter = NULL; PyObject *key; PyObject *separator, *pairsrepr, *repr = NULL; + const char * name; - i = Py_ReprEnter((PyObject *)ns); + name = (Py_TYPE(ns) == &_PyNamespace_Type) ? "namespace" + : ns->ob_type->tp_name; + + i = Py_ReprEnter(ns); if (i != 0) { - return i > 0 ? PyUnicode_FromString("namespace(...)") : NULL; + return i > 0 ? 
PyUnicode_FromFormat("%s(...)", name) : NULL; } pairs = PyList_New(0); @@ -127,8 +131,7 @@ if (pairsrepr == NULL) goto error; - repr = PyUnicode_FromFormat("%s(%S)", - ((PyObject *)ns)->ob_type->tp_name, pairsrepr); + repr = PyUnicode_FromFormat("%s(%S)", name, pairsrepr); Py_DECREF(pairsrepr); error: @@ -136,7 +139,7 @@ Py_XDECREF(d); Py_XDECREF(keys); Py_XDECREF(keys_iter); - Py_ReprLeave((PyObject *)ns); + Py_ReprLeave(ns); return repr; } @@ -158,14 +161,26 @@ } +static PyObject * +namespace_richcompare(PyObject *self, PyObject *other, int op) +{ + if (PyObject_IsInstance(self, (PyObject *)&_PyNamespace_Type) && + PyObject_IsInstance(other, (PyObject *)&_PyNamespace_Type)) + return PyObject_RichCompare(((_PyNamespaceObject *)self)->ns_dict, + ((_PyNamespaceObject *)other)->ns_dict, op); + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; +} + + PyDoc_STRVAR(namespace_doc, "A simple attribute-based namespace.\n\ \n\ -namespace(**kwargs)"); +SimpleNamespace(**kwargs)"); PyTypeObject _PyNamespace_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) - "namespace", /* tp_name */ + "types.SimpleNamespace", /* tp_name */ sizeof(_PyNamespaceObject), /* tp_size */ 0, /* tp_itemsize */ (destructor)namespace_dealloc, /* tp_dealloc */ @@ -188,7 +203,7 @@ namespace_doc, /* tp_doc */ (traverseproc)namespace_traverse, /* tp_traverse */ (inquiry)namespace_clear, /* tp_clear */ - 0, /* tp_richcompare */ + namespace_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 02:25:49 2013 From: python-checkins at python.org (eric.snow) Date: Sun, 17 Feb 2013 02:25:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2315022=3A_Ensure_a?= =?utf-8?q?ll_pickle_protocols_are_supported=2E?= Message-ID: <3Z7rBP0LGczRd6@mail.python.org> http://hg.python.org/cpython/rev/e4c065b2db49 changeset: 82235:e4c065b2db49 user: Eric Snow date: Sat 
Feb 16 18:20:32 2013 -0700 summary: Issue #15022: Ensure all pickle protocols are supported. files: Lib/test/test_types.py | 11 ++++++++--- Objects/namespaceobject.c | 25 ++++++++++++++++++++++++- 2 files changed, 32 insertions(+), 4 deletions(-) diff --git a/Lib/test/test_types.py b/Lib/test/test_types.py --- a/Lib/test/test_types.py +++ b/Lib/test/test_types.py @@ -1159,10 +1159,15 @@ def test_pickle(self): ns = types.SimpleNamespace(breakfast="spam", lunch="spam") - ns_pickled = pickle.dumps(ns) - ns_roundtrip = pickle.loads(ns_pickled) + for protocol in range(pickle.HIGHEST_PROTOCOL + 1): + pname = "protocol {}".format(protocol) + try: + ns_pickled = pickle.dumps(ns, protocol) + except TypeError as e: + raise TypeError(pname) from e + ns_roundtrip = pickle.loads(ns_pickled) - self.assertEqual(ns, ns_roundtrip) + self.assertEqual(ns, ns_roundtrip, pname) def test_main(): diff --git a/Objects/namespaceobject.c b/Objects/namespaceobject.c --- a/Objects/namespaceobject.c +++ b/Objects/namespaceobject.c @@ -173,6 +173,29 @@ } +PyDoc_STRVAR(namespace_reduce__doc__, "Return state information for pickling"); + +static PyObject * +namespace_reduce(register _PyNamespaceObject *ns) +{ + PyObject *result, *args = PyTuple_New(0); + + if (!args) + return NULL; + + result = PyTuple_Pack(3, (PyObject *)Py_TYPE(ns), args, ns->ns_dict); + Py_DECREF(args); + return result; +} + + +static PyMethodDef namespace_methods[] = { + {"__reduce__", (PyCFunction)namespace_reduce, METH_NOARGS, + namespace_reduce__doc__}, + {NULL, NULL} /* sentinel */ +}; + + PyDoc_STRVAR(namespace_doc, "A simple attribute-based namespace.\n\ \n\ @@ -207,7 +230,7 @@ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ - 0, /* tp_methods */ + namespace_methods, /* tp_methods */ namespace_members, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 04:55:32 2013 From: python-checkins at python.org 
(daniel.holth) Date: Sun, 17 Feb 2013 04:55:32 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_update_faq=2C_recommended_exa?= =?utf-8?q?mple_list?= Message-ID: <3Z7vW800JyzRV7@mail.python.org> http://hg.python.org/peps/rev/071b013f0e2c changeset: 4744:071b013f0e2c user: Daniel Holth date: Sat Feb 16 22:55:23 2013 -0500 summary: update faq, recommended example list files: pep-0425.txt | 90 ++++++++++++++++++++++++++------------- 1 files changed, 60 insertions(+), 30 deletions(-) diff --git a/pep-0425.txt b/pep-0425.txt --- a/pep-0425.txt +++ b/pep-0425.txt @@ -139,32 +139,44 @@ will support. If the built distribution's tag is `in` the list, then it can be installed. -For example, an installer running under CPython 3.3 on a linux_x86_64 -system might support:: - - 1. cp33-cp33m-linux_x86_64 - 2. cp33-abi3-linux_x86_64 - 3. cp33-none-linux_x86_64 - 4. cp33-none-any - 5. cp3-none-any - 6. cp32-none-any - 7. cp31-none-any - 8. cp30-none-any - 9. py33-none-any - 10. py3-none-any - 11. py32-none-any - 12. py31-none-any - 13. py30-none-any - -The list is in order from most-preferred (a distribution with a -compiled extension module, built for the current version of Python) -to least-preferred (a pure-Python distribution built with an older -version of Python). A user could instruct their installer to fall back -to building from an sdist more or less often by configuring this list of -tags; for example, a user could include only the `*-none-any` tags to only +It is recommended that installers try to choose the most feature complete +built distribution available (the one most specific to the installation +environment) by default before falling back to pure Python versions +published for older Python releases. Installers are also recommended to +provide a way to configure and re-order the list of allowed compatibility +tags; for example, a user might accept only the `*-none-any` tags to only download built packages that advertise themselves as being pure Python. 
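The selection rule described above — walk the installer's ordered list of supported tags and take the first one that a candidate build advertises — can be sketched as follows. This is an illustrative helper, not part of the PEP; the function name and tag lists are made up for the example.

```python
def pick_build(supported_tags, candidates):
    """Return the most-preferred candidate tag, or None if none match.

    supported_tags is ordered from most- to least-preferred, as in the
    example list for CPython 3.3 on linux_x86_64 above.
    """
    available = set(candidates)
    for tag in supported_tags:
        if tag in available:
            return tag
    return None

supported = ["cp33-cp33m-linux_x86_64", "cp33-none-any", "py3-none-any"]
# A packager published both a compiled-extension build and a pure-Python one;
# the more specific (more feature-complete) build wins:
print(pick_build(supported, ["py3-none-any", "cp33-cp33m-linux_x86_64"]))
# -> cp33-cp33m-linux_x86_64
```

Re-ordering or filtering `supported_tags` is exactly the configuration hook recommended above, e.g. keeping only the `*-none-any` entries restricts installation to pure-Python builds.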
-Rarely there will be more than one supported built distribution for a +Another desirable installer feature might be to include "re-compile from +source if possible" as more preferable than some of the compatible but +legacy pre-built options. + +This example list is for an installer running under CPython 3.3 on a +linux_x86_64 system. It is in order from most-preferred (a distribution +with a compiled extension module, built for the current version of +Python) to least-preferred (a pure-Python distribution built with an +older version of Python):: + +1. cp33-cp33m-linux_x86_64 +2. cp33-abi3-linux_x86_64 +3. cp3-abi3-linux_x86_64 +4. cp33-none-linux_x86_64* +5. cp3-none-linux_x86_64* +6. py33-none-linux_x86_64* +7. py3-none-linux_x86_64* +8. cp33-none-any +9. cp3-none-any +10. py33-none-any +11. py3-none-any +12. py32-none-any +13. py31-none-any +14. py30-none-any + +* Built distributions may be platform specific for reasons other than C + extensions, such as by including a native executable invoked as + a subprocess. + +Sometimes there will be more than one supported built distribution for a particular version of a package. For example, a packager could release a package tagged `cp33-abi3-linux_x86_64` that contains an optional C extension and the same distribution tagged `py3-none-any` that does not. @@ -202,18 +214,36 @@ default it indicates that they intended to provide cross-Python compatibility. -Can I have a tag `py32+` to indicate a minimum Python minor release? - No. Inspect the Trove classifiers to determine this level of - cross-release compatibility. Similar to the announcements "beaglevote - versions 3.2 and above no longer supports Python 1.52", you will - have to manually keep track of the maximum (PEP-386) release that - still supports your version of Python. +What tag do I use if my distribution uses a feature exclusive to the newest version of Python? 
+ Compatibility tags aid installers in selecting the *most compatible* + build of a *single version* of a distribution. For example, when + there is no Python 3.3 compatible build of ``beaglevote-1.2.0`` + (it uses a Python 3.4 exclusive feature) it may still use the + ``py3-none-any`` tag instead of the ``py34-none-any`` tag. A Python + 3.3 user must combine other qualifiers, such as a requirement for the + older release ``beaglevote-1.1.0`` that does not use the new feature, + to get a compatible build. Why isn't there a `.` in the Python version number? CPython has lasted 20+ years without a 3-digit major release. This should continue for some time. Other implementations may use _ as a delimiter, since both - and . delimit the surrounding filename. +Why normalise hyphens and other non-alphanumeric characters to underscores? + To avoid conflicting with the "." and "-" characters that separate + components of the filename, and for better compatibility with the + widest range of filesystem limitations for filenames (including + being usable in URL paths without quoting). + +Why not use a special character rather than "." or "-"? + Either because that character is inconvenient or potentially confusing + in some contexts (for example, "+" must be quoted in URLs, "~" is + used to denote the user's home directory in POSIX), or because the + advantages weren't sufficiently compelling to justify changing the + existing reference implementation for the wheel format defined in PEP + 427 (for example, using "," rather than "." to separate components + in a compressed tag). + Who will maintain the registry of abbreviated implementations? New two-letter abbreviations can be requested on the python-dev mailing list.
As a rule of thumb, abbreviations are reserved for -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 17 05:56:52 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 17 Feb 2013 05:56:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Accept_PEP_425_=28binary_comp?= =?utf-8?q?atibility_tags=29?= Message-ID: <3Z7wsw3XwwzP9V@mail.python.org> http://hg.python.org/peps/rev/59685da23294 changeset: 4745:59685da23294 user: Nick Coghlan date: Sun Feb 17 14:56:39 2013 +1000 summary: Accept PEP 425 (binary compatibility tags) files: pep-0425.txt | 13 ++++++------- 1 files changed, 6 insertions(+), 7 deletions(-) diff --git a/pep-0425.txt b/pep-0425.txt --- a/pep-0425.txt +++ b/pep-0425.txt @@ -4,12 +4,13 @@ Last-Modified: 07-Aug-2012 Author: Daniel Holth BDFL-Delegate: Nick Coghlan -Status: Draft +Status: Accepted Type: Standards Track Content-Type: text/x-rst Created: 27-Jul-2012 Python-Version: 3.4 Post-History: 8-Aug-2012, 18-Oct-2012, 15-Feb-2013 +Resolution: http://mail.python.org/pipermail/python-dev/2013-February/124116.html Abstract @@ -22,12 +23,10 @@ they will be included in filenames. -PEP Editor's Note -================= +PEP Acceptance +============== -While the naming scheme described in this PEP will not be supported directly -in the standard library until Python 3.4 at the earliest, draft -implementations may be made available in third party projects. +This PEP was accepted by Nick Coghlan on 17th February, 2013. Rationale @@ -155,7 +154,7 @@ linux_x86_64 system. It is in order from most-preferred (a distribution with a compiled extension module, built for the current version of Python) to least-preferred (a pure-Python distribution built with an -older version of Python):: +older version of Python): 1. cp33-cp33m-linux_x86_64 2. 
cp33-abi3-linux_x86_64 -- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Sun Feb 17 05:58:37 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Sun, 17 Feb 2013 05:58:37 +0100 Subject: [Python-checkins] Daily reference leaks (e4c065b2db49): sum=7 Message-ID: results for e4c065b2db49 on branch "default" -------------------------------------------- test_dbm leaked [2, 0, 0] references, sum=2 test_dbm leaked [2, 2, 1] memory blocks, sum=5 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflog_WIQhx', '-x'] From python-checkins at python.org Sun Feb 17 06:27:42 2013 From: python-checkins at python.org (eric.snow) Date: Sun, 17 Feb 2013 06:27:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Fixes_a_FileFi?= =?utf-8?q?nder_docstring_to_reflect_an_old_change=2E?= Message-ID: <3Z7xYV3Qf1zQWh@mail.python.org> http://hg.python.org/cpython/rev/0f65bf6063ca changeset: 82236:0f65bf6063ca branch: 3.3 parent: 82228:8f9b628593db user: Eric Snow date: Sat Feb 16 22:23:48 2013 -0700 summary: Fixes a FileFinder docstring to reflect an old change. That change was in 1db6553f3f8c. 
files: Lib/importlib/_bootstrap.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -1330,8 +1330,8 @@ def __init__(self, path, *details): """Initialize with the path to search on and a variable number of - 3-tuples containing the loader, file suffixes the loader recognizes, - and a boolean of whether the loader handles packages.""" + 2-tuples containing the loader and the file suffixes the loader + recognizes.""" loaders = [] for loader, suffixes in details: loaders.extend((suffix, loader) for suffix in suffixes) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 06:27:43 2013 From: python-checkins at python.org (eric.snow) Date: Sun, 17 Feb 2013 06:27:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_from_3=2E3?= Message-ID: <3Z7xYW64ZKzQWh@mail.python.org> http://hg.python.org/cpython/rev/65eaac000147 changeset: 82237:65eaac000147 parent: 82235:e4c065b2db49 parent: 82236:0f65bf6063ca user: Eric Snow date: Sat Feb 16 22:25:31 2013 -0700 summary: Merge from 3.3 files: Lib/importlib/_bootstrap.py | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -1352,8 +1352,8 @@ def __init__(self, path, *details): """Initialize with the path to search on and a variable number of - 3-tuples containing the loader, file suffixes the loader recognizes, - and a boolean of whether the loader handles packages.""" + 2-tuples containing the loader and the file suffixes the loader + recognizes.""" loaders = [] for loader, suffixes in details: loaders.extend((suffix, loader) for suffix in suffixes) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 
07:09:04 2013 From: python-checkins at python.org (eric.snow) Date: Sun, 17 Feb 2013 07:09:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_Add_me_to_the_=22import_m?= =?utf-8?q?achinery=22_interest_area=2E?= Message-ID: <3Z7yTD4T3PzP9V@mail.python.org> http://hg.python.org/devguide/rev/b9fea544ed3d changeset: 598:b9fea544ed3d user: Eric Snow date: Sat Feb 16 23:07:40 2013 -0700 summary: Add me to the "import machinery" interest area. files: experts.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/experts.rst b/experts.rst --- a/experts.rst +++ b/experts.rst @@ -309,7 +309,7 @@ documentation georg.brandl, ezio.melotti, eric.araujo GUI i18n lemburg, eric.araujo -import machinery brett.cannon, ncoghlan +import machinery brett.cannon, ncoghlan, eric.snow io pitrou, benjamin.peterson, stutzbach, hynek locale lemburg, loewis mathematics mark.dickinson, eric.smith, lemburg, stutzbach -- Repository URL: http://hg.python.org/devguide From python-checkins at python.org Sun Feb 17 09:09:17 2013 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 17 Feb 2013 09:09:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Eliminate_unnecessary_vari?= =?utf-8?q?able=2E?= Message-ID: <3Z817x5ksKzSTc@mail.python.org> http://hg.python.org/cpython/rev/5405651bbef8 changeset: 82238:5405651bbef8 user: Raymond Hettinger date: Sun Feb 17 00:08:45 2013 -0800 summary: Eliminate unnecessary variable. 
files: Lib/functools.py | 17 +++++++---------- 1 files changed, 7 insertions(+), 10 deletions(-) diff --git a/Lib/functools.py b/Lib/functools.py --- a/Lib/functools.py +++ b/Lib/functools.py @@ -228,9 +228,8 @@ PREV, NEXT, KEY, RESULT = 0, 1, 2, 3 # names for the link fields def decorating_function(user_function): - cache = {} - hits = misses = currsize = 0 + hits = misses = 0 full = False cache_get = cache.get # bound method to lookup a key or return None lock = Lock() # because linkedlist updates aren't threadsafe @@ -250,7 +249,7 @@ def wrapper(*args, **kwds): # simple caching without ordering or size limit - nonlocal hits, misses, currsize + nonlocal hits, misses key = make_key(args, kwds, typed) result = cache_get(key, sentinel) if result is not sentinel: @@ -259,14 +258,13 @@ result = user_function(*args, **kwds) cache[key] = result misses += 1 - currsize += 1 return result else: def wrapper(*args, **kwds): # size limited caching that tracks accesses by recency - nonlocal root, hits, misses, currsize, full + nonlocal root, hits, misses, full key = make_key(args, kwds, typed) with lock: link = cache_get(key) @@ -303,23 +301,22 @@ last = root[PREV] link = [last, root, key, result] cache[key] = last[NEXT] = root[PREV] = link - currsize += 1 - full = (currsize == maxsize) + full = (len(cache) == maxsize) misses += 1 return result def cache_info(): """Report cache statistics""" with lock: - return _CacheInfo(hits, misses, maxsize, currsize) + return _CacheInfo(hits, misses, maxsize, len(cache)) def cache_clear(): """Clear the cache and cache statistics""" - nonlocal hits, misses, currsize, full + nonlocal hits, misses, full with lock: cache.clear() root[:] = [root, root, None, None] - hits = misses = currsize = 0 + hits = misses = 0 full = False wrapper.cache_info = cache_info -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 09:15:08 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 17 Feb 2013 
09:15:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426_updates?= Message-ID: <3Z81Gh29wrzRc5@mail.python.org> http://hg.python.org/peps/rev/53537d0808d1 changeset: 4746:53537d0808d1 user: Nick Coghlan date: Sun Feb 17 18:14:42 2013 +1000 summary: PEP 426 updates files: pep-0426.txt | 267 +++++++++++++++++++++++++------ pep-0426/pepsort.py | 233 +++++++++++++++++++++++++++ 2 files changed, 442 insertions(+), 58 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -46,13 +46,17 @@ ``email.policy.Policy()``. When ``metadata`` is a Unicode string, ```email.parser.Parser().parsestr(metadata)`` is a serviceable parser. -There are two standard locations for these metadata files: +There are three standard locations for these metadata files: * the ``PKG-INFO`` file included in the base directory of Python source distribution archives (as created by the distutils ``sdist`` command) -* the ``.dist-info/METADATA`` files in a Python installation database, as - described in PEP 376. +* the ``{distribution}-{version}.dist-info/METADATA`` file in a ``wheel`` + binary distribution archive (as described in PEP 425, or a later version + of that specification) +* the ``{distribution}-{version}.dist-info/METADATA`` files in a local + Python installation database (as described in PEP 376, or a later version + of that specification) Other tools involved in Python distribution may also use this format. @@ -102,8 +106,9 @@ Version ------- -A string containing the distribution's version identifier. See `Version scheme`_ -below. +The distribution's public version identifier. Public versions are designed +for consumption by automated tools and are strictly ordered according +to a defined scheme. See `Version scheme`_ below. Example:: @@ -120,6 +125,21 @@ Summary: A module for collecting votes from beagles. +Private-Version (optional) +-------------------------- + +An arbitrary private version label. 
Private version labels are intended +for internal use by a project, and cannot be used in version specifiers. +See `Compatibility with other version schemes`_ below. + +Examples:: + + Private-Version: 1.0.0-alpha.1 + Private-Version: 1.3.7+build.11.e0f985a + Private-Version: v1.8.1.301.ga0df26f + Private-Version: 2013.02.17.dev123 + + Description (optional, deprecated) ---------------------------------- @@ -263,6 +283,8 @@ Each entry is a string giving a single classification value for the distribution. Classifiers are described in PEP 301 [2]. +`Environment markers`_ may be used with this field. + Examples:: Classifier: Development Status :: 4 - Beta @@ -299,6 +321,8 @@ in `Version scheme`_. The distribution's version identifier will be implied if none is specified. +`Environment markers`_ may be used with this field. + Examples:: Provides-Dist: ThisProject @@ -360,6 +384,8 @@ Package Index`_; often the same as, but distinct from, the module names as accessed with ``import x``. +`Environment markers`_ may be used with this field. + Version declarations must follow the rules described in `Version specifiers`_ @@ -404,6 +430,8 @@ This field specifies the Python version(s) that the distribution is guaranteed to be compatible with. +`Environment markers`_ may be used with this field. + Version declarations must be in the format specified in `Version specifiers`_. @@ -439,6 +467,8 @@ dependency, optionally followed by a version declaration within parentheses. +`Environment markers`_ may be used with this field. 
+ Because they refer to non-Python software releases, version identifiers for this field are **not** required to conform to the format described in `Version scheme`_: they should correspond to the @@ -542,12 +572,14 @@ Version scheme ============== -Version identifiers must comply with the following scheme:: +Public version identifiers must comply with the following scheme:: N[.N]+[{a|b|c|rc}N][.postN][.devN] Version identifiers which do not comply with this scheme are an error. +Version identifiers must not include leading or trailing whitespace. + Any given version will be a "release", "pre-release", "post-release" or "developmental release" as defined in the following sections. @@ -576,6 +608,12 @@ in turn, with "component does not exist" sorted ahead of all numeric values. +Date based release numbers are explicitly excluded from compatibility with +this scheme, as they hinder automatic translation to other versioning +schemes, as well as preventing the adoption of semantic versioning without +changing the name of the project. Accordingly, a leading release component +greater than or equal to ``1980`` is an error. + While any number of additional components after the first are permitted under this scheme, the most common variants are to use two components ("major.minor") or three components ("major.minor.micro"). @@ -612,37 +650,6 @@ above shows both styles, always including the ``.0`` at the second level and consistently omitting it at the third level. -.. note:: - - While date based release numbers, using the forms ``year.month`` or - ``year.month.day``, are technically compliant with this scheme, their use - is strongly discouraged as they can hinder automatic translation to - other versioning schemes. In particular, they are completely - incompatible with semantic versioning. 
- - -Semantic versioning -------------------- - -`Semantic versioning`_ is a popular version identification scheme that is -more prescriptive than this PEP regarding the significance of different -elements of a release number. Even if a project chooses not to abide by -the details of semantic versioning, the scheme is worth understanding as -it covers many of the issues that can arise when depending on other -distributions, and when publishing a distribution that others rely on. - -The "Major.Minor.Patch" (described in this PEP as "major.minor.micro") -aspects of semantic versioning (clauses 1-9 in the 2.0.0-rc-1 specification) -are fully compatible with the version scheme defined in this PEP, and abiding -by these aspects is encouraged. - -Semantic versions containing a hyphen (pre-releases - clause 10) or a -plus sign (builds - clause 11) are *not* compatible with this PEP -and are not permitted in compliant metadata. Use this PEP's deliberately -more restricted pre-release and developmental release notation instead. - -.. _Semantic versioning: http://semver.org/ - Pre-releases ------------ @@ -898,6 +905,70 @@ should be used in preference to the one defined in PEP 386. +Compatibility with other version schemes +---------------------------------------- + +Some projects may choose to use a version scheme which requires +translation in order to comply with the public version scheme defined in +this PEP. In such cases, the `Private-Version`__ field can be used to +record the project specific version as an arbitrary label, while the +translated public version is given in the `Version`_ field. + +__ `Private-Version (optional)`_ + +This allows automated distribution tools to provide consistently correct +ordering of published releases, while still allowing developers to use +the internal versioning scheme they prefer for their projects. 
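As a rough sketch (not part of the PEP text), the split between public and private versions can be checked with a regular expression modelled on the ``N[.N]+[{a|b|c|rc}N][.postN][.devN]`` scheme; the ``Private-Version`` examples given earlier fail the check, while translated public identifiers pass. The whitespace rule and the exclusion of date-based leading components (>= 1980) would need separate checks.

```python
import re

# Regex modelled on the public version scheme; simplified from the
# PEP426_VERSION_RE used by the pepsort.py analysis script in this PEP.
PUBLIC_VERSION = re.compile(r'^(\d+(\.\d+)*)'      # release: N[.N]+
                            r'((a|b|c|rc)(\d+))?'  # pre-release
                            r'(\.post(\d+))?'      # post-release
                            r'(\.dev(\d+))?$')     # developmental release

def is_public_version(label):
    """True if label fits the public scheme (whitespace/date rules aside)."""
    return PUBLIC_VERSION.match(label) is not None

assert is_public_version("1.0rc1")
assert is_public_version("1.0.dev456")
assert not is_public_version("1.0.0-alpha.1")   # semantic-versioning label
assert not is_public_version("1.3.7+build.11")  # build metadata not allowed
```

A project using one of the rejected labels would record it in ``Private-Version`` and publish a translated identifier (e.g. ``1.0a1`` or a ``.devN`` form) in ``Version``.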
+ + +Semantic versioning +~~~~~~~~~~~~~~~~~~~ + +`Semantic versioning`_ is a popular version identification scheme that is +more prescriptive than this PEP regarding the significance of different +elements of a release number. Even if a project chooses not to abide by +the details of semantic versioning, the scheme is worth understanding as +it covers many of the issues that can arise when depending on other +distributions, and when publishing a distribution that others rely on. + +The "Major.Minor.Patch" (described in this PEP as "major.minor.micro") +aspects of semantic versioning (clauses 1-9 in the 2.0.0-rc-1 specification) +are fully compatible with the version scheme defined in this PEP, and abiding +by these aspects is encouraged. + +Semantic versions containing a hyphen (pre-releases - clause 10) or a +plus sign (builds - clause 11) are *not* compatible with this PEP +and are not permitted in the public `Version`_ field. + +One possible mechanism to translate such private semantic versions to +compatible public versions is to use the ``.devN`` suffix to specify the +appropriate version order. + +.. _Semantic versioning: http://semver.org/ + + +DVCS based version labels +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Many build tools integrate with distributed version control systems like +Git and Mercurial in order to add an identifying hash to the version +identifier. As hashes cannot be ordered reliably such versions are not +permitted in the public `Version`_ field. + +As with semantic versioning, the public ``.devN`` suffix may be used to +uniquely identify such releases for publication, while the private +version field is used to record the original version label. + + +Date based versions +~~~~~~~~~~~~~~~~~~~ + +As with other incompatible version schemes, date based versions can be +stored in the ``Private-Version`` field. 
Translating them to a compliant +version is straightforward: the simplest approach is to subtract the year +of the first release from the major component in the release number. + + Version specifiers ================== @@ -1043,31 +1114,33 @@ The pseudo-grammar is :: - EXPR [in|==|!=|not in]?EXPR [or|and] ... + MARKER: EXPR [(and|or) EXPR]* + EXPR: ("(" MARKER ")") | (SUBEXPR [(in|==|!=|not in)?SUBEXPR]) -where ``EXPR`` belongs to any of these: +where ``SUBEXPR`` belongs to any of the following (the details after the +colon in each entry define the value represented by that subexpression): -- python_version = '%s.%s' % (sys.version_info[0], sys.version_info[1]) -- python_full_version = sys.version.split()[0] -- os.name = os.name -- sys.platform = sys.platform -- platform.version = platform.version() -- platform.machine = platform.machine() -- platform.python_implementation = platform.python_implementation() -- a free string, like ``'2.4'``, or ``'win32'`` -- extra = (name of requested feature) or None +* ``python_version``: '%s.%s' % (sys.version_info[0], sys.version_info[1]) +* ``python_full_version``: sys.version.split()[0] +* ``os.name````: os.name +* ``sys.platform````: sys.platform +* ``platform.version``: platform.version() +* ``platform.machine``: platform.machine() +* ``platform.python_implementation``: = platform.python_implementation() +* ``extra``: (name of requested feature) or None +* ``'text'``: a free string, like ``'2.4'``, or ``'win32'`` -Notice that ``in`` is restricted to strings, meaning that it is not possible -to use other sequences like tuples or lists on the right side. +Notice that ``in`` and ``not in`` are restricted to strings, meaning that it +is not possible to use other sequences like tuples or lists on the right +side. 
The fields that benefit from this marker are: -- ``Requires-Python`` -- ``Requires-External`` -- ``Requires-Dist`` -- ``Setup-Requires-Dist`` -- ``Provides-Dist`` -- ``Classifier`` +* ``Requires-Python`` +* ``Requires-External`` +* ``Requires-Dist`` +* ``Provides-Dist`` +* ``Classifier`` Optional features @@ -1133,11 +1206,24 @@ * Values are now expected to be UTF-8 -* Changed the version scheme (eliminating the dependency on PEP 386) +* Changed the version scheme + + * added the new ``Private-Version`` field + * changed the top level sort position of the ``.devN`` suffix + * allowed single value version numbers + * explicit exclusion of leading or trailing whitespace + * explicit criterion for the exclusion of date based versions + * incorporated the version scheme directly into the PEP * Changed interpretation of version specifiers -* Explicit handling of ordering and dependencies across metadata versions + * implicitly exclude pre-releases unless explicitly requested + * treat post releases the same way as unqualified releases + +* Discuss ordering and dependencies across metadata versions + +* Clarify use of parentheses for grouping in environment marker + pseudo-grammar * Support for packaging, build and installation dependencies @@ -1188,6 +1274,13 @@ Changing the version scheme --------------------------- +The new ``Private-Version`` field is intended to make it clearer that the +constraints on public version identifiers are there primarily to aid in +the creation of reliable automated dependency analysis tools. Projects +are free to use whatever versioning scheme they like internally, so long +as they are able to translate it to something the dependency analysis tools +will understand. + The key change in the version scheme in this PEP relative to that in PEP 386 is to sort top level developmental releases like ``X.Y.devN`` ahead of alpha releases like ``X.Ya1``. 
This is a far more logical sort order, as @@ -1214,12 +1307,68 @@ version specifiers and release numbers, rather than splitting the two definitions. +The exclusion of leading and trailing whitespace was made explicit after +a couple of projects with version identifiers differing only in a +trailing ``\n`` character were found on PyPI. + +The exclusion of major release numbers that looks like dates was implied +by the overall text of PEP 386, but not clear in the definition of the +version scheme. This exclusion has been made clear in the definition of +the release component. + Finally, as the version scheme in use is dependent on the metadata version, it was deemed simpler to merge the scheme definition directly into this PEP rather than continuing to maintain it as a separate PEP. This will also allow all of the distutils-specific elements of PEP 386 to finally be formally rejected. +The following statistics provide an analysis of the compatibility of existing +projects on PyPI with the specified versioning scheme (as of 16th February, +2013). + +* Total number of distributions analysed: 28088 +* Distributions with no releases: 248 / 28088 (0.88 %) + +* Fully compatible distributions: 24142 / 28088 (85.95 %) +* Compatible distributions after translation: 2830 / 28088 (10.08 %) +* Compatible distributions after filtering: 511 / 28088 (1.82 %) +* Distributions sorted differently after translation: 38 / 28088 (0.14 %) +* Distributions sorted differently without translation: 2 / 28088 (0.01 %) +* Distributions with no compatible releases: 317 / 28088 (1.13 %) + +The two remaining sort order discrepancies picked up by the analysis are due +to a pair of projects which have published releases ending with a carriage +return, alongside releases with the same version number, only *without* the +trailing carriage return. 
+ +The sorting discrepancies after translation relate mainly to differences +in the handling of pre-releases where the standard mechanism is considered +to be an improvement. For example, the existing pkg_resources scheme will +sort "1.1beta1" *after* "1.1b2", whereas the suggested standard translation +for "1.1beta1" is "1.1b1", which sorts *before* "1.1b2". Similarly, the +pkg_resources scheme will sort "-dev-N" pre-releases differently from +"devN" releases when they occur within the same release, while the +standard scheme will normalize both representations to ".devN" and sort +them by the numeric component. + +For comparison, here are the corresponding analysis results for PEP 386: + +* Fully compatible distributions: 23874 / 28088 (85.00 %) +* Compatible distributions after translation: 2786 / 28088 (9.92 %) +* Compatible distributions after filtering: 527 / 28088 (1.88 %) +* Distributions sorted differently after translation: 96 / 28088 (0.34 %) +* Distributions sorted differently without translation: 14 / 28088 (0.05 %) +* Distributions with no compatible releases: 543 / 28088 (1.93 %) + +These figures make it clear that only a relatively small number of current +projects are affected by these changes. However, some of the affected +projects are in widespread use (such as Pinax and selenium). The +changes also serve to bring the standard scheme more into line with +developer's expectations, which is an important element in encouraging +adoption of the new metadata version. + +The script used for the above analysis is available at [3]_. + A more opinionated description of the versioning scheme ------------------------------------------------------- @@ -1357,6 +1506,8 @@ .. [2] PEP 301: http://www.python.org/dev/peps/pep-0301/ +.. 
[3] Version compatibility analysis script + http://hg.python.org/peps/file/default/pep-0426/pepsort.py Appendix ======== diff --git a/pep-0426/pepsort.py b/pep-0426/pepsort.py new file mode 100755 --- /dev/null +++ b/pep-0426/pepsort.py @@ -0,0 +1,233 @@ +#!/usr/bin/env python3 + +# Distribution sorting comparisons +# between pkg_resources, PEP 386 and PEP 426 +# +# Requires distlib, original script written by Vinay Sajip + +import logging +import re +import sys +import json +import errno +import time + +from distlib.compat import xmlrpclib +from distlib.version import suggest_normalized_version, legacy_key, normalized_key + +logger = logging.getLogger(__name__) + +PEP426_VERSION_RE = re.compile('^(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' + '(\.(post)(\d+))?(\.(dev)(\d+))?$') + +def pep426_key(s): + s = s.strip() + m = PEP426_VERSION_RE.match(s) + if not m: + raise ValueError('Not a valid version: %s' % s) + groups = m.groups() + nums = tuple(int(v) for v in groups[0].split('.')) + while len(nums) > 1 and nums[-1] == 0: + nums = nums[:-1] + + pre = groups[3:5] + post = groups[6:8] + dev = groups[9:11] + if pre == (None, None): + pre = () + else: + pre = pre[0], int(pre[1]) + if post == (None, None): + post = () + else: + post = post[0], int(post[1]) + if dev == (None, None): + dev = () + else: + dev = dev[0], int(dev[1]) + if not pre: + # either before pre-release, or final release and after + if not post and dev: + # before pre-release + pre = ('a', -1) # to sort before a0 + else: + pre = ('z',) # to sort after all pre-releases + # now look at the state of post and dev. 
+ if not post: + post = ('a',) + if not dev: + dev = ('final',) + + return nums, pre, post, dev + +def cache_projects(cache_name): + logger.info("Retrieving package data from PyPI") + client = xmlrpclib.ServerProxy('http://python.org/pypi') + projects = dict.fromkeys(client.list_packages()) + failed = [] + for pname in projects: + time.sleep(0.1) + logger.debug("Retrieving versions for %s", pname) + try: + projects[pname] = list(client.package_releases(pname, True)) + except: + failed.append(pname) + logger.warn("Error retrieving versions for %s", failed) + with open(cache_name, 'w') as f: + json.dump(projects, f, sort_keys=True, + indent=2, separators=(',', ': ')) + return projects + +def get_projects(cache_name): + try: + f = open(cache_name) + except IOError as exc: + if exc.errno != errno.ENOENT: + raise + projects = cache_projects(cache_name); + else: + with f: + projects = json.load(f) + return projects + + +VERSION_CACHE = "pepsort_cache.json" + +class Category(set): + + def __init__(self, title, num_projects): + super().__init__() + self.title = title + self.num_projects = num_projects + + def __str__(self): + num_projects = self.num_projects + num_in_category = len(self) + pct = (100.0 * num_in_category) / num_projects + return "{}: {:d} / {:d} ({:.2f} %)".format( + self.title, num_in_category, num_projects, pct) + +SORT_KEYS = { + "386": normalized_key, + "426": pep426_key, +} + +def main(pepno = '426'): + sort_key = SORT_KEYS[pepno] + print('Comparing PEP %s version sort to setuptools.' 
% pepno) + + projects = get_projects(VERSION_CACHE) + num_projects = len(projects) + + null_projects = Category("No releases", num_projects) + compatible_projects = Category("Compatible", num_projects) + translated_projects = Category("Compatible with translation", num_projects) + filtered_projects = Category("Compatible with filtering", num_projects) + sort_error_translated_projects = Category("Translations sort differently", num_projects) + sort_error_compatible_projects = Category("Incompatible due to sorting errors", num_projects) + incompatible_projects = Category("Incompatible", num_projects) + + categories = [ + null_projects, + compatible_projects, + translated_projects, + filtered_projects, + sort_error_translated_projects, + sort_error_compatible_projects, + incompatible_projects, + ] + + sort_failures = 0 + for i, (pname, versions) in enumerate(projects.items()): + if i % 100 == 0: + sys.stderr.write('%s / %s\r' % (i, num_projects)) + sys.stderr.flush() + if not versions: + logger.debug('%-15.15s has no releases', pname) + null_projects.add(pname) + continue + # list_legacy and list_pep will contain 2-tuples + # comprising a sortable representation according to either + # the setuptools (legacy) algorithm or the PEP algorithm. 
+ # followed by the original version string + list_legacy = [(legacy_key(v), v) for v in versions] + # Go through the PEP 386/426 stuff one by one, since + # we might get failures + list_pep = [] + excluded_versions = set() + translated_versions = set() + for v in versions: + try: + k = sort_key(v) + except Exception: + s = suggest_normalized_version(v) + if not s: + good = False + logger.debug('%-15.15s failed for %r, no suggestions', pname, v) + excluded_versions.add(v) + continue + else: + try: + k = sort_key(s) + except ValueError: + logger.error('%-15.15s failed for %r, with suggestion %r', + pname, v, s) + excluded_versions.add(v) + continue + logger.debug('%-15.15s translated %r to %r', pname, v, s) + translated_versions.add(v) + list_pep.append((k, v)) + if not list_pep: + logger.debug('%-15.15s has no compatible releases', pname) + incompatible_projects.add(pname) + continue + # Now check the versions sort as expected + if excluded_versions: + list_legacy = [(k, v) for k, v in list_legacy + if v not in excluded_versions] + assert len(list_legacy) == len(list_pep) + sorted_legacy = sorted(list_legacy) + sorted_pep = sorted(list_pep) + sv_legacy = [t[1] for t in sorted_legacy] + sv_pep = [t[1] for t in sorted_pep] + if sv_legacy != sv_pep: + if translated_versions: + logger.debug('%-15.15s translation creates sort differences', pname) + sort_error_translated_projects.add(pname) + else: + logger.debug('%-15.15s incompatible due to sort errors', pname) + sort_error_compatible_projects.add(pname) + logger.debug('%-15.15s unequal: legacy: %s', pname, sv_legacy) + logger.debug('%-15.15s unequal: pep%s: %s', pname, pepno, sv_pep) + continue + # The project is compatible to some degree, + if excluded_versions: + logger.debug('%-15.15s has some compatible releases', pname) + filtered_projects.add(pname) + continue + if translated_versions: + logger.debug('%-15.15s is compatible after translation', pname) + translated_projects.add(pname) + continue + 
logger.debug('%-15.15s is fully compatible', pname) + compatible_projects.add(pname) + + for category in categories: + print(category) + + # Uncomment the line below to explore differences in details + # import pdb; pdb.set_trace() + # Grepping the log files is also informative + # e.g. "grep unequal pep426sort.log" for the PEP 426 sort differences + +if __name__ == '__main__': + if len(sys.argv) > 1 and sys.argv[1] == '386': + pepno = '386' + else: + pepno = '426' + logname = 'pep{}sort.log'.format(pepno) + logging.basicConfig(level=logging.DEBUG, filename=logname, + filemode='w', format='%(message)s') + logger.setLevel(logging.DEBUG) + main(pepno) + -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 17 09:17:33 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 17 Feb 2013 09:17:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Reclaim_BDFL-Delegate_status_?= =?utf-8?q?for_PEP_426?= Message-ID: <3Z81KT0HGczRdQ@mail.python.org> http://hg.python.org/peps/rev/e196c46d1db5 changeset: 4747:e196c46d1db5 user: Nick Coghlan date: Sun Feb 17 18:17:24 2013 +1000 summary: Reclaim BDFL-Delegate status for PEP 426 files: pep-0426.txt | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -5,6 +5,7 @@ Author: Daniel Holth , Donald Stufft , Nick Coghlan +BDFL-Delegate: Nick Coghlan Discussions-To: Distutils SIG Status: Draft Type: Standards Track -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 17 10:34:52 2013 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 17 Feb 2013 10:34:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Set_cache_size?= =?utf-8?q?s_to_a_power-of-two?= Message-ID: <3Z832h548SzSd1@mail.python.org> http://hg.python.org/cpython/rev/4ab91904f232 changeset: 82239:4ab91904f232 branch: 3.3 parent: 82236:0f65bf6063ca user: Raymond Hettinger 
date: Sun Feb 17 01:33:37 2013 -0800 summary: Set cache sizes to a power-of-two files: Lib/fnmatch.py | 2 +- Lib/re.py | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/fnmatch.py b/Lib/fnmatch.py --- a/Lib/fnmatch.py +++ b/Lib/fnmatch.py @@ -35,7 +35,7 @@ pat = os.path.normcase(pat) return fnmatchcase(name, pat) - at functools.lru_cache(maxsize=250, typed=True) + at functools.lru_cache(maxsize=256, typed=True) def _compile_pattern(pat): if isinstance(pat, bytes): pat_str = str(pat, 'ISO-8859-1') diff --git a/Lib/re.py b/Lib/re.py --- a/Lib/re.py +++ b/Lib/re.py @@ -261,7 +261,7 @@ _pattern_type = type(sre_compile.compile("", 0)) - at functools.lru_cache(maxsize=500, typed=True) + at functools.lru_cache(maxsize=512, typed=True) def _compile(pattern, flags): # internal: compile pattern if isinstance(pattern, _pattern_type): @@ -273,7 +273,7 @@ raise TypeError("first argument must be string or compiled pattern") return sre_compile.compile(pattern, flags) - at functools.lru_cache(maxsize=500) + at functools.lru_cache(maxsize=512) def _compile_repl(repl, pattern): # internal: compile replacement pattern return sre_parse.parse_template(repl, pattern) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 10:34:54 2013 From: python-checkins at python.org (raymond.hettinger) Date: Sun, 17 Feb 2013 10:34:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_merge?= Message-ID: <3Z832k0YnqzScJ@mail.python.org> http://hg.python.org/cpython/rev/46f6e052cef9 changeset: 82240:46f6e052cef9 parent: 82238:5405651bbef8 parent: 82239:4ab91904f232 user: Raymond Hettinger date: Sun Feb 17 01:34:17 2013 -0800 summary: merge files: Lib/fnmatch.py | 2 +- Lib/re.py | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/fnmatch.py b/Lib/fnmatch.py --- a/Lib/fnmatch.py +++ b/Lib/fnmatch.py @@ -35,7 +35,7 @@ pat = os.path.normcase(pat) return 
fnmatchcase(name, pat) - at functools.lru_cache(maxsize=250, typed=True) + at functools.lru_cache(maxsize=256, typed=True) def _compile_pattern(pat): if isinstance(pat, bytes): pat_str = str(pat, 'ISO-8859-1') diff --git a/Lib/re.py b/Lib/re.py --- a/Lib/re.py +++ b/Lib/re.py @@ -261,7 +261,7 @@ _pattern_type = type(sre_compile.compile("", 0)) - at functools.lru_cache(maxsize=500, typed=True) + at functools.lru_cache(maxsize=512, typed=True) def _compile(pattern, flags): # internal: compile pattern if isinstance(pattern, _pattern_type): @@ -273,7 +273,7 @@ raise TypeError("first argument must be string or compiled pattern") return sre_compile.compile(pattern, flags) - at functools.lru_cache(maxsize=500) + at functools.lru_cache(maxsize=512) def _compile_repl(repl, pattern): # internal: compile replacement pattern return sre_parse.parse_template(repl, pattern) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 10:44:06 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 17 Feb 2013 10:44:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Bump_PEP_426_metadata_version?= =?utf-8?q?_to_2=2E0?= Message-ID: <3Z83FL3HGWzScD@mail.python.org> http://hg.python.org/peps/rev/7702cc74d6ed changeset: 4748:7702cc74d6ed user: Nick Coghlan date: Sun Feb 17 19:43:54 2013 +1000 summary: Bump PEP 426 metadata version to 2.0 files: pep-0426.txt | 51 +++++++++++++++++++++++++++++---------- 1 files changed, 38 insertions(+), 13 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -1,5 +1,5 @@ PEP: 426 -Title: Metadata for Python Software Packages 1.3 +Title: Metadata for Python Software Packages 2.0 Version: $Revision$ Last-Modified: $Date$ Author: Daniel Holth , @@ -21,12 +21,12 @@ It includes specifics of the field names, and their semantics and usage. -This document specifies version 1.3 of the metadata format. +This document specifies version 2.0 of the metadata format. 
Version 1.0 is specified in PEP 241. Version 1.1 is specified in PEP 314. Version 1.2 is specified in PEP 345. -Version 1.3 of the metadata format adds fields designed to make +Version 2.0 of the metadata format adds fields designed to make third-party packaging of Python Software easier and defines a formal extension mechanism. It also adds support for optional features of distributions and allows the description to be placed into a payload @@ -65,7 +65,7 @@ Encoding ======== -Metadata 1.3 files are UTF-8 with the restriction that keys must be +Metadata 2.0 files are UTF-8 with the restriction that keys must be ASCII. Parser implementations should be aware that older versions of the Metadata specification do not specify an encoding. @@ -76,7 +76,7 @@ This section specifies the names and semantics of each of the supported fields in the metadata header. -In a single Metadata 1.3 file, fields marked with "(optional)" may occur +In a single Metadata 2.0 file, fields marked with "(optional)" may occur 0 or 1 times. Fields marked with "(multiple use)" may be specified 0, 1 or more times. Only "Metadata-Version", "Name", "Version", and "Summary" must appear exactly once. @@ -87,11 +87,15 @@ Metadata-Version ---------------- -Version of the file format; "1.3" is the only legal value. +Version of the file format; "2.0" is the only legal value. + +Automated tools should warn if ``Metadata-Version`` is greater than the +highest version they support, and must fail if ``Metadata-Version`` has +a greater major version than the highest version they support. Example:: - Metadata-Version: 1.3 + Metadata-Version: 2.0 Name @@ -144,7 +148,7 @@ Description (optional, deprecated) ---------------------------------- -Starting with Metadata 1.3, the recommended place for the description is in +Starting with Metadata 2.0, the recommended place for the description is in the payload section of the document, after the last header. 
The description does not need to be reformatted when it is included in the payload. @@ -1196,7 +1200,8 @@ Summary of differences from \PEP 345 ==================================== -* Metadata-Version is now 1.3 +* Metadata-Version is now 2.0, with semantics specified for handling + version changes * Most fields are now optional @@ -1263,6 +1268,26 @@ The rationale for major changes is given in the following sections. +Metadata-Version semantics +-------------------------- + +The semantics of major and minor version increments are now specified, +and follow the same model as the format version semantics specified for +the wheel format in PEP 427: minor version increments must behave +reasonably when processed by a tool that only understands earlier metadata +versions with the same major version, while major version increments +may include changes that are not compatible with existing tools. + +The major version number of the specification has been incremented +accordingly, as interpreting PEP 426 metadata in accordance with earlier +metadata specifications is unlikely to give the expected behaviour. + +Whenever the major version number of the specification is incremented, it +is expected that deployment will take some time, as metadata consuming tools +must be updated before other tools can safely start producing the new +format. + + Standard encoding and other format clarifications ------------------------------------------------- @@ -1491,7 +1516,7 @@ References ========== -This document specifies version 1.3 of the metadata format. +This document specifies version 2.0 of the metadata format. Version 1.0 is specified in PEP 241. Version 1.1 is specified in PEP 314. Version 1.2 is specified in PEP 345.
@@ -1513,10 +1538,10 @@ Appendix ======== -Parsing and generating the Metadata 1.3 serialization format using +Parsing and generating the Metadata 2.0 serialization format using Python 3.3:: - # Metadata 1.3 demo + # Metadata 2.0 demo from email.generator import Generator from email import header from email.parser import Parser @@ -1545,7 +1570,7 @@ import textwrap pkg_info = """\ - Metadata-Version: 1.3 + Metadata-Version: 2.0 Name: package Version: 0.1.0 Summary: A package. -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 17 10:59:35 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 17 Feb 2013 10:59:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426_readability_tweaks?= Message-ID: <3Z83bC1LZ3zSXM@mail.python.org> http://hg.python.org/peps/rev/5a1b54cb7b78 changeset: 4749:5a1b54cb7b78 user: Nick Coghlan date: Sun Feb 17 19:59:27 2013 +1000 summary: PEP 426 readability tweaks files: pep-0426.txt | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -1354,7 +1354,6 @@ * Total number of distributions analysed: 28088 * Distributions with no releases: 248 / 28088 (0.88 %) - * Fully compatible distributions: 24142 / 28088 (85.95 %) * Compatible distributions after translation: 2830 / 28088 (10.08 %) * Compatible distributions after filtering: 511 / 28088 (1.82 %) @@ -1379,6 +1378,8 @@ For comparison, here are the corresponding analysis results for PEP 386: +* Total number of distributions analysed: 28088 +* Distributions with no releases: 248 / 28088 (0.88 %) * Fully compatible distributions: 23874 / 28088 (85.00 %) * Compatible distributions after translation: 2786 / 28088 (9.92 %) * Compatible distributions after filtering: 527 / 28088 (1.88 %) @@ -1532,7 +1533,7 @@ .. [2] PEP 301: http://www.python.org/dev/peps/pep-0301/ -.. [3] Version compatibility analysis script +.. 
[3] Version compatibility analysis script: http://hg.python.org/peps/file/default/pep-0426/pepsort.py Appendix -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 17 11:03:30 2013 From: python-checkins at python.org (nick.coghlan) Date: Sun, 17 Feb 2013 11:03:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Fix_typo_in_PEP_426?= Message-ID: <3Z83gk2lvmzPM8@mail.python.org> http://hg.python.org/peps/rev/e8b120a12fc4 changeset: 4750:e8b120a12fc4 user: Nick Coghlan date: Sun Feb 17 20:03:21 2013 +1000 summary: Fix typo in PEP 426 files: pep-0426.txt | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -1372,7 +1372,7 @@ sort "1.1beta1" *after* "1.1b2", whereas the suggested standard translation for "1.1beta1" is "1.1b1", which sorts *before* "1.1b2". Similarly, the pkg_resources scheme will sort "-dev-N" pre-releases differently from -"devN" releases when they occur within the same release, while the +"devN" pre-releases when they occur within the same release, while the standard scheme will normalize both representations to ".devN" and sort them by the numeric component. 
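[Editor's note: a minimal sketch, not the pepsort.py code from the repository, of the pre-release ordering the PEP text above describes. The regex and key function here are simplified assumptions covering only release numbers plus a/b/c/rc pre-release tags; the point is that "1.1b1" sorts before "1.1b2", and both sort before the final "1.1".]

```python
import re

# Simplified normalized-version sort key: (release tuple, pre-release tag).
_VERSION_RE = re.compile(r'^(\d+(?:\.\d+)*)(?:(a|b|c|rc)(\d+))?$')

def key(version):
    m = _VERSION_RE.match(version)
    if m is None:
        raise ValueError('not a valid version: %r' % version)
    release = tuple(int(p) for p in m.group(1).split('.'))
    if m.group(2) is None:
        # Final releases sort after any pre-release of the same version;
        # 'z' is a sentinel that compares greater than 'a'/'b'/'c'/'rc'.
        pre = ('z',)
    else:
        pre = (m.group(2), int(m.group(3)))
    return release, pre

versions = ["1.1", "1.1b2", "1.1b1", "1.1rc1"]
print(sorted(versions, key=key))
# ['1.1b1', '1.1b2', '1.1rc1', '1.1']
```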
-- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sun Feb 17 15:56:40 2013 From: python-checkins at python.org (andrew.svetlov) Date: Sun, 17 Feb 2013 15:56:40 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MjE1?= =?utf-8?q?=3A_Fix_documentation_misprints_=28patch_by_July_Tikhonov=29?= Message-ID: <3Z8BB01TDHzSfT@mail.python.org> http://hg.python.org/cpython/rev/8c7719b06ba6 changeset: 82241:8c7719b06ba6 branch: 3.3 parent: 82239:4ab91904f232 user: Andrew Svetlov date: Sun Feb 17 16:55:58 2013 +0200 summary: Issue #17215: Fix documentation misprints (patch by July Tikhonov) files: Doc/library/importlib.rst | 2 +- Doc/library/io.rst | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst --- a/Doc/library/importlib.rst +++ b/Doc/library/importlib.rst @@ -328,7 +328,7 @@ .. class:: FileLoader(fullname, path) An abstract base class which inherits from :class:`ResourceLoader` and - :class:`ExecutionLoader`, providing concreate implementations of + :class:`ExecutionLoader`, providing concrete implementations of :meth:`ResourceLoader.get_data` and :meth:`ExecutionLoader.get_filename`. The *fullname* argument is a fully resolved name of the module the loader is diff --git a/Doc/library/io.rst b/Doc/library/io.rst --- a/Doc/library/io.rst +++ b/Doc/library/io.rst @@ -110,7 +110,7 @@ :func:`os.stat`) if possible. -.. function:: open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True) +.. function:: open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None) This is an alias for the builtin :func:`open` function. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 17 15:56:41 2013 From: python-checkins at python.org (andrew.svetlov) Date: Sun, 17 Feb 2013 15:56:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317215=3A_Fix_documentation_misprints_=28patch_b?= =?utf-8?q?y_July_Tikhonov=29?= Message-ID: <3Z8BB14QllzSh4@mail.python.org> http://hg.python.org/cpython/rev/627ebd001708 changeset: 82242:627ebd001708 parent: 82240:46f6e052cef9 parent: 82241:8c7719b06ba6 user: Andrew Svetlov date: Sun Feb 17 16:56:28 2013 +0200 summary: Issue #17215: Fix documentation misprints (patch by July Tikhonov) files: Doc/library/importlib.rst | 2 +- Doc/library/io.rst | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst --- a/Doc/library/importlib.rst +++ b/Doc/library/importlib.rst @@ -326,7 +326,7 @@ .. class:: FileLoader(fullname, path) An abstract base class which inherits from :class:`ResourceLoader` and - :class:`ExecutionLoader`, providing concreate implementations of + :class:`ExecutionLoader`, providing concrete implementations of :meth:`ResourceLoader.get_data` and :meth:`ExecutionLoader.get_filename`. The *fullname* argument is a fully resolved name of the module the loader is diff --git a/Doc/library/io.rst b/Doc/library/io.rst --- a/Doc/library/io.rst +++ b/Doc/library/io.rst @@ -110,7 +110,7 @@ :func:`os.stat`) if possible. -.. function:: open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True) +.. function:: open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None) This is an alias for the builtin :func:`open` function. 
-- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Mon Feb 18 05:59:58 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Mon, 18 Feb 2013 05:59:58 +0100 Subject: [Python-checkins] Daily reference leaks (627ebd001708): sum=0 Message-ID: results for 627ebd001708 on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflog8jBSSn', '-x'] From python-checkins at python.org Mon Feb 18 10:30:44 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 10:30:44 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogRml4IGlzc3VlICMx?= =?utf-8?q?3169=3A_Reimport_MAXREPEAT_into_sre=5Fconstants=2Epy=2E?= Message-ID: <3Z8fvS2Ld5zPfX@mail.python.org> http://hg.python.org/cpython/rev/a80ea934da9a changeset: 82243:a80ea934da9a branch: 2.7 parent: 82231:23393309d7a6 user: Serhiy Storchaka date: Mon Feb 18 11:14:04 2013 +0200 summary: Fix issue #13169: Reimport MAXREPEAT into sre_constants.py. files: Lib/sre_constants.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,6 +15,8 @@ MAGIC = 20031017 +from _sre import MAXREPEAT + # SRE standard exception (access as sre.error) # should this really be here? 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 10:30:45 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 10:30:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogRml4IGlzc3VlICMx?= =?utf-8?q?3169=3A_Reimport_MAXREPEAT_into_sre=5Fconstants=2Epy=2E?= Message-ID: <3Z8fvT4qfDzPfX@mail.python.org> http://hg.python.org/cpython/rev/a6231ed7bff4 changeset: 82244:a6231ed7bff4 branch: 3.2 parent: 82227:d40afd489b6a user: Serhiy Storchaka date: Mon Feb 18 11:14:21 2013 +0200 summary: Fix issue #13169: Reimport MAXREPEAT into sre_constants.py. files: Lib/sre_constants.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,6 +15,8 @@ MAGIC = 20031017 +from _sre import MAXREPEAT + # SRE standard exception (access as sre.error) # should this really be here? -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 10:30:47 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 10:30:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fix_issue_=2313169=3A_Reimport_MAXREPEAT_into_sre=5Fconstants?= =?utf-8?q?=2Epy=2E?= Message-ID: <3Z8fvW0YF3zPlX@mail.python.org> http://hg.python.org/cpython/rev/88c04657c9f1 changeset: 82245:88c04657c9f1 branch: 3.3 parent: 82241:8c7719b06ba6 parent: 82244:a6231ed7bff4 user: Serhiy Storchaka date: Mon Feb 18 11:18:33 2013 +0200 summary: Fix issue #13169: Reimport MAXREPEAT into sre_constants.py. 
files: Lib/sre_constants.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,6 +15,8 @@ MAGIC = 20031017 +from _sre import MAXREPEAT + # SRE standard exception (access as sre.error) # should this really be here? -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 10:30:48 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 10:30:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Fix_issue_=2313169=3A_Reimport_MAXREPEAT_into_sre=5Fcons?= =?utf-8?b?dGFudHMucHku?= Message-ID: <3Z8fvX3751zPpH@mail.python.org> http://hg.python.org/cpython/rev/3dd5be5c4794 changeset: 82246:3dd5be5c4794 parent: 82242:627ebd001708 parent: 82245:88c04657c9f1 user: Serhiy Storchaka date: Mon Feb 18 11:23:10 2013 +0200 summary: Fix issue #13169: Reimport MAXREPEAT into sre_constants.py. files: Lib/sre_constants.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/sre_constants.py b/Lib/sre_constants.py --- a/Lib/sre_constants.py +++ b/Lib/sre_constants.py @@ -15,6 +15,8 @@ MAGIC = 20031017 +from _sre import MAXREPEAT + # SRE standard exception (access as sre.error) # should this really be here? 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 11:24:40 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 11:24:40 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_posixpath?= =?utf-8?q?=2Erealpath=28=29_for_multiple_pardirs_=28fixes_issue_=236975?= =?utf-8?b?KS4=?= Message-ID: <3Z8h5h0bR3zPl5@mail.python.org> http://hg.python.org/cpython/rev/50ed06b3d419 changeset: 82247:50ed06b3d419 branch: 2.7 parent: 82243:a80ea934da9a user: Serhiy Storchaka date: Mon Feb 18 12:20:44 2013 +0200 summary: Fix posixpath.realpath() for multiple pardirs (fixes issue #6975). files: Lib/posixpath.py | 6 ++++-- Lib/test/test_posixpath.py | 10 ++++++++++ 2 files changed, 14 insertions(+), 2 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -382,9 +382,11 @@ if name == pardir: # parent dir if path: - path = dirname(path) + path, name = split(path) + if name == pardir: + path = join(path, pardir, pardir) else: - path = name + path = pardir continue newpath = join(path, name) if not islink(newpath): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -214,6 +214,16 @@ self.assertEqual(posixpath.normpath("///foo/.//bar//.//..//.//baz"), "/foo/baz") self.assertEqual(posixpath.normpath("///..//./foo/.//bar"), "/foo/bar") + def test_realpath_curdir(self): + self.assertEqual(realpath('.'), os.getcwd()) + self.assertEqual(realpath('./.'), os.getcwd()) + self.assertEqual(realpath('/'.join(['.'] * 100)), os.getcwd()) + + def test_realpath_pardir(self): + self.assertEqual(realpath('..'), dirname(os.getcwd())) + self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) + self.assertEqual(realpath('/'.join(['..'] * 100)), '/') + if hasattr(os, "symlink"): def test_realpath_basic(self): # Basic operation. 
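[Editor's note: a short sketch, not part of the patches above, showing the effect of the issue #13169 fix. The two-line change re-exports ``MAXREPEAT`` from the ``_sre`` extension module into ``sre_constants``, so existing code that imported it from ``sre_constants`` keeps working.]

```python
import _sre

# With the fix applied, this import succeeds again, because
# sre_constants.py now does ``from _sre import MAXREPEAT``.
from sre_constants import MAXREPEAT

# The re-exported constant is the same object _sre defines.
assert MAXREPEAT == _sre.MAXREPEAT
```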
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 11:24:41 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 11:24:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Fix_posixpath?= =?utf-8?q?=2Erealpath=28=29_for_multiple_pardirs_=28fixes_issue_=236975?= =?utf-8?b?KS4=?= Message-ID: <3Z8h5j3L5NzPpx@mail.python.org> http://hg.python.org/cpython/rev/cb3fbadb65aa changeset: 82248:cb3fbadb65aa branch: 3.2 parent: 82244:a6231ed7bff4 user: Serhiy Storchaka date: Mon Feb 18 12:21:04 2013 +0200 summary: Fix posixpath.realpath() for multiple pardirs (fixes issue #6975). files: Lib/posixpath.py | 6 ++++-- Lib/test/test_posixpath.py | 18 ++++++++++++++++++ 2 files changed, 22 insertions(+), 2 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -417,9 +417,11 @@ if name == pardir: # parent dir if path: - path = dirname(path) + path, name = split(path) + if name == pardir: + path = join(path, pardir, pardir) else: - path = name + path = pardir continue newpath = join(path, name) if not islink(newpath): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -338,6 +338,24 @@ self.assertEqual(posixpath.normpath(b"///..//./foo/.//bar"), b"/foo/bar") + def test_realpath_curdir(self): + self.assertEqual(realpath('.'), os.getcwd()) + self.assertEqual(realpath('./.'), os.getcwd()) + self.assertEqual(realpath('/'.join(['.'] * 100)), os.getcwd()) + + self.assertEqual(realpath(b'.'), os.getcwdb()) + self.assertEqual(realpath(b'./.'), os.getcwdb()) + self.assertEqual(realpath(b'/'.join([b'.'] * 100)), os.getcwdb()) + + def test_realpath_pardir(self): + self.assertEqual(realpath('..'), dirname(os.getcwd())) + self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) + self.assertEqual(realpath('/'.join(['..'] * 100)), '/') + + 
self.assertEqual(realpath(b'..'), dirname(os.getcwdb())) + self.assertEqual(realpath(b'../..'), dirname(dirname(os.getcwdb()))) + self.assertEqual(realpath(b'/'.join([b'..'] * 100)), b'/') + @unittest.skipUnless(hasattr(os, "symlink"), "Missing symlink implementation") @skip_if_ABSTFN_contains_backslash -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 11:24:42 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 11:24:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fix_posixpath=2Erealpath=28=29_for_multiple_pardirs_=28fixes_i?= =?utf-8?b?c3N1ZSAjNjk3NSku?= Message-ID: <3Z8h5k5xRJzPmV@mail.python.org> http://hg.python.org/cpython/rev/aad7e68eff0a changeset: 82249:aad7e68eff0a branch: 3.3 parent: 82245:88c04657c9f1 parent: 82248:cb3fbadb65aa user: Serhiy Storchaka date: Mon Feb 18 12:21:30 2013 +0200 summary: Fix posixpath.realpath() for multiple pardirs (fixes issue #6975). 
files: Lib/posixpath.py | 6 ++++-- Lib/test/test_posixpath.py | 18 ++++++++++++++++++ 2 files changed, 22 insertions(+), 2 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -418,9 +418,11 @@ if name == pardir: # parent dir if path: - path = dirname(path) + path, name = split(path) + if name == pardir: + path = join(path, pardir, pardir) else: - path = name + path = pardir continue newpath = join(path, name) if not islink(newpath): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -340,6 +340,24 @@ self.assertEqual(posixpath.normpath(b"///..//./foo/.//bar"), b"/foo/bar") + def test_realpath_curdir(self): + self.assertEqual(realpath('.'), os.getcwd()) + self.assertEqual(realpath('./.'), os.getcwd()) + self.assertEqual(realpath('/'.join(['.'] * 100)), os.getcwd()) + + self.assertEqual(realpath(b'.'), os.getcwdb()) + self.assertEqual(realpath(b'./.'), os.getcwdb()) + self.assertEqual(realpath(b'/'.join([b'.'] * 100)), os.getcwdb()) + + def test_realpath_pardir(self): + self.assertEqual(realpath('..'), dirname(os.getcwd())) + self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) + self.assertEqual(realpath('/'.join(['..'] * 100)), '/') + + self.assertEqual(realpath(b'..'), dirname(os.getcwdb())) + self.assertEqual(realpath(b'../..'), dirname(dirname(os.getcwdb()))) + self.assertEqual(realpath(b'/'.join([b'..'] * 100)), b'/') + @unittest.skipUnless(hasattr(os, "symlink"), "Missing symlink implementation") @skip_if_ABSTFN_contains_backslash -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 11:24:44 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 11:24:44 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Fix_posixpath=2Erealpath=28=29_for_multiple_pardirs_=28f?= 
=?utf-8?q?ixes_issue_=236975=29=2E?= Message-ID: <3Z8h5m1g10zPpx@mail.python.org> http://hg.python.org/cpython/rev/f99ff3b01fab changeset: 82250:f99ff3b01fab parent: 82246:3dd5be5c4794 parent: 82249:aad7e68eff0a user: Serhiy Storchaka date: Mon Feb 18 12:22:05 2013 +0200 summary: Fix posixpath.realpath() for multiple pardirs (fixes issue #6975). files: Lib/posixpath.py | 6 ++++-- Lib/test/test_posixpath.py | 18 ++++++++++++++++++ 2 files changed, 22 insertions(+), 2 deletions(-) diff --git a/Lib/posixpath.py b/Lib/posixpath.py --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -390,9 +390,11 @@ if name == pardir: # parent dir if path: - path = dirname(path) + path, name = split(path) + if name == pardir: + path = join(path, pardir, pardir) else: - path = name + path = pardir continue newpath = join(path, name) if not islink(newpath): diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -283,6 +283,24 @@ self.assertEqual(posixpath.normpath(b"///..//./foo/.//bar"), b"/foo/bar") + def test_realpath_curdir(self): + self.assertEqual(realpath('.'), os.getcwd()) + self.assertEqual(realpath('./.'), os.getcwd()) + self.assertEqual(realpath('/'.join(['.'] * 100)), os.getcwd()) + + self.assertEqual(realpath(b'.'), os.getcwdb()) + self.assertEqual(realpath(b'./.'), os.getcwdb()) + self.assertEqual(realpath(b'/'.join([b'.'] * 100)), os.getcwdb()) + + def test_realpath_pardir(self): + self.assertEqual(realpath('..'), dirname(os.getcwd())) + self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) + self.assertEqual(realpath('/'.join(['..'] * 100)), '/') + + self.assertEqual(realpath(b'..'), dirname(os.getcwdb())) + self.assertEqual(realpath(b'../..'), dirname(dirname(os.getcwdb()))) + self.assertEqual(realpath(b'/'.join([b'..'] * 100)), b'/') + @unittest.skipUnless(hasattr(os, "symlink"), "Missing symlink implementation") @skip_if_ABSTFN_contains_backslash -- Repository URL: 
http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:07:08 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:07:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzEzMTUz?= =?utf-8?q?=3A_Tkinter_functions_now_raise_TclError_instead_of_ValueError_?= =?utf-8?q?when?= Message-ID: <3Z8j2h22fmzPvT@mail.python.org> http://hg.python.org/cpython/rev/bb5a8564e186 changeset: 82251:bb5a8564e186 branch: 2.7 parent: 82247:50ed06b3d419 user: Serhiy Storchaka date: Mon Feb 18 13:00:08 2013 +0200 summary: Issue #13153: Tkinter functions now raise TclError instead of ValueError when a unicode argument contains non-BMP character. files: Misc/NEWS | 3 +++ Modules/_tkinter.c | 6 ++++-- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -205,6 +205,9 @@ Library ------- +- Issue #13153: Tkinter functions now raise TclError instead of ValueError when + a unicode argument contains non-BMP character. + - Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. diff --git a/Modules/_tkinter.c b/Modules/_tkinter.c --- a/Modules/_tkinter.c +++ b/Modules/_tkinter.c @@ -987,8 +987,10 @@ for (i = 0; i < size; i++) { if (inbuf[i] >= 0x10000) { /* Tcl doesn't do UTF-16, yet. 
*/ - PyErr_SetString(PyExc_ValueError, - "unsupported character"); + PyErr_Format(Tkinter_TclError, + "character U+%x is above the range " + "(U+0000-U+FFFF) allowed by Tcl", + (int)inbuf[i]); ckfree(FREECAST outbuf); return NULL; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:07:09 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:07:09 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzEzMTUz?= =?utf-8?q?=3A_Tkinter_functions_now_raise_TclError_instead_of_ValueError_?= =?utf-8?q?when?= Message-ID: <3Z8j2j4rCbzPqV@mail.python.org> http://hg.python.org/cpython/rev/9904f245c3f0 changeset: 82252:9904f245c3f0 branch: 3.2 parent: 82248:cb3fbadb65aa user: Serhiy Storchaka date: Mon Feb 18 13:01:52 2013 +0200 summary: Issue #13153: Tkinter functions now raise TclError instead of ValueError when a string argument contains non-BMP character. files: Misc/NEWS | 3 +++ Modules/_tkinter.c | 2 +- 2 files changed, 4 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -224,6 +224,9 @@ Library ------- +- Issue #13153: Tkinter functions now raise TclError instead of ValueError when + a string argument contains non-BMP character. + - Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. diff --git a/Modules/_tkinter.c b/Modules/_tkinter.c --- a/Modules/_tkinter.c +++ b/Modules/_tkinter.c @@ -993,7 +993,7 @@ for (i = 0; i < size; i++) { if (inbuf[i] >= 0x10000) { /* Tcl doesn't do UTF-16, yet. 
*/ - PyErr_Format(PyExc_ValueError, + PyErr_Format(Tkinter_TclError, "character U+%x is above the range " "(U+0000-U+FFFF) allowed by Tcl", inbuf[i]); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:07:11 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:07:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2313153=3A_Tkinter_functions_now_raise_TclError_instead?= =?utf-8?q?_of_ValueError_when?= Message-ID: <3Z8j2l0VnpzPl5@mail.python.org> http://hg.python.org/cpython/rev/38bb2a46692e changeset: 82253:38bb2a46692e branch: 3.3 parent: 82249:aad7e68eff0a parent: 82252:9904f245c3f0 user: Serhiy Storchaka date: Mon Feb 18 13:02:41 2013 +0200 summary: Issue #13153: Tkinter functions now raise TclError instead of ValueError when a string argument contains non-BMP character. files: Misc/NEWS | 3 +++ Modules/_tkinter.c | 2 +- 2 files changed, 4 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -178,6 +178,9 @@ Library ------- +- Issue #13153: Tkinter functions now raise TclError instead of ValueError when + a string argument contains non-BMP character. + - Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. diff --git a/Modules/_tkinter.c b/Modules/_tkinter.c --- a/Modules/_tkinter.c +++ b/Modules/_tkinter.c @@ -990,7 +990,7 @@ #if TCL_UTF_MAX == 3 if (ch >= 0x10000) { /* Tcl doesn't do UTF-16, yet. 
*/ - PyErr_Format(PyExc_ValueError, + PyErr_Format(Tkinter_TclError, "character U+%x is above the range " "(U+0000-U+FFFF) allowed by Tcl", ch); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:07:12 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:07:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2313153=3A_Tkinter_functions_now_raise_TclError_i?= =?utf-8?q?nstead_of_ValueError_when?= Message-ID: <3Z8j2m3HvTzPt8@mail.python.org> http://hg.python.org/cpython/rev/61993bb9ab0e changeset: 82254:61993bb9ab0e parent: 82250:f99ff3b01fab parent: 82253:38bb2a46692e user: Serhiy Storchaka date: Mon Feb 18 13:03:07 2013 +0200 summary: Issue #13153: Tkinter functions now raise TclError instead of ValueError when a string argument contains non-BMP character. files: Misc/NEWS | 3 +++ Modules/_tkinter.c | 2 +- 2 files changed, 4 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -257,6 +257,9 @@ Library ------- +- Issue #13153: Tkinter functions now raise TclError instead of ValueError when + a string argument contains non-BMP character. + - Issue #9669: Protect re against infinite loops on zero-width matching in non-greedy repeat. Patch by Matthew Barnett. diff --git a/Modules/_tkinter.c b/Modules/_tkinter.c --- a/Modules/_tkinter.c +++ b/Modules/_tkinter.c @@ -870,7 +870,7 @@ #if TCL_UTF_MAX == 3 if (ch >= 0x10000) { /* Tcl doesn't do UTF-16, yet. 
*/ - PyErr_Format(PyExc_ValueError, + PyErr_Format(Tkinter_TclError, "character U+%x is above the range " "(U+0000-U+FFFF) allowed by Tcl", ch); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:36:01 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:36:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Disable_posixp?= =?utf-8?q?ath=2Erealpath=28=29_tests_on_Windows_=28fix_for_issue_=236975?= =?utf-8?b?KS4=?= Message-ID: <3Z8jh11q8czPjq@mail.python.org> http://hg.python.org/cpython/rev/3c5517c4fa5d changeset: 82255:3c5517c4fa5d branch: 2.7 parent: 82251:bb5a8564e186 user: Serhiy Storchaka date: Mon Feb 18 13:32:06 2013 +0200 summary: Disable posixpath.realpath() tests on Windows (fix for issue #6975). files: Lib/test/test_posixpath.py | 12 ++++++++++++ 1 files changed, 12 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -9,6 +9,16 @@ ABSTFN = abspath(test_support.TESTFN) +def skip_if_ABSTFN_contains_backslash(test): + """ + On Windows, posixpath.abspath still returns paths with backslashes + instead of posix forward slashes. If this is the case, several tests + fail, so skip them. 
+ """ + found_backslash = '\\' in ABSTFN + msg = "ABSTFN is not a posix path - tests fail" + return [test, unittest.skip(msg)(test)][found_backslash] + def safe_rmdir(dirname): try: os.rmdir(dirname) @@ -214,11 +224,13 @@ self.assertEqual(posixpath.normpath("///foo/.//bar//.//..//.//baz"), "/foo/baz") self.assertEqual(posixpath.normpath("///..//./foo/.//bar"), "/foo/bar") + @skip_if_ABSTFN_contains_backslash def test_realpath_curdir(self): self.assertEqual(realpath('.'), os.getcwd()) self.assertEqual(realpath('./.'), os.getcwd()) self.assertEqual(realpath('/'.join(['.'] * 100)), os.getcwd()) + @skip_if_ABSTFN_contains_backslash def test_realpath_pardir(self): self.assertEqual(realpath('..'), dirname(os.getcwd())) self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:36:02 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:36:02 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Disable_posixp?= =?utf-8?q?ath=2Erealpath=28=29_tests_on_Windows_=28fix_for_issue_=236975?= =?utf-8?b?KS4=?= Message-ID: <3Z8jh24RPKzPlj@mail.python.org> http://hg.python.org/cpython/rev/0bbf7cdea551 changeset: 82256:0bbf7cdea551 branch: 3.2 parent: 82252:9904f245c3f0 user: Serhiy Storchaka date: Mon Feb 18 13:32:30 2013 +0200 summary: Disable posixpath.realpath() tests on Windows (fix for issue #6975). 
files: Lib/test/test_posixpath.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -338,6 +338,7 @@ self.assertEqual(posixpath.normpath(b"///..//./foo/.//bar"), b"/foo/bar") + @skip_if_ABSTFN_contains_backslash def test_realpath_curdir(self): self.assertEqual(realpath('.'), os.getcwd()) self.assertEqual(realpath('./.'), os.getcwd()) @@ -347,6 +348,7 @@ self.assertEqual(realpath(b'./.'), os.getcwdb()) self.assertEqual(realpath(b'/'.join([b'.'] * 100)), os.getcwdb()) + @skip_if_ABSTFN_contains_backslash def test_realpath_pardir(self): self.assertEqual(realpath('..'), dirname(os.getcwd())) self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:36:03 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:36:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Disable_posixpath=2Erealpath=28=29_tests_on_Windows_=28fix_for?= =?utf-8?q?_issue_=236975=29=2E?= Message-ID: <3Z8jh370CszPhP@mail.python.org> http://hg.python.org/cpython/rev/79ea59b394bf changeset: 82257:79ea59b394bf branch: 3.3 parent: 82253:38bb2a46692e parent: 82256:0bbf7cdea551 user: Serhiy Storchaka date: Mon Feb 18 13:33:13 2013 +0200 summary: Disable posixpath.realpath() tests on Windows (fix for issue #6975). 
files: Lib/test/test_posixpath.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -340,6 +340,7 @@ self.assertEqual(posixpath.normpath(b"///..//./foo/.//bar"), b"/foo/bar") + @skip_if_ABSTFN_contains_backslash def test_realpath_curdir(self): self.assertEqual(realpath('.'), os.getcwd()) self.assertEqual(realpath('./.'), os.getcwd()) @@ -349,6 +350,7 @@ self.assertEqual(realpath(b'./.'), os.getcwdb()) self.assertEqual(realpath(b'/'.join([b'.'] * 100)), os.getcwdb()) + @skip_if_ABSTFN_contains_backslash def test_realpath_pardir(self): self.assertEqual(realpath('..'), dirname(os.getcwd())) self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 12:36:05 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 18 Feb 2013 12:36:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Disable_posixpath=2Erealpath=28=29_tests_on_Windows_=28f?= =?utf-8?q?ix_for_issue_=236975=29=2E?= Message-ID: <3Z8jh52Rb1zPw4@mail.python.org> http://hg.python.org/cpython/rev/aa77f7eb2bf1 changeset: 82258:aa77f7eb2bf1 parent: 82254:61993bb9ab0e parent: 82257:79ea59b394bf user: Serhiy Storchaka date: Mon Feb 18 13:33:37 2013 +0200 summary: Disable posixpath.realpath() tests on Windows (fix for issue #6975). 
files: Lib/test/test_posixpath.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py --- a/Lib/test/test_posixpath.py +++ b/Lib/test/test_posixpath.py @@ -283,6 +283,7 @@ self.assertEqual(posixpath.normpath(b"///..//./foo/.//bar"), b"/foo/bar") + @skip_if_ABSTFN_contains_backslash def test_realpath_curdir(self): self.assertEqual(realpath('.'), os.getcwd()) self.assertEqual(realpath('./.'), os.getcwd()) @@ -292,6 +293,7 @@ self.assertEqual(realpath(b'./.'), os.getcwdb()) self.assertEqual(realpath(b'/'.join([b'.'] * 100)), os.getcwdb()) + @skip_if_ABSTFN_contains_backslash def test_realpath_pardir(self): self.assertEqual(realpath('..'), dirname(os.getcwd())) self.assertEqual(realpath('../..'), dirname(dirname(os.getcwd()))) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 18 13:04:36 2013 From: python-checkins at python.org (nick.coghlan) Date: Mon, 18 Feb 2013 13:04:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_clarify_metadata_v?= =?utf-8?q?ersion_transitions?= Message-ID: <3Z8kK02K9YzPlC@mail.python.org> http://hg.python.org/peps/rev/630c5dd5a123 changeset: 4751:630c5dd5a123 user: Nick Coghlan date: Mon Feb 18 22:02:45 2013 +1000 summary: PEP 426: clarify metadata version transitions files: pep-0426.txt | 28 ++++++++++++++++++++++------ 1 files changed, 22 insertions(+), 6 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -89,9 +89,15 @@ Version of the file format; "2.0" is the only legal value. -Automated tools should warn if ``Metadata-Version`` is greater than the -highest version they support, and must fail if ``Metadata-Version`` has -a greater major version than the highest version they support. 
+Automated tools consuming metadata should warn if ``Metadata-Version`` is +greater than the highest version they support, and must fail if +``Metadata-Version`` has a greater major version than the highest +version they support. + +For broader compatibility, automated tools may choose to produce +distribution metadata using the lowest metadata version that includes +all of the needed fields. + Example:: @@ -1283,9 +1289,19 @@ metadata specifications is unlikely to give the expected behaviour. Whenever the major version number of the specification is incremented, it -is expected that deployment will take some time, as metadata consuming tools -much be updated before other tools can safely start producing the new -format. +is expected that deployment will take some time, as either metadata +consuming tools must be updated before other tools can safely start +producing the new format, or else the sdist and wheel formats, along with +the installation database definition, will need to be updated to support +provision of multiple versions of the metadata in parallel. + +Existing tools won't abide by this guideline until they're updated to +support the new metadata standard, so the new semantics will first take +effect for a hypothetical 2.x -> 3.0 transition. For the 1.x -> 2.0 +transition, it is recommended that tools continue to produce the +existing supplementary files (such as ``entry_points.txt``) in addition +to any equivalents specified using the new features of the standard +metadata format (including the formal extension mechanism). 
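The warn-on-minor / fail-on-major rule described above can be sketched as a small checker; the function name and return convention are illustrative, not part of the PEP.

```python
def check_metadata_version(found, supported="2.0"):
    # PEP 426 rule sketch: fail if Metadata-Version has a greater major
    # version than we support; warn (return True) if only the minor
    # version is greater; otherwise proceed silently (return False).
    f_major, f_minor = (int(part) for part in found.split("."))
    s_major, s_minor = (int(part) for part in supported.split("."))
    if f_major > s_major:
        raise ValueError("unsupported Metadata-Version: %s" % found)
    return (f_major, f_minor) > (s_major, s_minor)

assert check_metadata_version("2.0") is False  # fully supported
assert check_metadata_version("2.1") is True   # warn, but keep going
```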
Standard encoding and other format clarifications -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Mon Feb 18 13:06:28 2013 From: python-checkins at python.org (nick.coghlan) Date: Mon, 18 Feb 2013 13:06:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Remove_extraneous_line?= Message-ID: <3Z8kM825jlzPxW@mail.python.org> http://hg.python.org/peps/rev/5f85b0f0796c changeset: 4752:5f85b0f0796c user: Nick Coghlan date: Mon Feb 18 22:06:19 2013 +1000 summary: Remove extraneous line files: pep-0426.txt | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -98,7 +98,6 @@ distribution metadata using the lowest metadata version that includes all of the needed fields. - Example:: Metadata-Version: 2.0 -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Mon Feb 18 13:41:57 2013 From: python-checkins at python.org (nick.coghlan) Date: Mon, 18 Feb 2013 13:41:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_427=3A_fix_header_=28spot?= =?utf-8?q?ted_by_W=2E_Trevor_King=29?= Message-ID: <3Z8l852rjSzP73@mail.python.org> http://hg.python.org/peps/rev/128e2579314d changeset: 4753:128e2579314d user: Nick Coghlan date: Mon Feb 18 22:41:37 2013 +1000 summary: PEP 427: fix header (spotted by W. 
Trevor King) files: pep-0427.txt | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -54,7 +54,7 @@ Details ======= -Installing a wheel 'distribution-1.0.py32.none.any.whl' +Installing a wheel 'distribution-1.0-py32-none-any.whl' ------------------------------------------------------- Wheel installation notionally consists of two phases: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Tue Feb 19 03:44:38 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 19 Feb 2013 03:44:38 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzc5NjM6IGZpeCBl?= =?utf-8?q?rror_message_when_=27object=27_called_with_arguments=2E?= Message-ID: <3Z95rQ72KczNNN@mail.python.org> http://hg.python.org/cpython/rev/b5adf2a30b73 changeset: 82259:b5adf2a30b73 branch: 3.2 parent: 82256:0bbf7cdea551 user: R David Murray date: Mon Feb 18 21:20:08 2013 -0500 summary: #7963: fix error message when 'object' called with arguments. Patch by Alexander Belopolsky. files: Misc/NEWS | 3 +++ Objects/typeobject.c | 4 ++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #7963: Fixed misleading error message that issued when object is + called without arguments. + - Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. 
diff --git a/Objects/typeobject.c b/Objects/typeobject.c --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -2842,14 +2842,14 @@ type->tp_init != object_init) { err = PyErr_WarnEx(PyExc_DeprecationWarning, - "object.__new__() takes no parameters", + "object() takes no parameters", 1); } else if (type->tp_new != object_new || type->tp_init == object_init) { PyErr_SetString(PyExc_TypeError, - "object.__new__() takes no parameters"); + "object() takes no parameters"); err = -1; } } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 03:44:40 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 19 Feb 2013 03:44:40 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=237963=3A_fix_error_message_when_=27object=27_called_with_arg?= =?utf-8?q?uments=2E?= Message-ID: <3Z95rS2pfdzQFw@mail.python.org> http://hg.python.org/cpython/rev/0e438442fddf changeset: 82260:0e438442fddf branch: 3.3 parent: 82257:79ea59b394bf parent: 82259:b5adf2a30b73 user: R David Murray date: Mon Feb 18 21:39:18 2013 -0500 summary: #7963: fix error message when 'object' called with arguments. files: Misc/NEWS | 3 +++ Objects/typeobject.c | 2 +- 2 files changed, 4 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #7963: Fixed misleading error message that issued when object is + called without arguments. + - Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. 
diff --git a/Objects/typeobject.c b/Objects/typeobject.c --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -3059,7 +3059,7 @@ { if (excess_args(args, kwds) && (type->tp_init == object_init || type->tp_new != object_new)) { - PyErr_SetString(PyExc_TypeError, "object.__new__() takes no parameters"); + PyErr_SetString(PyExc_TypeError, "object() takes no parameters"); return NULL; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 03:44:41 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 19 Feb 2013 03:44:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=237963=3A_fix_error_message_when_=27object=27_?= =?utf-8?q?called_with_arguments=2E?= Message-ID: <3Z95rT5dwQzQJJ@mail.python.org> http://hg.python.org/cpython/rev/1f3ce7ba410b changeset: 82261:1f3ce7ba410b parent: 82258:aa77f7eb2bf1 parent: 82260:0e438442fddf user: R David Murray date: Mon Feb 18 21:44:03 2013 -0500 summary: Merge: #7963: fix error message when 'object' called with arguments. files: Misc/NEWS | 3 +++ Objects/typeobject.c | 2 +- 2 files changed, 4 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #7963: Fixed misleading error message that issued when object is + called without arguments. + - Issue #8745: Small speed up zipimport on Windows. Patch by Catalin Iacob. 
- Issue #5308: Raise ValueError when marshalling too large object (a sequence diff --git a/Objects/typeobject.c b/Objects/typeobject.c --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -3059,7 +3059,7 @@ { if (excess_args(args, kwds) && (type->tp_init == object_init || type->tp_new != object_new)) { - PyErr_SetString(PyExc_TypeError, "object.__new__() takes no parameters"); + PyErr_SetString(PyExc_TypeError, "object() takes no parameters"); return NULL; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 04:05:13 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 19 Feb 2013 04:05:13 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzc5NjM6IGZpeCBl?= =?utf-8?q?rror_message_when_=27object=27_called_with_arguments=2E?= Message-ID: <3Z96J90P8kzQcY@mail.python.org> http://hg.python.org/cpython/rev/0082b7bf9501 changeset: 82262:0082b7bf9501 branch: 2.7 parent: 82255:3c5517c4fa5d user: R David Murray date: Mon Feb 18 22:04:59 2013 -0500 summary: #7963: fix error message when 'object' called with arguments. Patch by Alexander Belopolsky. files: Misc/NEWS | 3 +++ Objects/typeobject.c | 4 ++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -9,6 +9,9 @@ Core and Builtins ----------------- +- Issue #7963: Fixed misleading error message that issued when object is + called without arguments. + - Issue #5308: Raise ValueError when marshalling too large object (a sequence with size >= 2**31), instead of producing illegal marshal data. 
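The #7963 patches only reword the exception; the behaviour they report is easy to observe. The exact wording has shifted again in later releases, so this sketch checks only the exception type, not the message text.

```python
# Calling object() with arguments has always raised TypeError; the
# patch merely points the message at object() rather than
# object.__new__(). Later releases reworded it again, so we only
# assert the type here.
try:
    object(1)
except TypeError as exc:
    message = str(exc)
else:
    raise AssertionError("object(1) unexpectedly succeeded")
```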
diff --git a/Objects/typeobject.c b/Objects/typeobject.c --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -2897,14 +2897,14 @@ type->tp_init != object_init) { err = PyErr_WarnEx(PyExc_DeprecationWarning, - "object.__new__() takes no parameters", + "object() takes no parameters", 1); } else if (type->tp_new != object_new || type->tp_init == object_init) { PyErr_SetString(PyExc_TypeError, - "object.__new__() takes no parameters"); + "object() takes no parameters"); err = -1; } } -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Tue Feb 19 06:02:59 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Tue, 19 Feb 2013 06:02:59 +0100 Subject: [Python-checkins] Daily reference leaks (aa77f7eb2bf1): sum=0 Message-ID: results for aa77f7eb2bf1 on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogLT1vtC', '-x'] From python-checkins at python.org Tue Feb 19 10:41:09 2013 From: python-checkins at python.org (nick.coghlan) Date: Tue, 19 Feb 2013 10:41:09 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_remove_misleading_?= =?utf-8?q?example?= Message-ID: <3Z9H510jf2zRNV@mail.python.org> http://hg.python.org/peps/rev/85f7ecae60eb changeset: 4754:85f7ecae60eb user: Nick Coghlan date: Tue Feb 19 19:41:00 2013 +1000 summary: PEP 426: remove misleading example files: pep-0426.txt | 1 - 1 files changed, 0 insertions(+), 1 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -335,7 +335,6 @@ Examples:: - Provides-Dist: ThisProject Provides-Dist: AnotherProject (3.4) Provides-Dist: virtual_package -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Tue Feb 19 14:03:46 2013 From: python-checkins at python.org (stefan.krah) Date: Tue, 19 Feb 2013 14:03:46 +0100 (CET) Subject: [Python-checkins] 
=?utf-8?q?cpython_=283=2E3=29=3A_Fix_error_mess?= =?utf-8?q?ages=2E?= Message-ID: <3Z9MZp5WlhzSkb@mail.python.org> http://hg.python.org/cpython/rev/8c4aa4cb7930 changeset: 82263:8c4aa4cb7930 branch: 3.3 parent: 82260:0e438442fddf user: Stefan Krah date: Tue Feb 19 13:44:49 2013 +0100 summary: Fix error messages. files: Objects/memoryobject.c | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Objects/memoryobject.c b/Objects/memoryobject.c --- a/Objects/memoryobject.c +++ b/Objects/memoryobject.c @@ -307,7 +307,8 @@ if (!equiv_format(dest, src) || !equiv_shape(dest, src)) { PyErr_SetString(PyExc_ValueError, - "ndarray assignment: lvalue and rvalue have different structures"); + "memoryview assignment: lvalue and rvalue have different " + "structures"); return 0; } @@ -1433,7 +1434,7 @@ /* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do not make sense. */ PyErr_Format(PyExc_BufferError, - "ndarray: cannot cast to unsigned bytes if the format flag " + "memoryview: cannot cast to unsigned bytes if the format flag " "is present"); return -1; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 14:03:48 2013 From: python-checkins at python.org (stefan.krah) Date: Tue, 19 Feb 2013 14:03:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogTWVyZ2UgMy4zLg==?= Message-ID: <3Z9MZr14R8zPwk@mail.python.org> http://hg.python.org/cpython/rev/83d70dd58fef changeset: 82264:83d70dd58fef parent: 82261:1f3ce7ba410b parent: 82263:8c4aa4cb7930 user: Stefan Krah date: Tue Feb 19 14:02:59 2013 +0100 summary: Merge 3.3. 
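The first reworded message above fires on any shape mismatch in memoryview slice assignment, which is simple to trigger; since the wording differs across versions, the sketch checks only the exception type.

```python
buf = bytearray(b'abcd')
view = memoryview(buf)
try:
    # 2-element lvalue, 3-element rvalue: the structures differ, so
    # the assignment is rejected (with the message this patch renames
    # from "ndarray assignment: ..." to "memoryview assignment: ...").
    view[:2] = b'xyz'
    raised = False
except ValueError:
    raised = True

# The structure check happens before any bytes are copied, so the
# underlying buffer is untouched.
assert raised
assert buf == bytearray(b'abcd')
```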
files: Objects/memoryobject.c | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Objects/memoryobject.c b/Objects/memoryobject.c --- a/Objects/memoryobject.c +++ b/Objects/memoryobject.c @@ -307,7 +307,8 @@ if (!equiv_format(dest, src) || !equiv_shape(dest, src)) { PyErr_SetString(PyExc_ValueError, - "ndarray assignment: lvalue and rvalue have different structures"); + "memoryview assignment: lvalue and rvalue have different " + "structures"); return 0; } @@ -1433,7 +1434,7 @@ /* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do not make sense. */ PyErr_Format(PyExc_BufferError, - "ndarray: cannot cast to unsigned bytes if the format flag " + "memoryview: cannot cast to unsigned bytes if the format flag " "is present"); return -1; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 17:50:48 2013 From: python-checkins at python.org (barry.warsaw) Date: Tue, 19 Feb 2013 17:50:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_434=2C_IDLE_Enhancement_E?= =?utf-8?q?xception_for_All_Branches?= Message-ID: <3Z9Scm6yD1zNY5@mail.python.org> http://hg.python.org/peps/rev/ec6c1538f9c5 changeset: 4755:ec6c1538f9c5 user: Barry Warsaw date: Tue Feb 19 11:50:39 2013 -0500 summary: PEP 434, IDLE Enhancement Exception for All Branches files: pep-0434.txt | 85 ++++++++++++++++++++++++++++++++++++++++ 1 files changed, 85 insertions(+), 0 deletions(-) diff --git a/pep-0434.txt b/pep-0434.txt new file mode 100644 --- /dev/null +++ b/pep-0434.txt @@ -0,0 +1,85 @@ +PEP: 434 +Title: IDLE Enhancement Exception for All Branches +Version: $Revision$ +Last-Modified: $Date$ +Author: Todd Rovito +BDFL-Delegate: Nick Coghlan +Status: Draft +Type: Informational +Content-Type: text/x-rst +Created: 16-Feb-2013 +Python-Version: 2.7 +Post-History: 16-Feb-2013 + + +Abstract +======== + +Generally only new features are applied to Python 3.4 but this PEP requests an +exception for IDLE [1]_. 
IDLE is part of the standard library and has numerous +outstanding issues [2]_. Since IDLE is often the first thing a new Python user +sees, it desperately needs to be brought up to date with modern GUI standards +across the three major platforms Linux, Mac OS X, and Windows. + + +Rationale +========= + +Python does have many advanced features, yet Python is well known for being an +easy computer language for beginners [3]_. A major Python philosophy is +"batteries included", which is best demonstrated in Python's standard library +with many modules that are not typically included with other programming +languages [4]_. IDLE is an important "battery" in the Python toolbox because it +allows a beginner to get started quickly without downloading and configuring a +third-party IDE. IDLE is primarily used as an application that ships with +Python, rather than as a library module used to build Python applications, +hence a different standard should apply to IDLE enhancements. Additional +patches to IDLE cannot break any existing program/library because IDLE is used +by humans. + + +Details +======= + +Python 2.7 does not accept bug fixes; this rule can be ignored for IDLE if the +Python development team accepts this PEP [5]_. IDLE issues will be carefully +tested on the three major platforms Linux, Mac OS X, and Windows before any +commits are made. Since IDLE is segregated to a particular part of the source +tree, this enhancement exception only applies to the Lib/idlelib directory in +Python branches >= 2.7. + + +References +========== + +.. [1] IDLE: Right Click Context Menu, Foord, Michael + (http://bugs.python.org/issue1207589) + +.. [2] Meta-issue for "Invent with Python" IDLE feedback + (http://bugs.python.org/issue13504) + +.. [3] Getting Started with Python + (http://www.python.org/about/gettingstarted/) + +.. [4] Batteries Included + (http://docs.python.org/2/tutorial/stdlib.html#batteries-included) + +..
[5] Python 2.7 Release Schedule + (http://www.python.org/dev/peps/pep-0373/) + + +Copyright +========= + +This document has been placed in the public domain. + + + +.. + Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Tue Feb 19 18:21:09 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 19 Feb 2013 18:21:09 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzEzNzAwOiBNYWtl?= =?utf-8?q?_imap=2Eauthenticate_with_authobject_work=2E?= Message-ID: <3Z9THn2mqxzSt1@mail.python.org> http://hg.python.org/cpython/rev/3d4302718e7c changeset: 82265:3d4302718e7c branch: 3.2 parent: 82259:b5adf2a30b73 user: R David Murray date: Tue Feb 19 12:17:31 2013 -0500 summary: #13700: Make imap.authenticate with authobject work. This fixes a bytes/string confusion in the API which prevented custom authobjects from working at all. Original patch by Erno Tukia. files: Doc/library/imaplib.rst | 7 +- Lib/imaplib.py | 20 ++- Lib/test/test_imaplib.py | 127 +++++++++++++++++++++++++- Misc/NEWS | 3 + 4 files changed, 137 insertions(+), 20 deletions(-) diff --git a/Doc/library/imaplib.rst b/Doc/library/imaplib.rst --- a/Doc/library/imaplib.rst +++ b/Doc/library/imaplib.rst @@ -176,9 +176,10 @@ data = authobject(response) - It will be called to process server continuation responses. It should return - ``data`` that will be encoded and sent to server. It should return ``None`` if - the client abort response ``*`` should be sent instead. + It will be called to process server continuation responses; the *response* + argument it is passed will be ``bytes``. It should return ``bytes`` *data* + that will be base64 encoded and sent to the server. It should return + ``None`` if the client abort response ``*`` should be sent instead. .. 
method:: IMAP4.check() diff --git a/Lib/imaplib.py b/Lib/imaplib.py --- a/Lib/imaplib.py +++ b/Lib/imaplib.py @@ -360,10 +360,10 @@ data = authobject(response) - It will be called to process server continuation responses. - It should return data that will be encoded and sent to server. - It should return None if the client abort response '*' should - be sent instead. + It will be called to process server continuation responses; the + response argument it is passed will be a bytes. It should return bytes + data that will be base64 encoded and sent to the server. It should + return None if the client abort response '*' should be sent instead. """ mech = mechanism.upper() # XXX: shouldn't this code be removed, not commented out? @@ -546,7 +546,9 @@ def _CRAM_MD5_AUTH(self, challenge): """ Authobject to use with CRAM-MD5 authentication. """ import hmac - return self.user + " " + hmac.HMAC(self.password, challenge).hexdigest() + pwd = (self.password.encode('ASCII') if isinstance(self.password, str) + else self.password) + return self.user + " " + hmac.HMAC(pwd, challenge).hexdigest() def logout(self): @@ -1288,14 +1290,16 @@ # so when it gets to the end of the 8-bit input # there's no partial 6-bit output. 
# - oup = '' + oup = b'' + if isinstance(inp, str): + inp = inp.encode('ASCII') while inp: if len(inp) > 48: t = inp[:48] inp = inp[48:] else: t = inp - inp = '' + inp = b'' e = binascii.b2a_base64(t) if e: oup = oup + e[:-1] @@ -1303,7 +1307,7 @@ def decode(self, inp): if not inp: - return '' + return b'' return binascii.a2b_base64(inp) diff --git a/Lib/test/test_imaplib.py b/Lib/test/test_imaplib.py --- a/Lib/test/test_imaplib.py +++ b/Lib/test/test_imaplib.py @@ -78,14 +78,25 @@ class SimpleIMAPHandler(socketserver.StreamRequestHandler): timeout = 1 + continuation = None + capabilities = '' def _send(self, message): if verbose: print("SENT: %r" % message.strip()) self.wfile.write(message) + def _send_line(self, message): + self._send(message + b'\r\n') + + def _send_textline(self, message): + self._send_line(message.encode('ASCII')) + + def _send_tagged(self, tag, code, message): + self._send_textline(' '.join((tag, code, message))) + def handle(self): # Send a welcome message. - self._send(b'* OK IMAP4rev1\r\n') + self._send_textline('* OK IMAP4rev1') while 1: # Gather up input until we receive a line terminator or we timeout. 
# Accumulate read(1) because it's simpler to handle the differences @@ -105,19 +116,33 @@ break if verbose: print('GOT: %r' % line.strip()) - splitline = line.split() - tag = splitline[0].decode('ASCII') - cmd = splitline[1].decode('ASCII') + if self.continuation: + try: + self.continuation.send(line) + except StopIteration: + self.continuation = None + continue + splitline = line.decode('ASCII').split() + tag = splitline[0] + cmd = splitline[1] args = splitline[2:] if hasattr(self, 'cmd_'+cmd): - getattr(self, 'cmd_'+cmd)(tag, args) + continuation = getattr(self, 'cmd_'+cmd)(tag, args) + if continuation: + self.continuation = continuation + next(continuation) else: - self._send('{} BAD {} unknown\r\n'.format(tag, cmd).encode('ASCII')) + self._send_tagged(tag, 'BAD', cmd + ' unknown') def cmd_CAPABILITY(self, tag, args): - self._send(b'* CAPABILITY IMAP4rev1\r\n') - self._send('{} OK CAPABILITY completed\r\n'.format(tag).encode('ASCII')) + caps = 'IMAP4rev1 ' + self.capabilities if self.capabilities else 'IMAP4rev1' + self._send_textline('* CAPABILITY ' + caps) + self._send_tagged(tag, 'OK', 'CAPABILITY completed') + + def cmd_LOGOUT(self, tag, args): + self._send_textline('* BYE IMAP4ref1 Server logging out') + self._send_tagged(tag, 'OK', 'LOGOUT completed') class BaseThreadedNetworkedTests(unittest.TestCase): @@ -167,6 +192,16 @@ finally: self.reap_server(server, thread) + @contextmanager + def reaped_pair(self, hdlr): + server, thread = self.make_server((support.HOST, 0), hdlr) + client = self.imap_class(*server.server_address) + try: + yield server, client + finally: + client.logout() + self.reap_server(server, thread) + @reap_threads def test_connect(self): with self.reaped_server(SimpleIMAPHandler) as server: @@ -192,12 +227,86 @@ def cmd_CAPABILITY(self, tag, args): self._send(b'* CAPABILITY IMAP4rev1 AUTH\n') - self._send('{} OK CAPABILITY completed\r\n'.format(tag).encode('ASCII')) + self._send_tagged(tag, 'OK', 'CAPABILITY completed') with 
self.reaped_server(BadNewlineHandler) as server: self.assertRaises(imaplib.IMAP4.abort, self.imap_class, *server.server_address) + @reap_threads + def test_bad_auth_name(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_tagged(tag, 'NO', 'unrecognized authentication ' + 'type {}'.format(args[0])) + + with self.reaped_pair(MyServer) as (server, client): + with self.assertRaises(imaplib.IMAP4.error): + client.authenticate('METHOD', lambda: 1) + + @reap_threads + def test_invalid_authentication(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+') + self.response = yield + self._send_tagged(tag, 'NO', '[AUTHENTICATIONFAILED] invalid') + + with self.reaped_pair(MyServer) as (server, client): + with self.assertRaises(imaplib.IMAP4.error): + code, data = client.authenticate('MYAUTH', lambda x: b'fake') + + @reap_threads + def test_valid_authentication(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+') + self.server.response = yield + self._send_tagged(tag, 'OK', 'FAKEAUTH successful') + + with self.reaped_pair(MyServer) as (server, client): + code, data = client.authenticate('MYAUTH', lambda x: b'fake') + self.assertEqual(code, 'OK') + self.assertEqual(server.response, + b'ZmFrZQ==\r\n') #b64 encoded 'fake' + + with self.reaped_pair(MyServer) as (server, client): + code, data = client.authenticate('MYAUTH', lambda x: 'fake') + self.assertEqual(code, 'OK') + self.assertEqual(server.response, + b'ZmFrZQ==\r\n') #b64 encoded 'fake' + + @reap_threads + def test_login_cram_md5(self): + + class AuthHandler(SimpleIMAPHandler): + + capabilities = 'LOGINDISABLED AUTH=CRAM-MD5' + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+ PDE4OTYuNjk3MTcwOTUyQHBvc3RvZmZpY2Uucm' + 'VzdG9uLm1jaS5uZXQ=') + r = yield + if r == b'dGltIGYxY2E2YmU0NjRiOWVmYTFjY2E2ZmZkNmNmMmQ5ZjMy\r\n': + 
self._send_tagged(tag, 'OK', 'CRAM-MD5 successful') + else: + self._send_tagged(tag, 'NO', 'No access') + + with self.reaped_pair(AuthHandler) as (server, client): + self.assertTrue('AUTH=CRAM-MD5' in client.capabilities) + ret, data = client.login_cram_md5("tim", "tanstaaftanstaaf") + self.assertEqual(ret, "OK") + + with self.reaped_pair(AuthHandler) as (server, client): + self.assertTrue('AUTH=CRAM-MD5' in client.capabilities) + ret, data = client.login_cram_md5("tim", b"tanstaaftanstaaf") + self.assertEqual(ret, "OK") class ThreadedNetworkedTests(BaseThreadedNetworkedTests): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -227,6 +227,9 @@ Library ------- +- Issue #13700: Fix byte/string handling in imaplib authentication when an + authobject is specified. + - Issue #13153: Tkinter functions now raise TclError instead of ValueError when a string argument contains non-BMP character. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 18:21:11 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 19 Feb 2013 18:21:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge=3A_=2313700=3A_Make_imap=2Eauthenticate_with_authobject_?= =?utf-8?q?work=2E?= Message-ID: <3Z9THq12W7zSrQ@mail.python.org> http://hg.python.org/cpython/rev/b21f955b8ba2 changeset: 82266:b21f955b8ba2 branch: 3.3 parent: 82263:8c4aa4cb7930 parent: 82265:3d4302718e7c user: R David Murray date: Tue Feb 19 12:19:13 2013 -0500 summary: Merge: #13700: Make imap.authenticate with authobject work. This fixes a bytes/string confusion in the API which prevented custom authobjects from working at all. Original patch by Erno Tukia. 
files: Doc/library/imaplib.rst | 7 +- Lib/imaplib.py | 20 ++- Lib/test/test_imaplib.py | 127 +++++++++++++++++++++++++- Misc/NEWS | 3 + 4 files changed, 137 insertions(+), 20 deletions(-) diff --git a/Doc/library/imaplib.rst b/Doc/library/imaplib.rst --- a/Doc/library/imaplib.rst +++ b/Doc/library/imaplib.rst @@ -185,9 +185,10 @@ data = authobject(response) - It will be called to process server continuation responses. It should return - ``data`` that will be encoded and sent to server. It should return ``None`` if - the client abort response ``*`` should be sent instead. + It will be called to process server continuation responses; the *response* + argument it is passed will be ``bytes``. It should return ``bytes`` *data* + that will be base64 encoded and sent to the server. It should return + ``None`` if the client abort response ``*`` should be sent instead. .. method:: IMAP4.check() diff --git a/Lib/imaplib.py b/Lib/imaplib.py --- a/Lib/imaplib.py +++ b/Lib/imaplib.py @@ -352,10 +352,10 @@ data = authobject(response) - It will be called to process server continuation responses. - It should return data that will be encoded and sent to server. - It should return None if the client abort response '*' should - be sent instead. + It will be called to process server continuation responses; the + response argument it is passed will be a bytes. It should return bytes + data that will be base64 encoded and sent to the server. It should + return None if the client abort response '*' should be sent instead. """ mech = mechanism.upper() # XXX: shouldn't this code be removed, not commented out? @@ -538,7 +538,9 @@ def _CRAM_MD5_AUTH(self, challenge): """ Authobject to use with CRAM-MD5 authentication. 
""" import hmac - return self.user + " " + hmac.HMAC(self.password, challenge).hexdigest() + pwd = (self.password.encode('ASCII') if isinstance(self.password, str) + else self.password) + return self.user + " " + hmac.HMAC(pwd, challenge).hexdigest() def logout(self): @@ -1295,14 +1297,16 @@ # so when it gets to the end of the 8-bit input # there's no partial 6-bit output. # - oup = '' + oup = b'' + if isinstance(inp, str): + inp = inp.encode('ASCII') while inp: if len(inp) > 48: t = inp[:48] inp = inp[48:] else: t = inp - inp = '' + inp = b'' e = binascii.b2a_base64(t) if e: oup = oup + e[:-1] @@ -1310,7 +1314,7 @@ def decode(self, inp): if not inp: - return '' + return b'' return binascii.a2b_base64(inp) Months = ' Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec'.split(' ') diff --git a/Lib/test/test_imaplib.py b/Lib/test/test_imaplib.py --- a/Lib/test/test_imaplib.py +++ b/Lib/test/test_imaplib.py @@ -94,14 +94,25 @@ class SimpleIMAPHandler(socketserver.StreamRequestHandler): timeout = 1 + continuation = None + capabilities = '' def _send(self, message): if verbose: print("SENT: %r" % message.strip()) self.wfile.write(message) + def _send_line(self, message): + self._send(message + b'\r\n') + + def _send_textline(self, message): + self._send_line(message.encode('ASCII')) + + def _send_tagged(self, tag, code, message): + self._send_textline(' '.join((tag, code, message))) + def handle(self): # Send a welcome message. - self._send(b'* OK IMAP4rev1\r\n') + self._send_textline('* OK IMAP4rev1') while 1: # Gather up input until we receive a line terminator or we timeout. 
# Accumulate read(1) because it's simpler to handle the differences @@ -121,19 +132,33 @@ break if verbose: print('GOT: %r' % line.strip()) - splitline = line.split() - tag = splitline[0].decode('ASCII') - cmd = splitline[1].decode('ASCII') + if self.continuation: + try: + self.continuation.send(line) + except StopIteration: + self.continuation = None + continue + splitline = line.decode('ASCII').split() + tag = splitline[0] + cmd = splitline[1] args = splitline[2:] if hasattr(self, 'cmd_'+cmd): - getattr(self, 'cmd_'+cmd)(tag, args) + continuation = getattr(self, 'cmd_'+cmd)(tag, args) + if continuation: + self.continuation = continuation + next(continuation) else: - self._send('{} BAD {} unknown\r\n'.format(tag, cmd).encode('ASCII')) + self._send_tagged(tag, 'BAD', cmd + ' unknown') def cmd_CAPABILITY(self, tag, args): - self._send(b'* CAPABILITY IMAP4rev1\r\n') - self._send('{} OK CAPABILITY completed\r\n'.format(tag).encode('ASCII')) + caps = 'IMAP4rev1 ' + self.capabilities if self.capabilities else 'IMAP4rev1' + self._send_textline('* CAPABILITY ' + caps) + self._send_tagged(tag, 'OK', 'CAPABILITY completed') + + def cmd_LOGOUT(self, tag, args): + self._send_textline('* BYE IMAP4ref1 Server logging out') + self._send_tagged(tag, 'OK', 'LOGOUT completed') class BaseThreadedNetworkedTests(unittest.TestCase): @@ -183,6 +208,16 @@ finally: self.reap_server(server, thread) + @contextmanager + def reaped_pair(self, hdlr): + server, thread = self.make_server((support.HOST, 0), hdlr) + client = self.imap_class(*server.server_address) + try: + yield server, client + finally: + client.logout() + self.reap_server(server, thread) + @reap_threads def test_connect(self): with self.reaped_server(SimpleIMAPHandler) as server: @@ -208,12 +243,86 @@ def cmd_CAPABILITY(self, tag, args): self._send(b'* CAPABILITY IMAP4rev1 AUTH\n') - self._send('{} OK CAPABILITY completed\r\n'.format(tag).encode('ASCII')) + self._send_tagged(tag, 'OK', 'CAPABILITY completed') with 
self.reaped_server(BadNewlineHandler) as server: self.assertRaises(imaplib.IMAP4.abort, self.imap_class, *server.server_address) + @reap_threads + def test_bad_auth_name(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_tagged(tag, 'NO', 'unrecognized authentication ' + 'type {}'.format(args[0])) + + with self.reaped_pair(MyServer) as (server, client): + with self.assertRaises(imaplib.IMAP4.error): + client.authenticate('METHOD', lambda: 1) + + @reap_threads + def test_invalid_authentication(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+') + self.response = yield + self._send_tagged(tag, 'NO', '[AUTHENTICATIONFAILED] invalid') + + with self.reaped_pair(MyServer) as (server, client): + with self.assertRaises(imaplib.IMAP4.error): + code, data = client.authenticate('MYAUTH', lambda x: b'fake') + + @reap_threads + def test_valid_authentication(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+') + self.server.response = yield + self._send_tagged(tag, 'OK', 'FAKEAUTH successful') + + with self.reaped_pair(MyServer) as (server, client): + code, data = client.authenticate('MYAUTH', lambda x: b'fake') + self.assertEqual(code, 'OK') + self.assertEqual(server.response, + b'ZmFrZQ==\r\n') #b64 encoded 'fake' + + with self.reaped_pair(MyServer) as (server, client): + code, data = client.authenticate('MYAUTH', lambda x: 'fake') + self.assertEqual(code, 'OK') + self.assertEqual(server.response, + b'ZmFrZQ==\r\n') #b64 encoded 'fake' + + @reap_threads + def test_login_cram_md5(self): + + class AuthHandler(SimpleIMAPHandler): + + capabilities = 'LOGINDISABLED AUTH=CRAM-MD5' + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+ PDE4OTYuNjk3MTcwOTUyQHBvc3RvZmZpY2Uucm' + 'VzdG9uLm1jaS5uZXQ=') + r = yield + if r == b'dGltIGYxY2E2YmU0NjRiOWVmYTFjY2E2ZmZkNmNmMmQ5ZjMy\r\n': + 
self._send_tagged(tag, 'OK', 'CRAM-MD5 successful') + else: + self._send_tagged(tag, 'NO', 'No access') + + with self.reaped_pair(AuthHandler) as (server, client): + self.assertTrue('AUTH=CRAM-MD5' in client.capabilities) + ret, data = client.login_cram_md5("tim", "tanstaaftanstaaf") + self.assertEqual(ret, "OK") + + with self.reaped_pair(AuthHandler) as (server, client): + self.assertTrue('AUTH=CRAM-MD5' in client.capabilities) + ret, data = client.login_cram_md5("tim", b"tanstaaftanstaaf") + self.assertEqual(ret, "OK") class ThreadedNetworkedTests(BaseThreadedNetworkedTests): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -181,6 +181,9 @@ Library ------- +- Issue #13700: Fix byte/string handling in imaplib authentication when an + authobject is specified. + - Issue #13153: Tkinter functions now raise TclError instead of ValueError when a string argument contains non-BMP character. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 18:21:12 2013 From: python-checkins at python.org (r.david.murray) Date: Tue, 19 Feb 2013 18:21:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge=3A_=2313700=3A_Make_imap=2Eauthenticate_with_autho?= =?utf-8?q?bject_work=2E?= Message-ID: <3Z9THr66XWzSsc@mail.python.org> http://hg.python.org/cpython/rev/d404d33a999c changeset: 82267:d404d33a999c parent: 82264:83d70dd58fef parent: 82266:b21f955b8ba2 user: R David Murray date: Tue Feb 19 12:20:32 2013 -0500 summary: Merge: #13700: Make imap.authenticate with authobject work. This fixes a bytes/string confusion in the API which prevented custom authobjects from working at all. Original patch by Erno Tukia. 
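The authobject protocol this fix documents can be sketched as follows. The mechanism and the `b'fake'` credential are illustrative, matching the fake values used in the tests above; no real IMAP server is involved:

```python
import base64

def authobject(response):
    # Per the revised docs: 'response' is the server continuation response,
    # already base64-decoded to bytes; the callable must return bytes (or
    # None to abort), which imaplib then base64-encodes before sending.
    assert isinstance(response, bytes)
    return b'fake'

challenge = base64.b64decode(b'')        # empty initial '+' continuation
data = authobject(challenge)
wire = base64.b64encode(data) + b'\r\n'  # what the client puts on the wire
assert wire == b'ZmFrZQ==\r\n'           # the value checked in the tests above
```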
files: Doc/library/imaplib.rst | 7 +- Lib/imaplib.py | 20 ++- Lib/test/test_imaplib.py | 127 +++++++++++++++++++++++++- Misc/NEWS | 3 + 4 files changed, 137 insertions(+), 20 deletions(-) diff --git a/Doc/library/imaplib.rst b/Doc/library/imaplib.rst --- a/Doc/library/imaplib.rst +++ b/Doc/library/imaplib.rst @@ -185,9 +185,10 @@ data = authobject(response) - It will be called to process server continuation responses. It should return - ``data`` that will be encoded and sent to server. It should return ``None`` if - the client abort response ``*`` should be sent instead. + It will be called to process server continuation responses; the *response* + argument it is passed will be ``bytes``. It should return ``bytes`` *data* + that will be base64 encoded and sent to the server. It should return + ``None`` if the client abort response ``*`` should be sent instead. .. method:: IMAP4.check() diff --git a/Lib/imaplib.py b/Lib/imaplib.py --- a/Lib/imaplib.py +++ b/Lib/imaplib.py @@ -352,10 +352,10 @@ data = authobject(response) - It will be called to process server continuation responses. - It should return data that will be encoded and sent to server. - It should return None if the client abort response '*' should - be sent instead. + It will be called to process server continuation responses; the + response argument it is passed will be a bytes. It should return bytes + data that will be base64 encoded and sent to the server. It should + return None if the client abort response '*' should be sent instead. """ mech = mechanism.upper() # XXX: shouldn't this code be removed, not commented out? @@ -538,7 +538,9 @@ def _CRAM_MD5_AUTH(self, challenge): """ Authobject to use with CRAM-MD5 authentication. 
""" import hmac - return self.user + " " + hmac.HMAC(self.password, challenge).hexdigest() + pwd = (self.password.encode('ASCII') if isinstance(self.password, str) + else self.password) + return self.user + " " + hmac.HMAC(pwd, challenge).hexdigest() def logout(self): @@ -1295,14 +1297,16 @@ # so when it gets to the end of the 8-bit input # there's no partial 6-bit output. # - oup = '' + oup = b'' + if isinstance(inp, str): + inp = inp.encode('ASCII') while inp: if len(inp) > 48: t = inp[:48] inp = inp[48:] else: t = inp - inp = '' + inp = b'' e = binascii.b2a_base64(t) if e: oup = oup + e[:-1] @@ -1310,7 +1314,7 @@ def decode(self, inp): if not inp: - return '' + return b'' return binascii.a2b_base64(inp) Months = ' Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec'.split(' ') diff --git a/Lib/test/test_imaplib.py b/Lib/test/test_imaplib.py --- a/Lib/test/test_imaplib.py +++ b/Lib/test/test_imaplib.py @@ -94,14 +94,25 @@ class SimpleIMAPHandler(socketserver.StreamRequestHandler): timeout = 1 + continuation = None + capabilities = '' def _send(self, message): if verbose: print("SENT: %r" % message.strip()) self.wfile.write(message) + def _send_line(self, message): + self._send(message + b'\r\n') + + def _send_textline(self, message): + self._send_line(message.encode('ASCII')) + + def _send_tagged(self, tag, code, message): + self._send_textline(' '.join((tag, code, message))) + def handle(self): # Send a welcome message. - self._send(b'* OK IMAP4rev1\r\n') + self._send_textline('* OK IMAP4rev1') while 1: # Gather up input until we receive a line terminator or we timeout. 
# Accumulate read(1) because it's simpler to handle the differences @@ -121,19 +132,33 @@ break if verbose: print('GOT: %r' % line.strip()) - splitline = line.split() - tag = splitline[0].decode('ASCII') - cmd = splitline[1].decode('ASCII') + if self.continuation: + try: + self.continuation.send(line) + except StopIteration: + self.continuation = None + continue + splitline = line.decode('ASCII').split() + tag = splitline[0] + cmd = splitline[1] args = splitline[2:] if hasattr(self, 'cmd_'+cmd): - getattr(self, 'cmd_'+cmd)(tag, args) + continuation = getattr(self, 'cmd_'+cmd)(tag, args) + if continuation: + self.continuation = continuation + next(continuation) else: - self._send('{} BAD {} unknown\r\n'.format(tag, cmd).encode('ASCII')) + self._send_tagged(tag, 'BAD', cmd + ' unknown') def cmd_CAPABILITY(self, tag, args): - self._send(b'* CAPABILITY IMAP4rev1\r\n') - self._send('{} OK CAPABILITY completed\r\n'.format(tag).encode('ASCII')) + caps = 'IMAP4rev1 ' + self.capabilities if self.capabilities else 'IMAP4rev1' + self._send_textline('* CAPABILITY ' + caps) + self._send_tagged(tag, 'OK', 'CAPABILITY completed') + + def cmd_LOGOUT(self, tag, args): + self._send_textline('* BYE IMAP4ref1 Server logging out') + self._send_tagged(tag, 'OK', 'LOGOUT completed') class BaseThreadedNetworkedTests(unittest.TestCase): @@ -183,6 +208,16 @@ finally: self.reap_server(server, thread) + @contextmanager + def reaped_pair(self, hdlr): + server, thread = self.make_server((support.HOST, 0), hdlr) + client = self.imap_class(*server.server_address) + try: + yield server, client + finally: + client.logout() + self.reap_server(server, thread) + @reap_threads def test_connect(self): with self.reaped_server(SimpleIMAPHandler) as server: @@ -208,12 +243,86 @@ def cmd_CAPABILITY(self, tag, args): self._send(b'* CAPABILITY IMAP4rev1 AUTH\n') - self._send('{} OK CAPABILITY completed\r\n'.format(tag).encode('ASCII')) + self._send_tagged(tag, 'OK', 'CAPABILITY completed') with 
self.reaped_server(BadNewlineHandler) as server: self.assertRaises(imaplib.IMAP4.abort, self.imap_class, *server.server_address) + @reap_threads + def test_bad_auth_name(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_tagged(tag, 'NO', 'unrecognized authentication ' + 'type {}'.format(args[0])) + + with self.reaped_pair(MyServer) as (server, client): + with self.assertRaises(imaplib.IMAP4.error): + client.authenticate('METHOD', lambda: 1) + + @reap_threads + def test_invalid_authentication(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+') + self.response = yield + self._send_tagged(tag, 'NO', '[AUTHENTICATIONFAILED] invalid') + + with self.reaped_pair(MyServer) as (server, client): + with self.assertRaises(imaplib.IMAP4.error): + code, data = client.authenticate('MYAUTH', lambda x: b'fake') + + @reap_threads + def test_valid_authentication(self): + + class MyServer(SimpleIMAPHandler): + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+') + self.server.response = yield + self._send_tagged(tag, 'OK', 'FAKEAUTH successful') + + with self.reaped_pair(MyServer) as (server, client): + code, data = client.authenticate('MYAUTH', lambda x: b'fake') + self.assertEqual(code, 'OK') + self.assertEqual(server.response, + b'ZmFrZQ==\r\n') #b64 encoded 'fake' + + with self.reaped_pair(MyServer) as (server, client): + code, data = client.authenticate('MYAUTH', lambda x: 'fake') + self.assertEqual(code, 'OK') + self.assertEqual(server.response, + b'ZmFrZQ==\r\n') #b64 encoded 'fake' + + @reap_threads + def test_login_cram_md5(self): + + class AuthHandler(SimpleIMAPHandler): + + capabilities = 'LOGINDISABLED AUTH=CRAM-MD5' + + def cmd_AUTHENTICATE(self, tag, args): + self._send_textline('+ PDE4OTYuNjk3MTcwOTUyQHBvc3RvZmZpY2Uucm' + 'VzdG9uLm1jaS5uZXQ=') + r = yield + if r == b'dGltIGYxY2E2YmU0NjRiOWVmYTFjY2E2ZmZkNmNmMmQ5ZjMy\r\n': + 
self._send_tagged(tag, 'OK', 'CRAM-MD5 successful') + else: + self._send_tagged(tag, 'NO', 'No access') + + with self.reaped_pair(AuthHandler) as (server, client): + self.assertTrue('AUTH=CRAM-MD5' in client.capabilities) + ret, data = client.login_cram_md5("tim", "tanstaaftanstaaf") + self.assertEqual(ret, "OK") + + with self.reaped_pair(AuthHandler) as (server, client): + self.assertTrue('AUTH=CRAM-MD5' in client.capabilities) + ret, data = client.login_cram_md5("tim", b"tanstaaftanstaaf") + self.assertEqual(ret, "OK") class ThreadedNetworkedTests(BaseThreadedNetworkedTests): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -260,6 +260,9 @@ Library ------- +- Issue #13700: Fix byte/string handling in imaplib authentication when an + authobject is specified. + - Issue #13153: Tkinter functions now raise TclError instead of ValueError when a string argument contains non-BMP character. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 19 20:52:48 2013 From: python-checkins at python.org (ezio.melotti) Date: Tue, 19 Feb 2013 20:52:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_=2317242=3A_fix_code_high?= =?utf-8?q?light=2E__Patch_by_Berker_Peksag=2E?= Message-ID: <3Z9Xfm0QZFzSqM@mail.python.org> http://hg.python.org/devguide/rev/6015789cbce0 changeset: 599:6015789cbce0 user: Ezio Melotti date: Tue Feb 19 21:52:33 2013 +0200 summary: #17242: fix code highlight. Patch by Berker Peksag. files: docquality.rst | 20 +++++++++++--------- 1 files changed, 11 insertions(+), 9 deletions(-) diff --git a/docquality.rst b/docquality.rst --- a/docquality.rst +++ b/docquality.rst @@ -72,30 +72,32 @@ Helping with the Developer's Guide ---------------------------------- +.. highlight:: bash + The Developer's Guide uses the same process as the main Python documentation, except for some small differences. The source lives in a `separate -repository`_. 
Bug reports and patches should be submitted to the `python +repository`_. Bug reports and patches should be submitted to the `Python bug tracker`_ using the ``devguide`` component. Changes to the devguide are normally published within a day, on a schedule that may be different from the main documentation. .. _separate repository: http://hg.python.org/devguide -.. _python bug tracker: http://bugs.python.org +.. _Python bug tracker: http://bugs.python.org -To clone the Developer's Guide: +To clone the Developer's Guide:: -``hg clone http://hg.python.org/devguide`` + $ hg clone http://hg.python.org/devguide -Core developers should use: +Core developers should use:: -``hg clone ssh://hg at hg.python.org/devguide`` + $ hg clone ssh://hg at hg.python.org/devguide instead so that they can push back their edits to the server. -To build the devguide, you must have `Sphinx`_ installed. The devguide html -can be built by running: +To build the devguide, you must have `Sphinx`_ installed. The devguide HTML +can be built by running:: - make html + $ make html in the checkout directory, which will write the files to the ``_build/html`` directory. -- Repository URL: http://hg.python.org/devguide From python-checkins at python.org Wed Feb 20 00:33:47 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 20 Feb 2013 00:33:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_=236623=3A_Add_explicit_de?= =?utf-8?q?precation_warning_for_ftplib=2ENetrc=2E?= Message-ID: <3Z9dYl2ZkWzSpX@mail.python.org> http://hg.python.org/cpython/rev/acf247d25f17 changeset: 82268:acf247d25f17 user: R David Murray date: Tue Feb 19 18:32:28 2013 -0500 summary: #6623: Add explicit deprecation warning for ftplib.Netrc. 
files: Lib/ftplib.py | 3 +++ Lib/test/test_ftplib.py | 13 ++++++++++++- Misc/NEWS | 3 +++ 3 files changed, 18 insertions(+), 1 deletions(-) diff --git a/Lib/ftplib.py b/Lib/ftplib.py --- a/Lib/ftplib.py +++ b/Lib/ftplib.py @@ -39,6 +39,7 @@ import os import sys import socket +import warnings from socket import _GLOBAL_DEFAULT_TIMEOUT __all__ = ["FTP","Netrc"] @@ -953,6 +954,8 @@ __defacct = None def __init__(self, filename=None): + warnings.warn("This class is deprecated, use the netrc module instead", + DeprecationWarning, 2) if filename is None: if "HOME" in os.environ: filename = os.path.join(os.environ["HOME"], diff --git a/Lib/test/test_ftplib.py b/Lib/test/test_ftplib.py --- a/Lib/test/test_ftplib.py +++ b/Lib/test/test_ftplib.py @@ -985,8 +985,19 @@ ftp.close() +class TestNetrcDeprecation(TestCase): + + def test_deprecation(self): + with support.temp_cwd(), support.EnvironmentVarGuard() as env: + env['HOME'] = os.getcwd() + open('.netrc', 'w').close() + with self.assertWarns(DeprecationWarning): + ftplib.Netrc() + + + def test_main(): - tests = [TestFTPClass, TestTimeouts] + tests = [TestFTPClass, TestTimeouts, TestNetrcDeprecation] if support.IPV6_ENABLED: tests.append(TestIPv6Environment) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -260,6 +260,9 @@ Library ------- +- Issue #6623: Added explicit DeprecationWarning for ftplib.netrc, which has + been deprecated and undocumented for a long time. + - Issue #13700: Fix byte/string handling in imaplib authentication when an authobject is specified. 
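The warning added above follows the standard deprecation pattern. A self-contained sketch — the class name here is an illustrative stand-in, not the real `ftplib.Netrc`:

```python
import warnings

class LegacyNetrc:
    """Illustrative stand-in for the deprecated ftplib.Netrc class."""
    def __init__(self, filename=None):
        # stacklevel=2 attributes the warning to the caller, as in the commit.
        warnings.warn("This class is deprecated, use the netrc module instead",
                      DeprecationWarning, stacklevel=2)
        self.filename = filename

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")   # ensure the warning is not suppressed
    LegacyNetrc()

assert any(issubclass(w.category, DeprecationWarning) for w in caught)
assert "netrc module" in str(caught[0].message)
```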
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 01:55:06 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 20 Feb 2013 01:55:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzE3MTQzOiBmaXgg?= =?utf-8?q?buildbot_failures_on_Windows=2E?= Message-ID: <3Z9gMZ3YQKzSxr@mail.python.org> http://hg.python.org/cpython/rev/662f97427acf changeset: 82269:662f97427acf branch: 3.3 parent: 82266:b21f955b8ba2 user: Ezio Melotti date: Wed Feb 20 02:52:49 2013 +0200 summary: #17143: fix buildbot failures on Windows. files: Lib/test/test_trace.py | 11 ++++++----- 1 files changed, 6 insertions(+), 5 deletions(-) diff --git a/Lib/test/test_trace.py b/Lib/test/test_trace.py --- a/Lib/test/test_trace.py +++ b/Lib/test/test_trace.py @@ -3,7 +3,6 @@ import sys from test.support import (run_unittest, TESTFN, rmtree, unlink, captured_stdout) -import tempfile import unittest import trace @@ -396,14 +395,16 @@ trace.find_lines(foo.__code__, ["eggs"]) def test_deprecated_find_strings(self): + with open(TESTFN, 'w') as fd: + self.addCleanup(unlink, TESTFN) with self.assertWarns(DeprecationWarning): - with tempfile.NamedTemporaryFile() as fd: - trace.find_strings(fd.name) + trace.find_strings(fd.name) def test_deprecated_find_executable_linenos(self): + with open(TESTFN, 'w') as fd: + self.addCleanup(unlink, TESTFN) with self.assertWarns(DeprecationWarning): - with tempfile.NamedTemporaryFile() as fd: - trace.find_executable_linenos(fd.name) + trace.find_executable_linenos(fd.name) def test_main(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 01:55:07 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 20 Feb 2013 01:55:07 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MTQzOiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3Z9gMb692szSxr@mail.python.org> http://hg.python.org/cpython/rev/1bf0ff7db856 
changeset: 82270:1bf0ff7db856 parent: 82268:acf247d25f17 parent: 82269:662f97427acf user: Ezio Melotti date: Wed Feb 20 02:54:50 2013 +0200 summary: #17143: merge with 3.3. files: Lib/test/test_trace.py | 11 ++++++----- 1 files changed, 6 insertions(+), 5 deletions(-) diff --git a/Lib/test/test_trace.py b/Lib/test/test_trace.py --- a/Lib/test/test_trace.py +++ b/Lib/test/test_trace.py @@ -3,7 +3,6 @@ import sys from test.support import (run_unittest, TESTFN, rmtree, unlink, captured_stdout) -import tempfile import unittest import trace @@ -396,14 +395,16 @@ trace.find_lines(foo.__code__, ["eggs"]) def test_deprecated_find_strings(self): + with open(TESTFN, 'w') as fd: + self.addCleanup(unlink, TESTFN) with self.assertWarns(DeprecationWarning): - with tempfile.NamedTemporaryFile() as fd: - trace.find_strings(fd.name) + trace.find_strings(fd.name) def test_deprecated_find_executable_linenos(self): + with open(TESTFN, 'w') as fd: + self.addCleanup(unlink, TESTFN) with self.assertWarns(DeprecationWarning): - with tempfile.NamedTemporaryFile() as fd: - trace.find_executable_linenos(fd.name) + trace.find_executable_linenos(fd.name) def test_main(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 02:01:16 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 20 Feb 2013 02:01:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzc4NDI6IGJhY2tw?= =?utf-8?q?ort_fix_for_py=5Fcompile=2Ecompile_syntax_error_message_handlin?= =?utf-8?q?g=2E?= Message-ID: <3Z9gVh2tznzRdZ@mail.python.org> http://hg.python.org/cpython/rev/c7f04c09dc56 changeset: 82271:c7f04c09dc56 branch: 2.7 parent: 82262:0082b7bf9501 user: R David Murray date: Tue Feb 19 20:00:11 2013 -0500 summary: #7842: backport fix for py_compile.compile syntax error message handling. 
files: Lib/py_compile.py | 2 +- Misc/NEWS | 2 ++ 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Lib/py_compile.py b/Lib/py_compile.py --- a/Lib/py_compile.py +++ b/Lib/py_compile.py @@ -112,7 +112,7 @@ try: codeobject = __builtin__.compile(codestring, dfile or file,'exec') except Exception,err: - py_exc = PyCompileError(err.__class__,err.args,dfile or file) + py_exc = PyCompileError(err.__class__, err, dfile or file) if doraise: raise py_exc else: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -208,6 +208,8 @@ Library ------- +- Issue #7842: backported fix for py_compile.compile() syntax error handling. + - Issue #13153: Tkinter functions now raise TclError instead of ValueError when a unicode argument contains non-BMP character. -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Wed Feb 20 06:06:42 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Wed, 20 Feb 2013 06:06:42 +0100 Subject: [Python-checkins] Daily reference leaks (1bf0ff7db856): sum=1 Message-ID: results for 1bf0ff7db856 on branch "default" -------------------------------------------- test_unittest leaked [0, -1, 2] memory blocks, sum=1 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflog3453Dz', '-x'] From python-checkins at python.org Wed Feb 20 17:02:25 2013 From: python-checkins at python.org (barry.warsaw) Date: Wed, 20 Feb 2013 17:02:25 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Add_missing_2=2E6=2E8_release?= =?utf-8?q?=2C_and_describe_plans_for_2=2E6=2E9=2E?= Message-ID: <3ZB3VT5KCWzRYT@mail.python.org> http://hg.python.org/peps/rev/c5abe58489d1 changeset: 4756:c5abe58489d1 user: Barry Warsaw date: Wed Feb 20 11:02:21 2013 -0500 summary: Add missing 2.6.8 release, and describe plans for 2.6.9. 
files: pep-0361.txt | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diff --git a/pep-0361.txt b/pep-0361.txt --- a/pep-0361.txt +++ b/pep-0361.txt @@ -80,8 +80,10 @@ Mar 19 2010: Python 2.6.5 final released Aug 24 2010: Python 2.6.6 final released Jun 03 2011: Python 2.6.7 final released (security-only) + Apr 10 2012: Python 2.6.8 final released (security-only) - Python 2.6.8 (security-only) planned for Feb 10-17 2012 + Python 2.6.9 (security-only) planned for October 2013. This + will be the last Python 2.6 release. See the public `Google calendar`_ -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Wed Feb 20 18:53:46 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE1MzAx?= =?utf-8?q?=3A_Enhance_os=2E*chown=28=29_testing=2E__Based_on_patch_by_Lar?= =?utf-8?q?ry_Hastings=2E?= Message-ID: <3ZB5yy1yjbzT2F@mail.python.org> http://hg.python.org/cpython/rev/9b37e53838eb changeset: 82272:9b37e53838eb branch: 2.7 user: Serhiy Storchaka date: Wed Feb 20 19:39:59 2013 +0200 summary: Issue #15301: Enhance os.*chown() testing. Based on patch by Larry Hastings. 
files: Lib/test/test_posix.py | 64 +++++++++++++++++++---------- 1 files changed, 41 insertions(+), 23 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -224,30 +224,42 @@ def _test_all_chown_common(self, chown_func, first_param, stat_func): """Common code for chown, fchown and lchown tests.""" - def check_stat(): + def check_stat(uid, gid): if stat_func is not None: stat = stat_func(first_param) - self.assertEqual(stat.st_uid, os.getuid()) - self.assertEqual(stat.st_gid, os.getgid()) + self.assertEqual(stat.st_uid, uid) + self.assertEqual(stat.st_gid, gid) + uid = os.getuid() + gid = os.getgid() # test a successful chown call - chown_func(first_param, os.getuid(), os.getgid()) - check_stat() - chown_func(first_param, -1, os.getgid()) - check_stat() - chown_func(first_param, os.getuid(), -1) - check_stat() + chown_func(first_param, uid, gid) + check_stat(uid, gid) + chown_func(first_param, -1, gid) + check_stat(uid, gid) + chown_func(first_param, uid, -1) + check_stat(uid, gid) - if os.getuid() == 0: - try: - # Many linux distros have a nfsnobody user as MAX_UID-2 - # that makes a good test case for signedness issues. - # http://bugs.python.org/issue1747858 - # This part of the test only runs when run as root. - # Only scary people run their tests as root. - ent = pwd.getpwnam('nfsnobody') - chown_func(first_param, ent.pw_uid, ent.pw_gid) - except KeyError: - pass + if uid == 0: + # Try an amusingly large uid/gid to make sure we handle + # large unsigned values. (chown lets you use any + # uid/gid you like, even if they aren't defined.) + # + # This problem keeps coming up: + # http://bugs.python.org/issue1747858 + # http://bugs.python.org/issue4591 + # http://bugs.python.org/issue15301 + # Hopefully the fix in 4591 fixes it for good! + # + # This part of the test only runs when run as root. + # Only scary people run their tests as root. 
+ + big_value = 2**31 + chown_func(first_param, big_value, big_value) + check_stat(big_value, big_value) + chown_func(first_param, -1, -1) + check_stat(big_value, big_value) + chown_func(first_param, uid, gid) + check_stat(uid, gid) elif platform.system() in ('HP-UX', 'SunOS'): # HP-UX and Solaris can allow a non-root user to chown() to root # (issue #5113) @@ -256,11 +268,17 @@ else: # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) - check_stat() + check_stat(uid, gid) + # test illegal types + for t in str, float: + self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) + check_stat(uid, gid) + self.assertRaises(TypeError, chown_func, first_param, uid, t(gid)) + check_stat(uid, gid) @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()") def test_chown(self): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 18:53:47 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE1MzAx?= =?utf-8?q?=3A_Enhance_os=2E*chown=28=29_testing=2E__Based_on_patch_by_Lar?= =?utf-8?q?ry_Hastings=2E?= Message-ID: <3ZB5yz69cgz7LkT@mail.python.org> http://hg.python.org/cpython/rev/a0baf5347cd1 changeset: 82273:a0baf5347cd1 branch: 3.2 parent: 82265:3d4302718e7c user: Serhiy Storchaka date: Wed Feb 20 19:40:25 2013 +0200 summary: Issue #15301: Enhance os.*chown() testing. Based on patch by Larry Hastings. 
files: Lib/test/test_posix.py | 64 +++++++++++++++++++---------- 1 files changed, 41 insertions(+), 23 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -234,30 +234,42 @@ def _test_all_chown_common(self, chown_func, first_param, stat_func): """Common code for chown, fchown and lchown tests.""" - def check_stat(): + def check_stat(uid, gid): if stat_func is not None: stat = stat_func(first_param) - self.assertEqual(stat.st_uid, os.getuid()) - self.assertEqual(stat.st_gid, os.getgid()) + self.assertEqual(stat.st_uid, uid) + self.assertEqual(stat.st_gid, gid) + uid = os.getuid() + gid = os.getgid() # test a successful chown call - chown_func(first_param, os.getuid(), os.getgid()) - check_stat() - chown_func(first_param, -1, os.getgid()) - check_stat() - chown_func(first_param, os.getuid(), -1) - check_stat() + chown_func(first_param, uid, gid) + check_stat(uid, gid) + chown_func(first_param, -1, gid) + check_stat(uid, gid) + chown_func(first_param, uid, -1) + check_stat(uid, gid) - if os.getuid() == 0: - try: - # Many linux distros have a nfsnobody user as MAX_UID-2 - # that makes a good test case for signedness issues. - # http://bugs.python.org/issue1747858 - # This part of the test only runs when run as root. - # Only scary people run their tests as root. - ent = pwd.getpwnam('nfsnobody') - chown_func(first_param, ent.pw_uid, ent.pw_gid) - except KeyError: - pass + if uid == 0: + # Try an amusingly large uid/gid to make sure we handle + # large unsigned values. (chown lets you use any + # uid/gid you like, even if they aren't defined.) + # + # This problem keeps coming up: + # http://bugs.python.org/issue1747858 + # http://bugs.python.org/issue4591 + # http://bugs.python.org/issue15301 + # Hopefully the fix in 4591 fixes it for good! + # + # This part of the test only runs when run as root. + # Only scary people run their tests as root. 
+ + big_value = 2**31 + chown_func(first_param, big_value, big_value) + check_stat(big_value, big_value) + chown_func(first_param, -1, -1) + check_stat(big_value, big_value) + chown_func(first_param, uid, gid) + check_stat(uid, gid) elif platform.system() in ('HP-UX', 'SunOS'): # HP-UX and Solaris can allow a non-root user to chown() to root # (issue #5113) @@ -266,11 +278,17 @@ else: # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) - check_stat() + check_stat(uid, gid) + # test illegal types + for t in str, float: + self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) + check_stat(uid, gid) + self.assertRaises(TypeError, chown_func, first_param, uid, t(gid)) + check_stat(uid, gid) @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()") def test_chown(self): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 18:53:49 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2315301=3A_Enhance_os=2E*chown=28=29_testing=2E__Based_?= =?utf-8?q?on_patch_by_Larry_Hastings=2E?= Message-ID: <3ZB5z13RZDz7LlD@mail.python.org> http://hg.python.org/cpython/rev/e97b6394848b changeset: 82274:e97b6394848b branch: 3.3 parent: 82269:662f97427acf parent: 82273:a0baf5347cd1 user: Serhiy Storchaka date: Wed Feb 20 19:42:31 2013 +0200 summary: Issue #15301: Enhance os.*chown() testing. Based on patch by Larry Hastings. 
files: Lib/test/test_posix.py | 64 +++++++++++++++++++---------- 1 files changed, 41 insertions(+), 23 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -406,30 +406,42 @@ def _test_all_chown_common(self, chown_func, first_param, stat_func): """Common code for chown, fchown and lchown tests.""" - def check_stat(): + def check_stat(uid, gid): if stat_func is not None: stat = stat_func(first_param) - self.assertEqual(stat.st_uid, os.getuid()) - self.assertEqual(stat.st_gid, os.getgid()) + self.assertEqual(stat.st_uid, uid) + self.assertEqual(stat.st_gid, gid) + uid = os.getuid() + gid = os.getgid() # test a successful chown call - chown_func(first_param, os.getuid(), os.getgid()) - check_stat() - chown_func(first_param, -1, os.getgid()) - check_stat() - chown_func(first_param, os.getuid(), -1) - check_stat() + chown_func(first_param, uid, gid) + check_stat(uid, gid) + chown_func(first_param, -1, gid) + check_stat(uid, gid) + chown_func(first_param, uid, -1) + check_stat(uid, gid) - if os.getuid() == 0: - try: - # Many linux distros have a nfsnobody user as MAX_UID-2 - # that makes a good test case for signedness issues. - # http://bugs.python.org/issue1747858 - # This part of the test only runs when run as root. - # Only scary people run their tests as root. - ent = pwd.getpwnam('nfsnobody') - chown_func(first_param, ent.pw_uid, ent.pw_gid) - except KeyError: - pass + if uid == 0: + # Try an amusingly large uid/gid to make sure we handle + # large unsigned values. (chown lets you use any + # uid/gid you like, even if they aren't defined.) + # + # This problem keeps coming up: + # http://bugs.python.org/issue1747858 + # http://bugs.python.org/issue4591 + # http://bugs.python.org/issue15301 + # Hopefully the fix in 4591 fixes it for good! + # + # This part of the test only runs when run as root. + # Only scary people run their tests as root. 
+ + big_value = 2**31 + chown_func(first_param, big_value, big_value) + check_stat(big_value, big_value) + chown_func(first_param, -1, -1) + check_stat(big_value, big_value) + chown_func(first_param, uid, gid) + check_stat(uid, gid) elif platform.system() in ('HP-UX', 'SunOS'): # HP-UX and Solaris can allow a non-root user to chown() to root # (issue #5113) @@ -438,11 +450,17 @@ else: # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) - check_stat() + check_stat(uid, gid) + # test illegal types + for t in str, float: + self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) + check_stat(uid, gid) + self.assertRaises(TypeError, chown_func, first_param, uid, t(gid)) + check_stat(uid, gid) @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()") def test_chown(self): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 18:53:51 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2315301=3A_Enhance_os=2E*chown=28=29_testing=2E__?= =?utf-8?q?Based_on_patch_by_Larry_Hastings=2E?= Message-ID: <3ZB5z30BSDz7Ll6@mail.python.org> http://hg.python.org/cpython/rev/d4bf997a34e9 changeset: 82275:d4bf997a34e9 parent: 82270:1bf0ff7db856 parent: 82274:e97b6394848b user: Serhiy Storchaka date: Wed Feb 20 19:43:05 2013 +0200 summary: Issue #15301: Enhance os.*chown() testing. Based on patch by Larry Hastings. 
files: Lib/test/test_posix.py | 64 +++++++++++++++++++---------- 1 files changed, 41 insertions(+), 23 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -406,30 +406,42 @@ def _test_all_chown_common(self, chown_func, first_param, stat_func): """Common code for chown, fchown and lchown tests.""" - def check_stat(): + def check_stat(uid, gid): if stat_func is not None: stat = stat_func(first_param) - self.assertEqual(stat.st_uid, os.getuid()) - self.assertEqual(stat.st_gid, os.getgid()) + self.assertEqual(stat.st_uid, uid) + self.assertEqual(stat.st_gid, gid) + uid = os.getuid() + gid = os.getgid() # test a successful chown call - chown_func(first_param, os.getuid(), os.getgid()) - check_stat() - chown_func(first_param, -1, os.getgid()) - check_stat() - chown_func(first_param, os.getuid(), -1) - check_stat() + chown_func(first_param, uid, gid) + check_stat(uid, gid) + chown_func(first_param, -1, gid) + check_stat(uid, gid) + chown_func(first_param, uid, -1) + check_stat(uid, gid) - if os.getuid() == 0: - try: - # Many linux distros have a nfsnobody user as MAX_UID-2 - # that makes a good test case for signedness issues. - # http://bugs.python.org/issue1747858 - # This part of the test only runs when run as root. - # Only scary people run their tests as root. - ent = pwd.getpwnam('nfsnobody') - chown_func(first_param, ent.pw_uid, ent.pw_gid) - except KeyError: - pass + if uid == 0: + # Try an amusingly large uid/gid to make sure we handle + # large unsigned values. (chown lets you use any + # uid/gid you like, even if they aren't defined.) + # + # This problem keeps coming up: + # http://bugs.python.org/issue1747858 + # http://bugs.python.org/issue4591 + # http://bugs.python.org/issue15301 + # Hopefully the fix in 4591 fixes it for good! + # + # This part of the test only runs when run as root. + # Only scary people run their tests as root. 
+ + big_value = 2**31 + chown_func(first_param, big_value, big_value) + check_stat(big_value, big_value) + chown_func(first_param, -1, -1) + check_stat(big_value, big_value) + chown_func(first_param, uid, gid) + check_stat(uid, gid) elif platform.system() in ('HP-UX', 'SunOS'): # HP-UX and Solaris can allow a non-root user to chown() to root # (issue #5113) @@ -438,11 +450,17 @@ else: # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat() + check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) - check_stat() + check_stat(uid, gid) + # test illegal types + for t in str, float: + self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) + check_stat(uid, gid) + self.assertRaises(TypeError, chown_func, first_param, uid, t(gid)) + check_stat(uid, gid) @unittest.skipUnless(hasattr(posix, 'chown'), "test needs os.chown()") def test_chown(self): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 18:53:52 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MjQ4?= =?utf-8?q?=3A_Fix_os=2E*chown=28=29_testing_when_user_has_group_root=2E?= Message-ID: <3ZB5z465c6z7LkH@mail.python.org> http://hg.python.org/cpython/rev/0383a54347ea changeset: 82276:0383a54347ea branch: 2.7 parent: 82272:9b37e53838eb user: Serhiy Storchaka date: Wed Feb 20 19:47:31 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user has group root. 
files: Lib/test/test_posix.py | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -269,10 +269,11 @@ # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) check_stat(uid, gid) - self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) + if gid != 0: + self.assertRaises(OSError, chown_func, first_param, -1, 0) + check_stat(uid, gid) # test illegal types for t in str, float: self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 18:53:54 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MjQ4?= =?utf-8?q?=3A_Fix_os=2E*chown=28=29_testing_when_user_has_group_root=2E?= Message-ID: <3ZB5z61bS5z7LlJ@mail.python.org> http://hg.python.org/cpython/rev/a49bbaadce67 changeset: 82277:a49bbaadce67 branch: 3.2 parent: 82273:a0baf5347cd1 user: Serhiy Storchaka date: Wed Feb 20 19:48:22 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user has group root. 
files: Lib/test/test_posix.py | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -279,10 +279,11 @@ # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) check_stat(uid, gid) - self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) + if gid != 0: + self.assertRaises(OSError, chown_func, first_param, -1, 0) + check_stat(uid, gid) # test illegal types for t in str, float: self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 18:53:55 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317248=3A_Fix_os=2E*chown=28=29_testing_when_user_has_?= =?utf-8?q?group_root=2E?= Message-ID: <3ZB5z749jGz7LlH@mail.python.org> http://hg.python.org/cpython/rev/96b4acb253f8 changeset: 82278:96b4acb253f8 branch: 3.3 parent: 82274:e97b6394848b parent: 82277:a49bbaadce67 user: Serhiy Storchaka date: Wed Feb 20 19:48:47 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user has group root. 
files: Lib/test/test_posix.py | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -451,10 +451,11 @@ # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) check_stat(uid, gid) - self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) + if gid != 0: + self.assertRaises(OSError, chown_func, first_param, -1, 0) + check_stat(uid, gid) # test illegal types for t in str, float: self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 18:53:56 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Wed, 20 Feb 2013 18:53:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317248=3A_Fix_os=2E*chown=28=29_testing_when_use?= =?utf-8?q?r_has_group_root=2E?= Message-ID: <3ZB5z86q3TzT2F@mail.python.org> http://hg.python.org/cpython/rev/8c11bbdbac09 changeset: 82279:8c11bbdbac09 parent: 82275:d4bf997a34e9 parent: 82278:96b4acb253f8 user: Serhiy Storchaka date: Wed Feb 20 19:49:12 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user has group root. 
files: Lib/test/test_posix.py | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -451,10 +451,11 @@ # non-root cannot chown to root, raises OSError self.assertRaises(OSError, chown_func, first_param, 0, 0) check_stat(uid, gid) - self.assertRaises(OSError, chown_func, first_param, -1, 0) - check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) + if gid != 0: + self.assertRaises(OSError, chown_func, first_param, -1, 0) + check_stat(uid, gid) # test illegal types for t in str, float: self.assertRaises(TypeError, chown_func, first_param, t(uid), gid) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 21:16:03 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 20 Feb 2013 21:16:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Rebuild_import?= =?utf-8?q?lib=2Eh_after_the_changes_introduced_in_0f65bf6063ca=2E?= Message-ID: <3ZB9774yS5z7Lld@mail.python.org> http://hg.python.org/cpython/rev/9d00c79b27e1 changeset: 82280:9d00c79b27e1 branch: 3.3 parent: 82278:96b4acb253f8 user: Ezio Melotti date: Wed Feb 20 21:42:46 2013 +0200 summary: Rebuild importlib.h after the changes introduced in 0f65bf6063ca. 
files: Python/importlib.h | 2199 +++++++++++++++---------------- 1 files changed, 1098 insertions(+), 1101 deletions(-) diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 21:16:05 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 20 Feb 2013 21:16:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_importlib=2Eh_rebuild_from_3=2E3_after_the_changes?= =?utf-8?q?_introduced_in_65eaac000147=2E?= Message-ID: <3ZB9790lPdz7Lld@mail.python.org> http://hg.python.org/cpython/rev/cf0b7d3e5fc6 changeset: 82281:cf0b7d3e5fc6 parent: 82279:8c11bbdbac09 parent: 82280:9d00c79b27e1 user: Ezio Melotti date: Wed Feb 20 22:15:47 2013 +0200 summary: Merge importlib.h rebuild from 3.3 after the changes introduced in 65eaac000147. files: Python/importlib.h | 2191 +++++++++++++++---------------- 1 files changed, 1094 insertions(+), 1097 deletions(-) diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 22:56:16 2013 From: python-checkins at python.org (benjamin.peterson) Date: Wed, 20 Feb 2013 22:56:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_fix_building_w?= =?utf-8?q?ithout_pymalloc_=28closes_=2317228=29?= Message-ID: <3ZBCLm4c0Gz7Llb@mail.python.org> http://hg.python.org/cpython/rev/470350fd2831 changeset: 82282:470350fd2831 branch: 3.3 parent: 82280:9d00c79b27e1 user: Benjamin Peterson date: Wed Feb 20 16:54:30 2013 -0500 summary: fix building without pymalloc (closes #17228) files: Misc/NEWS | 2 ++ Objects/obmalloc.c | 2 +- 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -689,6 +689,8 @@ Build ----- +- 
Issue #17228: Fix building without pymalloc. + - Issue #3718: Use AC_ARG_VAR to set MACHDEP in configure.ac. - Issue #17031: Fix running regen in cross builds. diff --git a/Objects/obmalloc.c b/Objects/obmalloc.c --- a/Objects/obmalloc.c +++ b/Objects/obmalloc.c @@ -1737,7 +1737,7 @@ k = 3; do { size_t nextvalue = value / 10; - uint digit = (uint)(value - nextvalue * 10); + unsigned int digit = (unsigned int)(value - nextvalue * 10); value = nextvalue; buf[i--] = (char)(digit + '0'); --k; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 22:56:18 2013 From: python-checkins at python.org (benjamin.peterson) Date: Wed, 20 Feb 2013 22:56:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_fix_building_w?= =?utf-8?q?ithout_pymalloc_=28closes_=2317228=29?= Message-ID: <3ZBCLp0P8pz7Llh@mail.python.org> http://hg.python.org/cpython/rev/67fa0643751d changeset: 82283:67fa0643751d branch: 2.7 parent: 82276:0383a54347ea user: Benjamin Peterson date: Wed Feb 20 16:54:30 2013 -0500 summary: fix building without pymalloc (closes #17228) files: Misc/NEWS | 2 ++ Objects/obmalloc.c | 2 +- 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -853,6 +853,8 @@ Build ----- +- Issue #17228: Fix building without pymalloc. + - Issue #17086: Backport the patches from the 3.3 branch to cross-build the package. 
diff --git a/Objects/obmalloc.c b/Objects/obmalloc.c --- a/Objects/obmalloc.c +++ b/Objects/obmalloc.c @@ -1713,7 +1713,7 @@ k = 3; do { size_t nextvalue = value / 10; - uint digit = (uint)(value - nextvalue * 10); + unsigned int digit = (unsigned int)(value - nextvalue * 10); value = nextvalue; buf[i--] = (char)(digit + '0'); --k; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 22:56:19 2013 From: python-checkins at python.org (benjamin.peterson) Date: Wed, 20 Feb 2013 22:56:19 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogbWVyZ2UgMy4zICgjMTcyMjgp?= Message-ID: <3ZBCLq31xqz7Llh@mail.python.org> http://hg.python.org/cpython/rev/ea4a36c667ce changeset: 82284:ea4a36c667ce parent: 82281:cf0b7d3e5fc6 parent: 82282:470350fd2831 user: Benjamin Peterson date: Wed Feb 20 16:56:06 2013 -0500 summary: merge 3.3 (#17228) files: Misc/NEWS | 2 ++ Objects/obmalloc.c | 2 +- 2 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -940,6 +940,8 @@ Build ----- +- Issue #17228: Fix building without pymalloc. + - Issue #3718: Use AC_ARG_VAR to set MACHDEP in configure.ac. - Issue #16235: Implement python-config as a shell script. 
diff --git a/Objects/obmalloc.c b/Objects/obmalloc.c --- a/Objects/obmalloc.c +++ b/Objects/obmalloc.c @@ -1763,7 +1763,7 @@ k = 3; do { size_t nextvalue = value / 10; - uint digit = (uint)(value - nextvalue * 10); + unsigned int digit = (unsigned int)(value - nextvalue * 10); value = nextvalue; buf[i--] = (char)(digit + '0'); --k; -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 23:01:57 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 20 Feb 2013 23:01:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Silence_Deprec?= =?utf-8?q?ationWarnings_in_test=5Funicode=2E?= Message-ID: <3ZBCTK3J21zNmD@mail.python.org> http://hg.python.org/cpython/rev/7f34f8fa799d changeset: 82285:7f34f8fa799d branch: 3.3 parent: 82280:9d00c79b27e1 user: Ezio Melotti date: Wed Feb 20 23:56:01 2013 +0200 summary: Silence DeprecationWarnings in test_unicode. files: Lib/test/test_unicode.py | 18 ++++++++++-------- 1 files changed, 10 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py --- a/Lib/test/test_unicode.py +++ b/Lib/test/test_unicode.py @@ -2172,15 +2172,17 @@ # generate a fresh string (refcount=1) text = 'a' * length + 'b' - # fill wstr internal field - abc = text.encode('unicode_internal') - self.assertEqual(abc.decode('unicode_internal'), text) + with support.check_warnings(('unicode_internal codec has been ' + 'deprecated', DeprecationWarning)): + # fill wstr internal field + abc = text.encode('unicode_internal') + self.assertEqual(abc.decode('unicode_internal'), text) - # resize text: wstr field must be cleared and then recomputed - text += 'c' - abcdef = text.encode('unicode_internal') - self.assertNotEqual(abc, abcdef) - self.assertEqual(abcdef.decode('unicode_internal'), text) + # resize text: wstr field must be cleared and then recomputed + text += 'c' + abcdef = text.encode('unicode_internal') + self.assertNotEqual(abc, abcdef) + 
self.assertEqual(abcdef.decode('unicode_internal'), text) class StringModuleTest(unittest.TestCase): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 23:01:58 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 20 Feb 2013 23:01:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4zIC0+IDMuMyk6?= =?utf-8?q?_Merge_heads=2E?= Message-ID: <3ZBCTM00pbzSvX@mail.python.org> http://hg.python.org/cpython/rev/519ad21e12c4 changeset: 82286:519ad21e12c4 branch: 3.3 parent: 82282:470350fd2831 parent: 82285:7f34f8fa799d user: Ezio Melotti date: Thu Feb 21 00:00:17 2013 +0200 summary: Merge heads. files: Lib/test/test_unicode.py | 18 ++++++++++-------- 1 files changed, 10 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py --- a/Lib/test/test_unicode.py +++ b/Lib/test/test_unicode.py @@ -2172,15 +2172,17 @@ # generate a fresh string (refcount=1) text = 'a' * length + 'b' - # fill wstr internal field - abc = text.encode('unicode_internal') - self.assertEqual(abc.decode('unicode_internal'), text) + with support.check_warnings(('unicode_internal codec has been ' + 'deprecated', DeprecationWarning)): + # fill wstr internal field + abc = text.encode('unicode_internal') + self.assertEqual(abc.decode('unicode_internal'), text) - # resize text: wstr field must be cleared and then recomputed - text += 'c' - abcdef = text.encode('unicode_internal') - self.assertNotEqual(abc, abcdef) - self.assertEqual(abcdef.decode('unicode_internal'), text) + # resize text: wstr field must be cleared and then recomputed + text += 'c' + abcdef = text.encode('unicode_internal') + self.assertNotEqual(abc, abcdef) + self.assertEqual(abcdef.decode('unicode_internal'), text) class StringModuleTest(unittest.TestCase): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 20 23:02:00 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 20 
Feb 2013 23:02:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_DeprecationWarnings_silencing_in_test=5Funicode_fr?= =?utf-8?q?om_3=2E3=2E?= Message-ID: <3ZBCTN2jSDzSv3@mail.python.org> http://hg.python.org/cpython/rev/e2aa7ffa2005 changeset: 82287:e2aa7ffa2005 parent: 82284:ea4a36c667ce parent: 82286:519ad21e12c4 user: Ezio Melotti date: Thu Feb 21 00:01:44 2013 +0200 summary: Merge DeprecationWarnings silencing in test_unicode from 3.3. files: Lib/test/test_unicode.py | 18 ++++++++++-------- 1 files changed, 10 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py --- a/Lib/test/test_unicode.py +++ b/Lib/test/test_unicode.py @@ -2196,15 +2196,17 @@ # generate a fresh string (refcount=1) text = 'a' * length + 'b' - # fill wstr internal field - abc = text.encode('unicode_internal') - self.assertEqual(abc.decode('unicode_internal'), text) + with support.check_warnings(('unicode_internal codec has been ' + 'deprecated', DeprecationWarning)): + # fill wstr internal field + abc = text.encode('unicode_internal') + self.assertEqual(abc.decode('unicode_internal'), text) - # resize text: wstr field must be cleared and then recomputed - text += 'c' - abcdef = text.encode('unicode_internal') - self.assertNotEqual(abc, abcdef) - self.assertEqual(abcdef.decode('unicode_internal'), text) + # resize text: wstr field must be cleared and then recomputed + text += 'c' + abcdef = text.encode('unicode_internal') + self.assertNotEqual(abc, abcdef) + self.assertEqual(abcdef.decode('unicode_internal'), text) class StringModuleTest(unittest.TestCase): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 00:26:04 2013 From: python-checkins at python.org (barry.warsaw) Date: Thu, 21 Feb 2013 00:26:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi42KTogLSBJc3N1ZSAjMTYy?= 
=?utf-8?q?48=3A_Disable_code_execution_from_the_user=27s_home_directory_b?= =?utf-8?q?y?= Message-ID: <3ZBFLN2pxpz7LkH@mail.python.org> http://hg.python.org/cpython/rev/936621d33c38 changeset: 82288:936621d33c38 branch: 2.6 parent: 79994:4a17784f2fee user: Barry Warsaw date: Wed Feb 20 18:19:55 2013 -0500 summary: - Issue #16248: Disable code execution from the user's home directory by tkinter when the -E flag is passed to Python. Patch by Zachary Ware. files: Lib/lib-tk/Tkinter.py | 4 +++- Misc/NEWS | 3 +++ 2 files changed, 6 insertions(+), 1 deletions(-) diff --git a/Lib/lib-tk/Tkinter.py b/Lib/lib-tk/Tkinter.py --- a/Lib/lib-tk/Tkinter.py +++ b/Lib/lib-tk/Tkinter.py @@ -1643,7 +1643,9 @@ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) if useTk: self._loadtk() - self.readprofile(baseName, className) + if not sys.flags.ignore_environment: + # Issue #16248: Honor the -E flag to avoid code injection. + self.readprofile(baseName, className) def loadtk(self): if not self._tkloaded: self.tk.loadtk() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -13,6 +13,9 @@ Library ------- +- Issue #16248: Disable code execution from the user's home directory by + tkinter when the -E flag is passed to Python. Patch by Zachary Ware. + What's New in Python 2.6.8? 
=========================== -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 00:26:05 2013 From: python-checkins at python.org (barry.warsaw) Date: Thu, 21 Feb 2013 00:26:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMi42IC0+IDIuNyk6?= =?utf-8?q?_null_merge_from_2=2E6?= Message-ID: <3ZBFLP5fmZzT0g@mail.python.org> http://hg.python.org/cpython/rev/479bc802a645 changeset: 82289:479bc802a645 branch: 2.7 parent: 82283:67fa0643751d parent: 82288:936621d33c38 user: Barry Warsaw date: Wed Feb 20 18:25:17 2013 -0500 summary: null merge from 2.6 files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 01:56:11 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 01:56:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Silence_Deprec?= =?utf-8?q?ationWarnings_in_test=5Furllib=2E?= Message-ID: <3ZBHLM3M1lz7LjV@mail.python.org> http://hg.python.org/cpython/rev/1bcddc0a3765 changeset: 82290:1bcddc0a3765 branch: 3.3 parent: 82286:519ad21e12c4 user: Ezio Melotti date: Thu Feb 21 02:41:42 2013 +0200 summary: Silence DeprecationWarnings in test_urllib. 
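The Issue #16248 change above gates Tkinter's `readprofile()` call (which executes startup scripts found in the user's home directory) on `sys.flags.ignore_environment`, so running Python with `-E` also disables that code-execution path. The same guard works for any feature that should honour `-E`; a minimal sketch with a hypothetical profile-loading hook:

```python
import sys

def maybe_load_user_profile(read_profile):
    """Run the user's profile hook only when the interpreter was NOT
    started with -E (which sets sys.flags.ignore_environment),
    mirroring the Issue #16248 guard added to Tkinter.__init__."""
    if not sys.flags.ignore_environment:
        # Honor the -E flag to avoid executing user-controlled code.
        return read_profile()
    return None

result = maybe_load_user_profile(lambda: 'profile loaded')
print(result)
```

Under a normal invocation this runs the hook; under `python -E` it returns `None` without touching the profile.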
files: Lib/test/test_urllib.py | 21 +++++++++++++-------- 1 files changed, 13 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_urllib.py b/Lib/test/test_urllib.py --- a/Lib/test/test_urllib.py +++ b/Lib/test/test_urllib.py @@ -30,7 +30,10 @@ if proxies is not None: opener = urllib.request.FancyURLopener(proxies=proxies) elif not _urlopener: - opener = urllib.request.FancyURLopener() + with support.check_warnings( + ('FancyURLopener style of invoking requests is deprecated.', + DeprecationWarning)): + opener = urllib.request.FancyURLopener() _urlopener = opener else: opener = _urlopener @@ -1196,14 +1199,16 @@ class DummyURLopener(urllib.request.URLopener): def open_spam(self, url): return url + with support.check_warnings( + ('DummyURLopener style of invoking requests is deprecated.', + DeprecationWarning)): + self.assertEqual(DummyURLopener().open( + 'spam://example/ /'),'//example/%20/') - self.assertEqual(DummyURLopener().open( - 'spam://example/ /'),'//example/%20/') - - # test the safe characters are not quoted by urlopen - self.assertEqual(DummyURLopener().open( - "spam://c:|windows%/:=&?~#+!$,;'@()*[]|/path/"), - "//c:|windows%/:=&?~#+!$,;'@()*[]|/path/") + # test the safe characters are not quoted by urlopen + self.assertEqual(DummyURLopener().open( + "spam://c:|windows%/:=&?~#+!$,;'@()*[]|/path/"), + "//c:|windows%/:=&?~#+!$,;'@()*[]|/path/") # Just commented them out. # Can't really tell why keep failing in windows and sparc. 
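Both the test_unicode and test_urllib commits above wrap calls to deprecated APIs in `test.support.check_warnings`, which asserts that the expected `DeprecationWarning` is raised while keeping it out of the test output. Outside CPython's own test suite, the public `warnings.catch_warnings` machinery gives the same effect; a small sketch (the helper name and warning text are illustrative, not from the patches):

```python
import warnings

def assert_deprecated(func, *args, match=''):
    """Call func(*args), assert that it emits a DeprecationWarning
    whose message contains `match`, and return the result with the
    warning recorded rather than printed."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')  # do not let existing filters swallow it
        result = func(*args)
    if not any(issubclass(w.category, DeprecationWarning)
               and match in str(w.message) for w in caught):
        raise AssertionError('expected DeprecationWarning matching %r' % match)
    return result

def legacy_double(x):
    warnings.warn('legacy_double() is deprecated', DeprecationWarning)
    return x * 2

print(assert_deprecated(legacy_double, 21, match='deprecated'))  # prints 42
```

The `record=True` form collects warnings instead of displaying them, which is why the suites above no longer spew DeprecationWarnings while still failing if the warning disappears.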
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 01:56:12 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 01:56:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_DeprecationWarnings_silencing_in_test=5Furllib_fro?= =?utf-8?b?bSAzLjMu?= Message-ID: <3ZBHLN61CSz7LkZ@mail.python.org> http://hg.python.org/cpython/rev/3a110a506d35 changeset: 82291:3a110a506d35 parent: 82287:e2aa7ffa2005 parent: 82290:1bcddc0a3765 user: Ezio Melotti date: Thu Feb 21 02:55:56 2013 +0200 summary: Merge DeprecationWarnings silencing in test_urllib from 3.3. files: Lib/test/test_urllib.py | 21 +++++++++++++-------- 1 files changed, 13 insertions(+), 8 deletions(-) diff --git a/Lib/test/test_urllib.py b/Lib/test/test_urllib.py --- a/Lib/test/test_urllib.py +++ b/Lib/test/test_urllib.py @@ -30,7 +30,10 @@ if proxies is not None: opener = urllib.request.FancyURLopener(proxies=proxies) elif not _urlopener: - opener = urllib.request.FancyURLopener() + with support.check_warnings( + ('FancyURLopener style of invoking requests is deprecated.', + DeprecationWarning)): + opener = urllib.request.FancyURLopener() _urlopener = opener else: opener = _urlopener @@ -1271,14 +1274,16 @@ class DummyURLopener(urllib.request.URLopener): def open_spam(self, url): return url + with support.check_warnings( + ('DummyURLopener style of invoking requests is deprecated.', + DeprecationWarning)): + self.assertEqual(DummyURLopener().open( + 'spam://example/ /'),'//example/%20/') - self.assertEqual(DummyURLopener().open( - 'spam://example/ /'),'//example/%20/') - - # test the safe characters are not quoted by urlopen - self.assertEqual(DummyURLopener().open( - "spam://c:|windows%/:=&?~#+!$,;'@()*[]|/path/"), - "//c:|windows%/:=&?~#+!$,;'@()*[]|/path/") + # test the safe characters are not quoted by urlopen + self.assertEqual(DummyURLopener().open( + 
"spam://c:|windows%/:=&?~#+!$,;'@()*[]|/path/"), + "//c:|windows%/:=&?~#+!$,;'@()*[]|/path/") # Just commented them out. # Can't really tell why keep failing in windows and sparc. -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Thu Feb 21 05:58:39 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Thu, 21 Feb 2013 05:58:39 +0100 Subject: [Python-checkins] Daily reference leaks (3a110a506d35): sum=0 Message-ID: results for 3a110a506d35 on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/refloghfcvZ1', '-x'] From python-checkins at python.org Thu Feb 21 11:36:11 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 11:36:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MjY1OiBmaXgg?= =?utf-8?q?highlight_in_template_example=2E__Initial_patch_by_Berker_Peksa?= =?utf-8?q?g=2E?= Message-ID: <3ZBXCb2J1PzNtY@mail.python.org> http://hg.python.org/cpython/rev/943ea41d3ceb changeset: 82292:943ea41d3ceb branch: 2.7 parent: 82289:479bc802a645 user: Ezio Melotti date: Thu Feb 21 12:30:32 2013 +0200 summary: #17265: fix highlight in template example. Initial patch by Berker Peksag. files: Doc/library/string.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/string.rst b/Doc/library/string.rst --- a/Doc/library/string.rst +++ b/Doc/library/string.rst @@ -707,7 +707,7 @@ This is the object passed to the constructor's *template* argument. In general, you shouldn't change it, but read-only access is not enforced. -Here is an example of how to use a Template: +Here is an example of how to use a Template:: >>> from string import Template >>> s = Template('$who likes $what') @@ -716,11 +716,11 @@ >>> d = dict(who='tim') >>> Template('Give $who $100').substitute(d) Traceback (most recent call last): - [...] + ... 
ValueError: Invalid placeholder in string: line 1, col 11 >>> Template('$who likes $what').substitute(d) Traceback (most recent call last): - [...] + ... KeyError: 'what' >>> Template('$who likes $what').safe_substitute(d) 'tim likes $what' -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 11:36:12 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 11:36:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MjY1OiBmaXgg?= =?utf-8?q?highlight_in_template_example=2E__Initial_patch_by_Berker_Peksa?= =?utf-8?q?g=2E?= Message-ID: <3ZBXCc5LfQz7Lmf@mail.python.org> http://hg.python.org/cpython/rev/1b9de5788698 changeset: 82293:1b9de5788698 branch: 3.2 parent: 82277:a49bbaadce67 user: Ezio Melotti date: Thu Feb 21 12:30:32 2013 +0200 summary: #17265: fix highlight in template example. Initial patch by Berker Peksag. files: Doc/library/string.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/string.rst b/Doc/library/string.rst --- a/Doc/library/string.rst +++ b/Doc/library/string.rst @@ -688,7 +688,7 @@ This is the object passed to the constructor's *template* argument. In general, you shouldn't change it, but read-only access is not enforced. -Here is an example of how to use a Template: +Here is an example of how to use a Template:: >>> from string import Template >>> s = Template('$who likes $what') @@ -697,11 +697,11 @@ >>> d = dict(who='tim') >>> Template('Give $who $100').substitute(d) Traceback (most recent call last): - [...] + ... ValueError: Invalid placeholder in string: line 1, col 11 >>> Template('$who likes $what').substitute(d) Traceback (most recent call last): - [...] + ... 
KeyError: 'what' >>> Template('$who likes $what').safe_substitute(d) 'tim likes $what' -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 11:36:14 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 11:36:14 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317265=3A_merge_with_3=2E2=2E?= Message-ID: <3ZBXCf0mk2z7LnP@mail.python.org> http://hg.python.org/cpython/rev/0e2d89f34ae5 changeset: 82294:0e2d89f34ae5 branch: 3.3 parent: 82290:1bcddc0a3765 parent: 82293:1b9de5788698 user: Ezio Melotti date: Thu Feb 21 12:35:40 2013 +0200 summary: #17265: merge with 3.2. files: Doc/library/string.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/string.rst b/Doc/library/string.rst --- a/Doc/library/string.rst +++ b/Doc/library/string.rst @@ -688,7 +688,7 @@ This is the object passed to the constructor's *template* argument. In general, you shouldn't change it, but read-only access is not enforced. -Here is an example of how to use a Template: +Here is an example of how to use a Template:: >>> from string import Template >>> s = Template('$who likes $what') @@ -697,11 +697,11 @@ >>> d = dict(who='tim') >>> Template('Give $who $100').substitute(d) Traceback (most recent call last): - [...] + ... ValueError: Invalid placeholder in string: line 1, col 11 >>> Template('$who likes $what').substitute(d) Traceback (most recent call last): - [...] + ... 
KeyError: 'what' >>> Template('$who likes $what').safe_substitute(d) 'tim likes $what' -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 11:36:15 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 11:36:15 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MjY1OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZBXCg3Ndvz7LnH@mail.python.org> http://hg.python.org/cpython/rev/a11ddd687a0b changeset: 82295:a11ddd687a0b parent: 82291:3a110a506d35 parent: 82294:0e2d89f34ae5 user: Ezio Melotti date: Thu Feb 21 12:35:57 2013 +0200 summary: #17265: merge with 3.3. files: Doc/library/string.rst | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Doc/library/string.rst b/Doc/library/string.rst --- a/Doc/library/string.rst +++ b/Doc/library/string.rst @@ -688,7 +688,7 @@ This is the object passed to the constructor's *template* argument. In general, you shouldn't change it, but read-only access is not enforced. -Here is an example of how to use a Template: +Here is an example of how to use a Template:: >>> from string import Template >>> s = Template('$who likes $what') @@ -697,11 +697,11 @@ >>> d = dict(who='tim') >>> Template('Give $who $100').substitute(d) Traceback (most recent call last): - [...] + ... ValueError: Invalid placeholder in string: line 1, col 11 >>> Template('$who likes $what').substitute(d) Traceback (most recent call last): - [...] + ... 
KeyError: 'what' >>> Template('$who likes $what').safe_substitute(d) 'tim likes $what' -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 13:38:03 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 13:38:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MjQ4?= =?utf-8?q?=3A_Fix_os=2E*chown=28=29_testing_when_user_is_in_root_group=2E?= Message-ID: <3ZBZwC5Pyzz7Lmx@mail.python.org> http://hg.python.org/cpython/rev/7a9ea3d08f51 changeset: 82296:7a9ea3d08f51 branch: 2.7 parent: 82292:943ea41d3ceb user: Serhiy Storchaka date: Thu Feb 21 14:33:45 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user is in root group. files: Lib/test/test_posix.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -271,7 +271,7 @@ check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) - if gid != 0: + if 0 not in os.getgroups(): self.assertRaises(OSError, chown_func, first_param, -1, 0) check_stat(uid, gid) # test illegal types -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 13:38:05 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 13:38:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MjQ4?= =?utf-8?q?=3A_Fix_os=2E*chown=28=29_testing_when_user_is_in_root_group=2E?= Message-ID: <3ZBZwF1FPPz7Lp8@mail.python.org> http://hg.python.org/cpython/rev/0f7383e6ced7 changeset: 82297:0f7383e6ced7 branch: 3.2 parent: 82293:1b9de5788698 user: Serhiy Storchaka date: Thu Feb 21 14:34:36 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user is in root group. 
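The #17265 change repeated above fixes the reST highlighting: `::` turns the example into a literal/doctest block, and the traceback bodies are elided with `...` (the form the doctest runner recognises) rather than `[...]`. The example itself runs as plain code:

```python
from string import Template

s = Template('$who likes $what')
print(s.substitute(who='tim', what='kung pao'))   # prints "tim likes kung pao"

d = dict(who='tim')
try:
    Template('Give $who $100').substitute(d)      # "$1" is not a valid placeholder
except ValueError as exc:
    print('ValueError:', exc)
try:
    Template('$who likes $what').substitute(d)    # 'what' is missing from d
except KeyError as exc:
    print('KeyError:', exc)

# safe_substitute leaves unresolved placeholders in place instead of raising:
print(Template('$who likes $what').safe_substitute(d))  # prints "tim likes $what"
```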
files: Lib/test/test_posix.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -281,7 +281,7 @@ check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) - if gid != 0: + if 0 not in os.getgroups(): self.assertRaises(OSError, chown_func, first_param, -1, 0) check_stat(uid, gid) # test illegal types -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 13:38:06 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 13:38:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317248=3A_Fix_os=2E*chown=28=29_testing_when_user_is_i?= =?utf-8?q?n_root_group=2E?= Message-ID: <3ZBZwG442Xz7Lnv@mail.python.org> http://hg.python.org/cpython/rev/a4e348c4b5d3 changeset: 82298:a4e348c4b5d3 branch: 3.3 parent: 82294:0e2d89f34ae5 parent: 82297:0f7383e6ced7 user: Serhiy Storchaka date: Thu Feb 21 14:34:59 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user is in root group. 
files: Lib/test/test_posix.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -453,7 +453,7 @@ check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) - if gid != 0: + if 0 not in os.getgroups(): self.assertRaises(OSError, chown_func, first_param, -1, 0) check_stat(uid, gid) # test illegal types -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 13:38:07 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 13:38:07 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317248=3A_Fix_os=2E*chown=28=29_testing_when_use?= =?utf-8?q?r_is_in_root_group=2E?= Message-ID: <3ZBZwH6Yf6z7Lnj@mail.python.org> http://hg.python.org/cpython/rev/d49685548a7a changeset: 82299:d49685548a7a parent: 82295:a11ddd687a0b parent: 82298:a4e348c4b5d3 user: Serhiy Storchaka date: Thu Feb 21 14:35:51 2013 +0200 summary: Issue #17248: Fix os.*chown() testing when user is in root group. 
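The guard fixed in the Issue #17248 commits was testing the wrong thing: `gid != 0` looks at the file's current group, but whether `chown(path, -1, 0)` fails depends on whether the *caller* belongs to group 0. POSIX lets an unprivileged process change a file's group only to its effective gid or one of its supplementary groups, hence the `0 not in os.getgroups()` check. A hedged sketch of the predicate (POSIX-only; the helper name is hypothetical):

```python
import os

def may_change_group_to(gid):
    """Best-effort POSIX check: root may chgrp a file to any group;
    an unprivileged caller only to its effective gid or one of its
    supplementary groups (os.getgroups())."""
    if os.getuid() == 0:
        return True
    return gid == os.getgid() or gid in os.getgroups()

# The corrected test only asserts that chown(..., -1, 0) raises
# when this predicate is False for gid 0:
print('can chgrp to 0:', may_change_group_to(0))
```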
files: Lib/test/test_posix.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py --- a/Lib/test/test_posix.py +++ b/Lib/test/test_posix.py @@ -453,7 +453,7 @@ check_stat(uid, gid) self.assertRaises(OSError, chown_func, first_param, 0, -1) check_stat(uid, gid) - if gid != 0: + if 0 not in os.getgroups(): self.assertRaises(OSError, chown_func, first_param, -1, 0) check_stat(uid, gid) # test illegal types -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 14:37:39 2013 From: python-checkins at python.org (nick.coghlan) Date: Thu, 21 Feb 2013 14:37:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_analyse_the_PyPI_m?= =?utf-8?q?etrics_correctly?= Message-ID: <3ZBcDz2FQqz7LpF@mail.python.org> http://hg.python.org/peps/rev/516b67ed1a2d changeset: 4757:516b67ed1a2d user: Nick Coghlan date: Thu Feb 21 23:37:00 2013 +1000 summary: PEP 426: analyse the PyPI metrics correctly files: pep-0426.txt | 145 +++++++++++++----- pep-0426/pepsort.py | 249 ++++++++++++++++++------------- 2 files changed, 249 insertions(+), 145 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -44,8 +44,9 @@ distribution. This format is parseable by the ``email`` module with an appropriate -``email.policy.Policy()``. When ``metadata`` is a Unicode string, -```email.parser.Parser().parsestr(metadata)`` is a serviceable parser. +``email.policy.Policy()`` (see `Appendix A`_). When ``metadata`` is a +Unicode string, ```email.parser.Parser().parsestr(metadata)`` is a +serviceable parser. There are three standard locations for these metadata files: @@ -1358,25 +1359,41 @@ Finally, as the version scheme in use is dependent on the metadata version, it was deemed simpler to merge the scheme definition directly into -this PEP rather than continuing to maintain it as a separate PEP. 
This will -also allow all of the distutils-specific elements of PEP 386 to finally be -formally rejected. +this PEP rather than continuing to maintain it as a separate PEP. -The following statistics provide an analysis of the compatibility of existing -projects on PyPI with the specified versioning scheme (as of 16th February, -2013). +`Appendix B` shows detailed results of an analysis of PyPI distribution +version information, as collected on 19th February, 2013. This analysis +compares the behaviour of the explicitly ordered version schemes defined in +this PEP and PEP 386 with the de facto standard defined by the behaviour +of setuptools. These metrics are useful, as the intent of both PEPs is to +follow existing setuptools behaviour as closely as is feasible, while +still throwing exceptions for unorderable versions (rather than trying +to guess an appropriate order as setuptools does). -* Total number of distributions analysed: 28088 -* Distributions with no releases: 248 / 28088 (0.88 %) -* Fully compatible distributions: 24142 / 28088 (85.95 %) -* Compatible distributions after translation: 2830 / 28088 (10.08 %) -* Compatible distributions after filtering: 511 / 28088 (1.82 %) -* Distributions sorted differently after translation: 38 / 28088 (0.14 %) -* Distributions sorted differently without translation: 2 / 28088 (0.01 %) -* Distributions with no compatible releases: 317 / 28088 (1.13 %) +Overall, the percentage of compatible distributions improves from 97.7% +with PEP 386 to 98.7% with this PEP. While the number of projects affected +in practice was small, some of the affected projects are in widespread use +(such as Pinax and selenium). The surprising ordering discrepancy also +concerned developers and acted as an unnecessary barrier to adoption of +the new metadata standard. + +The data also shows that the pre-release sorting discrepancies are seen +only when analysing *all* versions from PyPI, rather than when analysing +public versions. 
This is largely due to the fact that PyPI normally reports +only the most recent version for each project (unless the maintainers +explicitly configure it to display additional versions). However, +installers that need to satisfy detailed version constraints often need +to look at all available versions, as they may need to retrieve an older +release. + +Even this PEP doesn't completely eliminate the sorting differences relative +to setuptools: + +* Sorts differently (after translations): 38 / 28194 (0.13 %) +* Sorts differently (no translations): 2 / 28194 (0.01 %) The two remaining sort order discrepancies picked up by the analysis are due -to a pair of projects which have published releases ending with a carriage +to a pair of projects which have PyPI releases ending with a carriage return, alongside releases with the same version number, only *without* the trailing carriage return. @@ -1390,26 +1407,6 @@ standard scheme will normalize both representations to ".devN" and sort them by the numeric component. -For comparison, here are the corresponding analysis results for PEP 386: - -* Total number of distributions analysed: 28088 -* Distributions with no releases: 248 / 28088 (0.88 %) -* Fully compatible distributions: 23874 / 28088 (85.00 %) -* Compatible distributions after translation: 2786 / 28088 (9.92 %) -* Compatible distributions after filtering: 527 / 28088 (1.88 %) -* Distributions sorted differently after translation: 96 / 28088 (0.34 %) -* Distributions sorted differently without translation: 14 / 28088 (0.05 %) -* Distributions with no compatible releases: 543 / 28088 (1.93 %) - -These figures make it clear that only a relatively small number of current -projects are affected by these changes. However, some of the affected -projects are in widespread use (such as Pinax and selenium). 
The -changes also serve to bring the standard scheme more into line with -developer's expectations, which is an important element in encouraging -adoption of the new metadata version. - -The script used for the above analysis is available at [3]_. - A more opinionated description of the versioning scheme ------------------------------------------------------- @@ -1550,8 +1547,10 @@ .. [3] Version compatibility analysis script: http://hg.python.org/peps/file/default/pep-0426/pepsort.py -Appendix -======== +Appendix A +========== + +The script used for this analysis is available at [3]_. Parsing and generating the Metadata 2.0 serialization format using Python 3.3:: @@ -1610,6 +1609,74 @@ # Correct if sys.stdout.encoding == 'UTF-8': Generator(sys.stdout, maxheaderlen=0).flatten(m) +Appendix B +========== + +Metadata v2.0 guidelines versus setuptools:: + + $ ./pepsort.py + Comparing PEP 426 version sort to setuptools. + + Analysing release versions + Compatible: 24477 / 28194 (86.82 %) + Compatible with translation: 247 / 28194 (0.88 %) + Compatible with filtering: 84 / 28194 (0.30 %) + No compatible versions: 420 / 28194 (1.49 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 2966 / 28194 (10.52 %) + + Analysing public versions + Compatible: 25600 / 28194 (90.80 %) + Compatible with translation: 1505 / 28194 (5.34 %) + Compatible with filtering: 13 / 28194 (0.05 %) + No compatible versions: 420 / 28194 (1.49 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 656 / 28194 (2.33 %) + + Analysing all versions + Compatible: 24239 / 28194 (85.97 %) + Compatible with translation: 2833 / 28194 (10.05 %) + Compatible with filtering: 513 / 28194 (1.82 %) + No compatible versions: 320 / 28194 (1.13 %) + Sorts differently (after translations): 38 / 28194 (0.13 %) + Sorts differently (no 
translations): 2 / 28194 (0.01 %) + No applicable versions: 249 / 28194 (0.88 %) + +Metadata v1.2 guidelines versus setuptools:: + + $ ./pepsort.py 386 + Comparing PEP 386 version sort to setuptools. + + Analysing release versions + Compatible: 24244 / 28194 (85.99 %) + Compatible with translation: 247 / 28194 (0.88 %) + Compatible with filtering: 84 / 28194 (0.30 %) + No compatible versions: 648 / 28194 (2.30 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 2971 / 28194 (10.54 %) + + Analysing public versions + Compatible: 25371 / 28194 (89.99 %) + Compatible with translation: 1507 / 28194 (5.35 %) + Compatible with filtering: 12 / 28194 (0.04 %) + No compatible versions: 648 / 28194 (2.30 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 656 / 28194 (2.33 %) + + Analysing all versions + Compatible: 23969 / 28194 (85.01 %) + Compatible with translation: 2789 / 28194 (9.89 %) + Compatible with filtering: 530 / 28194 (1.88 %) + No compatible versions: 547 / 28194 (1.94 %) + Sorts differently (after translations): 96 / 28194 (0.34 %) + Sorts differently (no translations): 14 / 28194 (0.05 %) + No applicable versions: 249 / 28194 (0.88 %) + + Copyright ========= diff --git a/pep-0426/pepsort.py b/pep-0426/pepsort.py --- a/pep-0426/pepsort.py +++ b/pep-0426/pepsort.py @@ -20,6 +20,8 @@ PEP426_VERSION_RE = re.compile('^(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' 
'(\.(post)(\d+))?(\.(dev)(\d+))?$') +PEP426_PRERELEASE_RE = re.compile('(a|b|c|rc|dev)\d+') + def pep426_key(s): s = s.strip() m = PEP426_VERSION_RE.match(s) @@ -60,23 +62,28 @@ return nums, pre, post, dev +def is_release_version(s): + return not bool(PEP426_PRERELEASE_RE.search(s)) + def cache_projects(cache_name): logger.info("Retrieving package data from PyPI") client = xmlrpclib.ServerProxy('http://python.org/pypi') projects = dict.fromkeys(client.list_packages()) + public = projects.copy() failed = [] for pname in projects: - time.sleep(0.1) + time.sleep(0.01) logger.debug("Retrieving versions for %s", pname) try: projects[pname] = list(client.package_releases(pname, True)) + public[pname] = list(client.package_releases(pname)) except: failed.append(pname) logger.warn("Error retrieving versions for %s", failed) with open(cache_name, 'w') as f: - json.dump(projects, f, sort_keys=True, + json.dump([projects, public], f, sort_keys=True, indent=2, separators=(',', ': ')) - return projects + return projects, public def get_projects(cache_name): try: @@ -84,11 +91,11 @@ except IOError as exc: if exc.errno != errno.ENOENT: raise - projects = cache_projects(cache_name); + projects, public = cache_projects(cache_name); else: with f: - projects = json.load(f) - return projects + projects, public = json.load(f) + return projects, public VERSION_CACHE = "pepsort_cache.json" @@ -112,109 +119,139 @@ "426": pep426_key, } +class Analysis: + + def __init__(self, title, projects, releases_only=False): + self.title = title + self.projects = projects + + num_projects = len(projects) + + compatible_projects = Category("Compatible", num_projects) + translated_projects = Category("Compatible with translation", num_projects) + filtered_projects = Category("Compatible with filtering", num_projects) + incompatible_projects = Category("No compatible versions", num_projects) + sort_error_translated_projects = Category("Sorts differently (after translations)", num_projects) + 
sort_error_compatible_projects = Category("Sorts differently (no translations)", num_projects) + null_projects = Category("No applicable versions", num_projects) + + self.categories = [ + compatible_projects, + translated_projects, + filtered_projects, + incompatible_projects, + sort_error_translated_projects, + sort_error_compatible_projects, + null_projects, + ] + + sort_key = SORT_KEYS[pepno] + sort_failures = 0 + for i, (pname, versions) in enumerate(projects.items()): + if i % 100 == 0: + sys.stderr.write('%s / %s\r' % (i, num_projects)) + sys.stderr.flush() + if not versions: + logger.debug('%-15.15s has no versions', pname) + null_projects.add(pname) + continue + # list_legacy and list_pep will contain 2-tuples + # comprising a sortable representation according to either + # the setuptools (legacy) algorithm or the PEP algorithm. + # followed by the original version string + # Go through the PEP 386/426 stuff one by one, since + # we might get failures + list_pep = [] + release_versions = set() + prerelease_versions = set() + excluded_versions = set() + translated_versions = set() + for v in versions: + s = v + try: + k = sort_key(v) + except Exception: + s = suggest_normalized_version(v) + if not s: + good = False + logger.debug('%-15.15s failed for %r, no suggestions', pname, v) + excluded_versions.add(v) + continue + else: + try: + k = sort_key(s) + except ValueError: + logger.error('%-15.15s failed for %r, with suggestion %r', + pname, v, s) + excluded_versions.add(v) + continue + logger.debug('%-15.15s translated %r to %r', pname, v, s) + translated_versions.add(v) + if is_release_version(s): + release_versions.add(v) + else: + prerelease_versions.add(v) + if releases_only: + logger.debug('%-15.15s ignoring pre-release %r', pname, s) + continue + list_pep.append((k, v)) + if releases_only and prerelease_versions and not release_versions: + logger.debug('%-15.15s has no release versions', pname) + null_projects.add(pname) + continue + if not list_pep: + 
logger.debug('%-15.15s has no compatible versions', pname) + incompatible_projects.add(pname) + continue + # The legacy approach doesn't refuse the temptation to guess, + # so it *always* gives some kind of answer + if releases_only: + excluded_versions |= prerelease_versions + accepted_versions = set(versions) - excluded_versions + list_legacy = [(legacy_key(v), v) for v in accepted_versions] + assert len(list_legacy) == len(list_pep) + sorted_legacy = sorted(list_legacy) + sorted_pep = sorted(list_pep) + sv_legacy = [t[1] for t in sorted_legacy] + sv_pep = [t[1] for t in sorted_pep] + if sv_legacy != sv_pep: + if translated_versions: + logger.debug('%-15.15s translation creates sort differences', pname) + sort_error_translated_projects.add(pname) + else: + logger.debug('%-15.15s incompatible due to sort errors', pname) + sort_error_compatible_projects.add(pname) + logger.debug('%-15.15s unequal: legacy: %s', pname, sv_legacy) + logger.debug('%-15.15s unequal: pep%s: %s', pname, pepno, sv_pep) + continue + # The project is compatible to some degree, + if excluded_versions: + logger.debug('%-15.15s has some compatible versions', pname) + filtered_projects.add(pname) + continue + if translated_versions: + logger.debug('%-15.15s is compatible after translation', pname) + translated_projects.add(pname) + continue + logger.debug('%-15.15s is fully compatible', pname) + compatible_projects.add(pname) + + def print_report(self): + print("Analysing {}".format(self.title)) + for category in self.categories: + print(" ", category) + + def main(pepno = '426'): - sort_key = SORT_KEYS[pepno] print('Comparing PEP %s version sort to setuptools.' 
% pepno) - projects = get_projects(VERSION_CACHE) - num_projects = len(projects) - - null_projects = Category("No releases", num_projects) - compatible_projects = Category("Compatible", num_projects) - translated_projects = Category("Compatible with translation", num_projects) - filtered_projects = Category("Compatible with filtering", num_projects) - sort_error_translated_projects = Category("Translations sort differently", num_projects) - sort_error_compatible_projects = Category("Incompatible due to sorting errors", num_projects) - incompatible_projects = Category("Incompatible", num_projects) - - categories = [ - null_projects, - compatible_projects, - translated_projects, - filtered_projects, - sort_error_translated_projects, - sort_error_compatible_projects, - incompatible_projects, - ] - - sort_failures = 0 - for i, (pname, versions) in enumerate(projects.items()): - if i % 100 == 0: - sys.stderr.write('%s / %s\r' % (i, num_projects)) - sys.stderr.flush() - if not versions: - logger.debug('%-15.15s has no releases', pname) - null_projects.add(pname) - continue - # list_legacy and list_pep will contain 2-tuples - # comprising a sortable representation according to either - # the setuptools (legacy) algorithm or the PEP algorithm. 
- # followed by the original version string - list_legacy = [(legacy_key(v), v) for v in versions] - # Go through the PEP 386/426 stuff one by one, since - # we might get failures - list_pep = [] - excluded_versions = set() - translated_versions = set() - for v in versions: - try: - k = sort_key(v) - except Exception: - s = suggest_normalized_version(v) - if not s: - good = False - logger.debug('%-15.15s failed for %r, no suggestions', pname, v) - excluded_versions.add(v) - continue - else: - try: - k = sort_key(s) - except ValueError: - logger.error('%-15.15s failed for %r, with suggestion %r', - pname, v, s) - excluded_versions.add(v) - continue - logger.debug('%-15.15s translated %r to %r', pname, v, s) - translated_versions.add(v) - list_pep.append((k, v)) - if not list_pep: - logger.debug('%-15.15s has no compatible releases', pname) - incompatible_projects.add(pname) - continue - # Now check the versions sort as expected - if excluded_versions: - list_legacy = [(k, v) for k, v in list_legacy - if v not in excluded_versions] - assert len(list_legacy) == len(list_pep) - sorted_legacy = sorted(list_legacy) - sorted_pep = sorted(list_pep) - sv_legacy = [t[1] for t in sorted_legacy] - sv_pep = [t[1] for t in sorted_pep] - if sv_legacy != sv_pep: - if translated_versions: - logger.debug('%-15.15s translation creates sort differences', pname) - sort_error_translated_projects.add(pname) - else: - logger.debug('%-15.15s incompatible due to sort errors', pname) - sort_error_compatible_projects.add(pname) - logger.debug('%-15.15s unequal: legacy: %s', pname, sv_legacy) - logger.debug('%-15.15s unequal: pep%s: %s', pname, pepno, sv_pep) - continue - # The project is compatible to some degree, - if excluded_versions: - logger.debug('%-15.15s has some compatible releases', pname) - filtered_projects.add(pname) - continue - if translated_versions: - logger.debug('%-15.15s is compatible after translation', pname) - translated_projects.add(pname) - continue - 
logger.debug('%-15.15s is fully compatible', pname) - compatible_projects.add(pname) - - for category in categories: - print(category) - + projects, public = get_projects(VERSION_CACHE) + print() + Analysis("release versions", public, releases_only=True).print_report() + print() + Analysis("public versions", public).print_report() + print() + Analysis("all versions", projects).print_report() # Uncomment the line below to explore differences in details # import pdb; pdb.set_trace() # Grepping the log files is also informative -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Thu Feb 21 14:51:34 2013 From: python-checkins at python.org (nick.coghlan) Date: Thu, 21 Feb 2013 14:51:34 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_avoid_ambiguous_pr?= =?utf-8?q?onouns?= Message-ID: <3ZBcY22XsXz7LpF@mail.python.org> http://hg.python.org/peps/rev/78c97770df80 changeset: 4758:78c97770df80 user: Nick Coghlan date: Thu Feb 21 23:51:24 2013 +1000 summary: PEP 426: avoid ambiguous pronouns files: pep-0426.txt | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -1380,8 +1380,8 @@ The data also shows that the pre-release sorting discrepancies are seen only when analysing *all* versions from PyPI, rather than when analysing public versions. This is largely due to the fact that PyPI normally reports -only the most recent version for each project (unless the maintainers -explicitly configure it to display additional versions). However, +only the most recent version for each project (unless maintainers +explicitly configure their project to display additional versions). However, installers that need to satisfy detailed version constraints often need to look at all available versions, as they may need to retrieve an older release. 
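[Editor's note: the ``pepsort.py`` comparison in the changesets above hinges on sorting versions by a structured key rather than by plain string comparison. A minimal sketch of that idea — the helper name ``release_key`` is mine, and unlike the real script's ``pep426_key`` it handles only plain ``N(.N)*`` release versions, not ``a``/``b``/``c``/``rc``, ``.postN`` or ``.devN`` segments:

```python
import re

# Toy version of the pep426_key idea: turn a compliant release version
# into a tuple of integers so comparison is numeric, not lexicographic.
_RELEASE_RE = re.compile(r'^\d+(\.\d+)*$')

def release_key(s):
    if not _RELEASE_RE.match(s):
        raise ValueError('not a plain release version: %r' % s)
    return tuple(int(part) for part in s.split('.'))

versions = ['1.10', '1.2', '1.9.1']
# Plain string sort puts 1.10 first, which is wrong for versions ...
assert sorted(versions) == ['1.10', '1.2', '1.9.1']
# ... while the tuple key orders releases numerically.
assert sorted(versions, key=release_key) == ['1.2', '1.9.1', '1.10']
```

The real analysis additionally falls back to a normalization step when a version string does not parse, which is where the "compatible with translation" and "compatible with filtering" categories in the report come from.]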
-- Repository URL: http://hg.python.org/peps From python-checkins at python.org Thu Feb 21 15:10:35 2013 From: python-checkins at python.org (nick.coghlan) Date: Thu, 21 Feb 2013 15:10:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_Update_the_section?= =?utf-8?q?_on_handling_old_non-compliant_versions?= Message-ID: <3ZBcyz10cPz7LqP@mail.python.org> http://hg.python.org/peps/rev/22fac210d100 changeset: 4759:22fac210d100 user: Nick Coghlan date: Fri Feb 22 00:10:25 2013 +1000 summary: PEP 426: Update the section on handling old non-compliant versions files: pep-0426.txt | 23 ++++++++++++----------- 1 files changed, 12 insertions(+), 11 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -897,19 +897,20 @@ acknowledges that the de facto standard for ordering them is the scheme used by the ``pkg_resources`` component of ``setuptools``. -Software that automatically processes distribution metadata may either -treat non-compliant version identifiers as an error, or attempt to normalize -them to the standard scheme. This means that projects using non-compliant -version identifiers may not be handled consistently across different tools, -even when correctly publishing the earlier metadata versions. +Software that automatically processes distribution metadata should attempt +to normalize non-compliant version identifiers to the standard scheme, and +ignore them if normalization fails. As any normalization scheme will be +implementation specific, this means that projects using non-compliant +version identifiers may not be handled consistently across different +tools, even when correctly publishing the earlier metadata versions. -Distribution developers can help ensure consistent automated handling by -marking non-compliant versions as "hidden" on the Python Package Index -(removing them is generally undesirable, as users may be depending on -those specific versions being available). 
+For distributions currently using non-compliant version identifiers, these +filtering guidelines mean that it should be enough for the project to +simply switch to the use of compliant version identifiers to ensure +consistent handling by automated tools. -Distribution users may also wish to remove non-compliant versions from any -private package indexes they control. +Distribution users may wish to explicitly remove non-compliant versions from +any private package indexes they control. For metadata v1.2 (PEP 345), the version ordering described in this PEP should be used in preference to the one defined in PEP 386. -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Thu Feb 21 19:31:12 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 19:31:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MjI1?= =?utf-8?q?=3A_JSON_decoder_now_counts_columns_in_the_first_line_starting?= Message-ID: <3ZBklh0WSzzSn2@mail.python.org> http://hg.python.org/cpython/rev/ce583eb0bec2 changeset: 82300:ce583eb0bec2 branch: 2.7 parent: 82296:7a9ea3d08f51 user: Serhiy Storchaka date: Thu Feb 21 20:17:54 2013 +0200 summary: Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines. files: Doc/library/json.rst | 2 +- Lib/json/__init__.py | 2 +- Lib/json/decoder.py | 2 +- Lib/json/tool.py | 2 +- Misc/NEWS | 3 +++ 5 files changed, 7 insertions(+), 4 deletions(-) diff --git a/Doc/library/json.rst b/Doc/library/json.rst --- a/Doc/library/json.rst +++ b/Doc/library/json.rst @@ -103,7 +103,7 @@ "json": "obj" } $ echo '{1.2:3.4}' | python -mjson.tool - Expecting property name enclosed in double quotes: line 1 column 1 (char 1) + Expecting property name enclosed in double quotes: line 1 column 2 (char 1) .. 
highlight:: python diff --git a/Lib/json/__init__.py b/Lib/json/__init__.py --- a/Lib/json/__init__.py +++ b/Lib/json/__init__.py @@ -95,7 +95,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ __version__ = '2.0.9' __all__ = [ diff --git a/Lib/json/decoder.py b/Lib/json/decoder.py --- a/Lib/json/decoder.py +++ b/Lib/json/decoder.py @@ -27,7 +27,7 @@ def linecol(doc, pos): lineno = doc.count('\n', 0, pos) + 1 if lineno == 1: - colno = pos + colno = pos + 1 else: colno = pos - doc.rindex('\n', 0, pos) return lineno, colno diff --git a/Lib/json/tool.py b/Lib/json/tool.py --- a/Lib/json/tool.py +++ b/Lib/json/tool.py @@ -7,7 +7,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ import sys diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -208,6 +208,9 @@ Library ------- +- Issue #17225: JSON decoder now counts columns in the first line starting + with 1, as in other lines. + - Issue #7842: backported fix for py_compile.compile() syntax error handling. 
- Issue #13153: Tkinter functions now raise TclError instead of ValueError when -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 19:31:13 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 19:31:13 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MjI1?= =?utf-8?q?=3A_JSON_decoder_now_counts_columns_in_the_first_line_starting?= Message-ID: <3ZBklj3KgSzSmt@mail.python.org> http://hg.python.org/cpython/rev/36220cf535aa changeset: 82301:36220cf535aa branch: 3.2 parent: 82297:0f7383e6ced7 user: Serhiy Storchaka date: Thu Feb 21 20:19:16 2013 +0200 summary: Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines. files: Doc/library/json.rst | 2 +- Lib/json/__init__.py | 2 +- Lib/json/decoder.py | 2 +- Lib/json/tool.py | 2 +- Misc/NEWS | 3 +++ 5 files changed, 7 insertions(+), 4 deletions(-) diff --git a/Doc/library/json.rst b/Doc/library/json.rst --- a/Doc/library/json.rst +++ b/Doc/library/json.rst @@ -102,7 +102,7 @@ "json": "obj" } $ echo '{1.2:3.4}' | python -mjson.tool - Expecting property name enclosed in double quotes: line 1 column 1 (char 1) + Expecting property name enclosed in double quotes: line 1 column 2 (char 1) .. 
highlight:: python3 diff --git a/Lib/json/__init__.py b/Lib/json/__init__.py --- a/Lib/json/__init__.py +++ b/Lib/json/__init__.py @@ -97,7 +97,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ __version__ = '2.0.9' __all__ = [ diff --git a/Lib/json/decoder.py b/Lib/json/decoder.py --- a/Lib/json/decoder.py +++ b/Lib/json/decoder.py @@ -32,7 +32,7 @@ newline = '\n' lineno = doc.count(newline, 0, pos) + 1 if lineno == 1: - colno = pos + colno = pos + 1 else: colno = pos - doc.rindex(newline, 0, pos) return lineno, colno diff --git a/Lib/json/tool.py b/Lib/json/tool.py --- a/Lib/json/tool.py +++ b/Lib/json/tool.py @@ -7,7 +7,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ import sys diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -227,6 +227,9 @@ Library ------- +- Issue #17225: JSON decoder now counts columns in the first line starting + with 1, as in other lines. + - Issue #13700: Fix byte/string handling in imaplib authentication when an authobject is specified. 
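[Editor's note: the two-character decoder change above is easiest to see in isolation. A sketch of the patched ``linecol()`` helper from ``Lib/json/decoder.py`` (reduced to the 2.7 form of the function; the assertions mirror the new expected outputs in the doctests above):

```python
def linecol(doc, pos):
    # Line numbers have always been 1-based.
    lineno = doc.count('\n', 0, pos) + 1
    if lineno == 1:
        # The Issue #17225 fix: report a 1-based column on the first
        # line too, instead of the raw 0-based character offset.
        colno = pos + 1
    else:
        # Later lines were already 1-based: pos minus the index of
        # the preceding newline.
        colno = pos - doc.rindex('\n', 0, pos)
    return lineno, colno

# In '{1.2:3.4}' the offending '1' is at char 1, i.e. line 1 column 2.
assert linecol('{1.2:3.4}', 1) == (1, 2)
# A position just after a newline is column 1 of the next line.
assert linecol('\n!', 1) == (2, 1)
```

Before the fix the two branches disagreed: column 0 on the first line but column 1 after a newline, for the same visual position.]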
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 19:31:14 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 19:31:14 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317225=3A_JSON_decoder_now_counts_columns_in_the_first?= =?utf-8?q?_line_starting?= Message-ID: <3ZBklk6PDkzSnH@mail.python.org> http://hg.python.org/cpython/rev/361ba6d4b7c9 changeset: 82302:361ba6d4b7c9 branch: 3.3 parent: 82298:a4e348c4b5d3 parent: 82301:36220cf535aa user: Serhiy Storchaka date: Thu Feb 21 20:21:21 2013 +0200 summary: Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines. files: Doc/library/json.rst | 2 +- Lib/json/__init__.py | 2 +- Lib/json/decoder.py | 2 +- Lib/json/tool.py | 2 +- Misc/NEWS | 3 +++ 5 files changed, 7 insertions(+), 4 deletions(-) diff --git a/Doc/library/json.rst b/Doc/library/json.rst --- a/Doc/library/json.rst +++ b/Doc/library/json.rst @@ -102,7 +102,7 @@ "json": "obj" } $ echo '{1.2:3.4}' | python -mjson.tool - Expecting property name enclosed in double quotes: line 1 column 1 (char 1) + Expecting property name enclosed in double quotes: line 1 column 2 (char 1) .. 
highlight:: python3 diff --git a/Lib/json/__init__.py b/Lib/json/__init__.py --- a/Lib/json/__init__.py +++ b/Lib/json/__init__.py @@ -97,7 +97,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ __version__ = '2.0.9' __all__ = [ diff --git a/Lib/json/decoder.py b/Lib/json/decoder.py --- a/Lib/json/decoder.py +++ b/Lib/json/decoder.py @@ -32,7 +32,7 @@ newline = '\n' lineno = doc.count(newline, 0, pos) + 1 if lineno == 1: - colno = pos + colno = pos + 1 else: colno = pos - doc.rindex(newline, 0, pos) return lineno, colno diff --git a/Lib/json/tool.py b/Lib/json/tool.py --- a/Lib/json/tool.py +++ b/Lib/json/tool.py @@ -7,7 +7,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ import sys diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -181,6 +181,9 @@ Library ------- +- Issue #17225: JSON decoder now counts columns in the first line starting + with 1, as in other lines. + - Issue #13700: Fix byte/string handling in imaplib authentication when an authobject is specified. 
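[Editor's note: the user-visible effect of the fix can be checked directly. This demo assumes a current CPython — 3.5 and later expose the position details on ``json.JSONDecodeError``; at the time of these commits the exception was a plain ``ValueError`` and only the message text was available:

```python
import json

try:
    json.loads('{1.2:3.4}')
except json.JSONDecodeError as exc:
    # char 1 (0-based offset) is reported as column 2 (1-based),
    # consistent with how columns are counted on later lines.
    assert (exc.lineno, exc.colno, exc.pos) == (1, 2, 1)
    assert 'line 1 column 2 (char 1)' in str(exc)
else:
    raise AssertionError('invalid JSON was accepted')
```
]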
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 19:31:16 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Thu, 21 Feb 2013 19:31:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317225=3A_JSON_decoder_now_counts_columns_in_the?= =?utf-8?q?_first_line_starting?= Message-ID: <3ZBklm3GDWzSmM@mail.python.org> http://hg.python.org/cpython/rev/69f793cc34fc changeset: 82303:69f793cc34fc parent: 82299:d49685548a7a parent: 82302:361ba6d4b7c9 user: Serhiy Storchaka date: Thu Feb 21 20:26:52 2013 +0200 summary: Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines. files: Doc/library/json.rst | 2 +- Lib/json/__init__.py | 2 +- Lib/json/decoder.py | 2 +- Lib/json/tool.py | 2 +- Lib/test/json_tests/test_fail.py | 24 +++++++++++++++----- Misc/NEWS | 3 ++ 6 files changed, 25 insertions(+), 10 deletions(-) diff --git a/Doc/library/json.rst b/Doc/library/json.rst --- a/Doc/library/json.rst +++ b/Doc/library/json.rst @@ -101,7 +101,7 @@ "json": "obj" } $ echo '{1.2:3.4}' | python -mjson.tool - Expecting property name enclosed in double quotes: line 1 column 1 (char 1) + Expecting property name enclosed in double quotes: line 1 column 2 (char 1) .. 
highlight:: python3 diff --git a/Lib/json/__init__.py b/Lib/json/__init__.py --- a/Lib/json/__init__.py +++ b/Lib/json/__init__.py @@ -96,7 +96,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ __version__ = '2.0.9' __all__ = [ diff --git a/Lib/json/decoder.py b/Lib/json/decoder.py --- a/Lib/json/decoder.py +++ b/Lib/json/decoder.py @@ -24,7 +24,7 @@ newline = '\n' lineno = doc.count(newline, 0, pos) + 1 if lineno == 1: - colno = pos + colno = pos + 1 else: colno = pos - doc.rindex(newline, 0, pos) return lineno, colno diff --git a/Lib/json/tool.py b/Lib/json/tool.py --- a/Lib/json/tool.py +++ b/Lib/json/tool.py @@ -7,7 +7,7 @@ "json": "obj" } $ echo '{ 1.2:3.4}' | python -m json.tool - Expecting property name enclosed in double quotes: line 1 column 2 (char 2) + Expecting property name enclosed in double quotes: line 1 column 3 (char 2) """ import sys diff --git a/Lib/test/json_tests/test_fail.py b/Lib/test/json_tests/test_fail.py --- a/Lib/test/json_tests/test_fail.py +++ b/Lib/test/json_tests/test_fail.py @@ -125,8 +125,8 @@ ] for data, msg, idx in test_cases: self.assertRaisesRegex(ValueError, - r'^{0}: line 1 column {1} \(char {1}\)'.format( - re.escape(msg), idx), + r'^{0}: line 1 column {1} \(char {2}\)'.format( + re.escape(msg), idx + 1, idx), self.loads, data) def test_unexpected_data(self): @@ -155,8 +155,8 @@ ] for data, msg, idx in test_cases: self.assertRaisesRegex(ValueError, - r'^{0}: line 1 column {1} \(char {1}\)'.format( - re.escape(msg), idx), + r'^{0}: line 1 column {1} \(char {2}\)'.format( + re.escape(msg), idx + 1, idx), self.loads, data) def test_extra_data(self): @@ -173,10 +173,22 @@ for data, msg, idx in test_cases: self.assertRaisesRegex(ValueError, r'^{0}: line 1 column {1} - line 1 column {2}' - r' \(char {1} - {2}\)'.format( - re.escape(msg), idx, len(data)), + r' 
\(char {3} - {4}\)'.format( + re.escape(msg), idx + 1, len(data) + 1, idx, len(data)), self.loads, data) + def test_linecol(self): + test_cases = [ + ('!', 1, 1, 0), + (' !', 1, 2, 1), + ('\n!', 2, 1, 1), + ('\n \n\n !', 4, 6, 10), + ] + for data, line, col, idx in test_cases: + self.assertRaisesRegex(ValueError, + r'^Expecting value: line {0} column {1}' + r' \(char {2}\)$'.format(line, col, idx), + self.loads, data) class TestPyFail(TestFail, PyTest): pass class TestCFail(TestFail, CTest): pass diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -260,6 +260,9 @@ Library ------- +- Issue #17225: JSON decoder now counts columns in the first line starting + with 1, as in other lines. + - Issue #6623: Added explicit DeprecationWarning for ftplib.netrc, which has been deprecated and undocumented for a long time. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 22:17:49 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 22:17:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MjU1OiB0ZXN0?= =?utf-8?q?_short-circuiting_behavior_of_any=28=29/all=28=29=2E__Patch_by_?= =?utf-8?q?Wim_Glenn=2E?= Message-ID: <3ZBpRx24J0zSy8@mail.python.org> http://hg.python.org/cpython/rev/124237eb5de9 changeset: 82304:124237eb5de9 branch: 2.7 parent: 82300:ce583eb0bec2 user: Ezio Melotti date: Thu Feb 21 23:15:40 2013 +0200 summary: #17255: test short-circuiting behavior of any()/all(). Patch by Wim Glenn. 
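[Editor's note: the new assertions in the diff below rely on ``any()`` and ``all()`` short-circuiting. A tiny stand-alone Python 3 illustration, using a hypothetical stand-in of my own for the suite's ``TestFailingBool`` helper:

```python
class FailingBool:
    # Stand-in for TestFailingBool: taking its truth value raises, so
    # these assertions only pass if any()/all() never get that far.
    def __bool__(self):
        raise RuntimeError('bool() should not have been called')

# all() returns as soon as it sees a falsy element ...
assert all([0, FailingBool()]) is False
# ... and any() as soon as it sees a truthy one.
assert any([1, FailingBool()]) is True
```
]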
files: Lib/test/test_builtin.py | 2 ++ Misc/ACKS | 1 + 2 files changed, 3 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -110,6 +110,7 @@ self.assertRaises(TypeError, all) # No args self.assertRaises(TypeError, all, [2, 4, 6], []) # Too many args self.assertEqual(all([]), True) # Empty iterator + self.assertEqual(all([0, TestFailingBool()]), False)# Short-circuit S = [50, 60] self.assertEqual(all(x > 42 for x in S), True) S = [50, 40, 60] @@ -124,6 +125,7 @@ self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args self.assertEqual(any([]), False) # Empty iterator + self.assertEqual(any([1, TestFailingBool()]), True) # Short-circuit S = [40, 60, 30] self.assertEqual(any(x > 42 for x in S), True) S = [10, 20, 30] diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -352,6 +352,7 @@ Jonathan Giddy Johannes Gijsbers Michael Gilfix +Wim Glenn Christoph Gohlke Tim Golden Chris Gonnerman -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 22:17:50 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 22:17:50 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MjU1OiB0ZXN0?= =?utf-8?q?_short-circuiting_behavior_of_any=28=29/all=28=29=2E__Patch_by_?= =?utf-8?q?Wim_Glenn=2E?= Message-ID: <3ZBpRy4n4jzStW@mail.python.org> http://hg.python.org/cpython/rev/34b7240d678b changeset: 82305:34b7240d678b branch: 3.2 parent: 82301:36220cf535aa user: Ezio Melotti date: Thu Feb 21 23:15:40 2013 +0200 summary: #17255: test short-circuiting behavior of any()/all(). Patch by Wim Glenn. 
files: Lib/test/test_builtin.py | 2 ++ Misc/ACKS | 1 + 2 files changed, 3 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -155,6 +155,7 @@ self.assertRaises(TypeError, all) # No args self.assertRaises(TypeError, all, [2, 4, 6], []) # Too many args self.assertEqual(all([]), True) # Empty iterator + self.assertEqual(all([0, TestFailingBool()]), False)# Short-circuit S = [50, 60] self.assertEqual(all(x > 42 for x in S), True) S = [50, 40, 60] @@ -169,6 +170,7 @@ self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args self.assertEqual(any([]), False) # Empty iterator + self.assertEqual(any([1, TestFailingBool()]), True) # Short-circuit S = [40, 60, 30] self.assertEqual(any(x > 42 for x in S), True) S = [10, 20, 30] diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -387,6 +387,7 @@ Johannes Gijsbers Michael Gilfix Matt Giuca +Wim Glenn Christoph Gohlke Tim Golden Guilherme Gonçalves -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 22:17:52 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 22:17:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317255=3A_merge_with_3=2E2=2E?= Message-ID: <3ZBpS00JHGzT0M@mail.python.org> http://hg.python.org/cpython/rev/576d2c885eb6 changeset: 82306:576d2c885eb6 branch: 3.3 parent: 82302:361ba6d4b7c9 parent: 82305:34b7240d678b user: Ezio Melotti date: Thu Feb 21 23:17:08 2013 +0200 summary: #17255: merge with 3.2.
files: Lib/test/test_builtin.py | 2 ++ Misc/ACKS | 1 + 2 files changed, 3 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -180,6 +180,7 @@ self.assertRaises(TypeError, all) # No args self.assertRaises(TypeError, all, [2, 4, 6], []) # Too many args self.assertEqual(all([]), True) # Empty iterator + self.assertEqual(all([0, TestFailingBool()]), False)# Short-circuit S = [50, 60] self.assertEqual(all(x > 42 for x in S), True) S = [50, 40, 60] @@ -194,6 +195,7 @@ self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args self.assertEqual(any([]), False) # Empty iterator + self.assertEqual(any([1, TestFailingBool()]), True) # Short-circuit S = [40, 60, 30] self.assertEqual(any(x > 42 for x in S), True) S = [10, 20, 30] diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -423,6 +423,7 @@ Michael Gilfix Yannick Gingras Matt Giuca +Wim Glenn Michael Goderbauer Christoph Gohlke Tim Golden -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 21 22:17:53 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 21 Feb 2013 22:17:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MjU1OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZBpS12vmczT0c@mail.python.org> http://hg.python.org/cpython/rev/c65fcedc511c changeset: 82307:c65fcedc511c parent: 82303:69f793cc34fc parent: 82306:576d2c885eb6 user: Ezio Melotti date: Thu Feb 21 23:17:34 2013 +0200 summary: #17255: merge with 3.3. 
files: Lib/test/test_builtin.py | 2 ++ Misc/ACKS | 1 + 2 files changed, 3 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -180,6 +180,7 @@ self.assertRaises(TypeError, all) # No args self.assertRaises(TypeError, all, [2, 4, 6], []) # Too many args self.assertEqual(all([]), True) # Empty iterator + self.assertEqual(all([0, TestFailingBool()]), False)# Short-circuit S = [50, 60] self.assertEqual(all(x > 42 for x in S), True) S = [50, 40, 60] @@ -194,6 +195,7 @@ self.assertRaises(TypeError, any) # No args self.assertRaises(TypeError, any, [2, 4, 6], []) # Too many args self.assertEqual(any([]), False) # Empty iterator + self.assertEqual(any([1, TestFailingBool()]), True) # Short-circuit S = [40, 60, 30] self.assertEqual(any(x > 42 for x in S), True) S = [10, 20, 30] diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -426,6 +426,7 @@ Michael Gilfix Yannick Gingras Matt Giuca +Wim Glenn Michael Goderbauer Christoph Gohlke Tim Golden -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 03:25:27 2013 From: python-checkins at python.org (chris.jerdonek) Date: Fri, 22 Feb 2013 03:25:27 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_Issue_=2317270=3A_clarify?= =?utf-8?q?_that_the_section_header_doc_convention_is_optional=2E?= Message-ID: <3ZBxGv6T0fzNlv@mail.python.org> http://hg.python.org/devguide/rev/fa06f733e2fe changeset: 600:fa06f733e2fe user: Chris Jerdonek date: Thu Feb 21 18:23:47 2013 -0800 summary: Issue #17270: clarify that the section header doc convention is optional. 
files: documenting.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/documenting.rst b/documenting.rst --- a/documenting.rst +++ b/documenting.rst @@ -419,7 +419,7 @@ Normally, there are no heading levels assigned to certain characters as the structure is determined from the succession of headings. However, for the -Python documentation, we use this convention: +Python documentation, here is a suggested convention: * ``#`` with overline, for parts * ``*`` with overline, for chapters -- Repository URL: http://hg.python.org/devguide From python-checkins at python.org Fri Feb 22 03:41:54 2013 From: python-checkins at python.org (terry.reedy) Date: Fri, 22 Feb 2013 03:41:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_At_Todd=27s_request=2C_add_my?= =?utf-8?q?self_as_co-author=2E?= Message-ID: <3ZBxdt731TzQjk@mail.python.org> http://hg.python.org/peps/rev/09062fd082d3 changeset: 4760:09062fd082d3 parent: 4756:c5abe58489d1 user: Terry Jan Reedy date: Thu Feb 21 21:37:44 2013 -0500 summary: At Todd's request, add myself as co-author. 
files: pep-0434.txt | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/pep-0434.txt b/pep-0434.txt --- a/pep-0434.txt +++ b/pep-0434.txt @@ -1,8 +1,8 @@ PEP: 434 Title: IDLE Enhancement Exception for All Branches -Version: $Revision$ -Last-Modified: $Date$ -Author: Todd Rovito +Version: +Last-Modified: +Author: Todd Rovito , Terry Reedy BDFL-Delegate: Nick Coghlan Status: Draft Type: Informational -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 22 03:41:56 2013 From: python-checkins at python.org (terry.reedy) Date: Fri, 22 Feb 2013 03:41:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps_=28merge_default_-=3E_default=29?= =?utf-8?q?=3A_Merge?= Message-ID: <3ZBxdw55vhzQjk@mail.python.org> http://hg.python.org/peps/rev/4de791923041 changeset: 4761:4de791923041 parent: 4760:09062fd082d3 parent: 4759:22fac210d100 user: Terry Jan Reedy date: Thu Feb 21 21:41:19 2013 -0500 summary: Merge files: pep-0426.txt | 168 +++++++++++++++------ pep-0426/pepsort.py | 249 ++++++++++++++++++------------- 2 files changed, 261 insertions(+), 156 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -44,8 +44,9 @@ distribution. This format is parseable by the ``email`` module with an appropriate -``email.policy.Policy()``. When ``metadata`` is a Unicode string, -```email.parser.Parser().parsestr(metadata)`` is a serviceable parser. +``email.policy.Policy()`` (see `Appendix A`_). When ``metadata`` is a +Unicode string, ```email.parser.Parser().parsestr(metadata)`` is a +serviceable parser. There are three standard locations for these metadata files: @@ -896,19 +897,20 @@ acknowledges that the de facto standard for ordering them is the scheme used by the ``pkg_resources`` component of ``setuptools``. -Software that automatically processes distribution metadata may either -treat non-compliant version identifiers as an error, or attempt to normalize -them to the standard scheme. 
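The ``email``-module round-trip that the PEP 426 hunk above calls serviceable can be sketched as follows (the ``metadata`` sample is invented, and real metadata files carry many more fields):

```python
import io
import email.parser
import email.generator

# A tiny stand-in for a metadata file in the RFC 822-style key: value format.
metadata = "Metadata-Version: 2.0\nName: demo-dist\nVersion: 1.0\n"

# Parse the Unicode string into a Message object.
msg = email.parser.Parser().parsestr(metadata)
assert msg['Name'] == 'demo-dist'

# Serialize it back out; maxheaderlen=0 disables header wrapping.
buf = io.StringIO()
email.generator.Generator(buf, maxheaderlen=0).flatten(msg)
assert 'Metadata-Version: 2.0' in buf.getvalue()
```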
This means that projects using non-compliant -version identifiers may not be handled consistently across different tools, -even when correctly publishing the earlier metadata versions. +Software that automatically processes distribution metadata should attempt +to normalize non-compliant version identifiers to the standard scheme, and +ignore them if normalization fails. As any normalization scheme will be +implementation specific, this means that projects using non-compliant +version identifiers may not be handled consistently across different +tools, even when correctly publishing the earlier metadata versions. -Distribution developers can help ensure consistent automated handling by -marking non-compliant versions as "hidden" on the Python Package Index -(removing them is generally undesirable, as users may be depending on -those specific versions being available). +For distributions currently using non-compliant version identifiers, these +filtering guidelines mean that it should be enough for the project to +simply switch to the use of compliant version identifiers to ensure +consistent handling by automated tools. -Distribution users may also wish to remove non-compliant versions from any -private package indexes they control. +Distribution users may wish to explicitly remove non-compliant versions from +any private package indexes they control. For metadata v1.2 (PEP 345), the version ordering described in this PEP should be used in preference to the one defined in PEP 386. @@ -1358,25 +1360,41 @@ Finally, as the version scheme in use is dependent on the metadata version, it was deemed simpler to merge the scheme definition directly into -this PEP rather than continuing to maintain it as a separate PEP. This will -also allow all of the distutils-specific elements of PEP 386 to finally be -formally rejected. +this PEP rather than continuing to maintain it as a separate PEP. 
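The normalize-then-ignore policy described above, as a toy sketch; ``normalize`` is a hypothetical stand-in, and real tools use something far more thorough, such as the ``suggest_normalized_version`` helper that ``pepsort.py`` relies on:

```python
import re

# Accepts "N(.N)*" optionally followed by a pre-release tag; anything
# else is treated as non-compliant even after normalization.
_COMPLIANT = re.compile(r'^\d+(\.\d+)*((a|b|c|rc)\d+)?$')

def normalize(version):
    """Hypothetical normalizer: trim, lowercase, fold '-rc' to 'rc'."""
    candidate = version.strip().lower().replace('-rc', 'rc')
    return candidate if _COMPLIANT.match(candidate) else None

def usable_versions(versions):
    # Normalize what we can, and silently drop what still fails,
    # mirroring the "ignore them if normalization fails" guideline.
    return [n for n in (normalize(v) for v in versions) if n is not None]

assert usable_versions(['1.0', '1.1-RC1', 'not-a-version']) == ['1.0', '1.1rc1']
```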
-The following statistics provide an analysis of the compatibility of existing -projects on PyPI with the specified versioning scheme (as of 16th February, -2013). +`Appendix B` shows detailed results of an analysis of PyPI distribution +version information, as collected on 19th February, 2013. This analysis +compares the behaviour of the explicitly ordered version schemes defined in +this PEP and PEP 386 with the de facto standard defined by the behaviour +of setuptools. These metrics are useful, as the intent of both PEPs is to +follow existing setuptools behaviour as closely as is feasible, while +still throwing exceptions for unorderable versions (rather than trying +to guess an appropriate order as setuptools does). -* Total number of distributions analysed: 28088 -* Distributions with no releases: 248 / 28088 (0.88 %) -* Fully compatible distributions: 24142 / 28088 (85.95 %) -* Compatible distributions after translation: 2830 / 28088 (10.08 %) -* Compatible distributions after filtering: 511 / 28088 (1.82 %) -* Distributions sorted differently after translation: 38 / 28088 (0.14 %) -* Distributions sorted differently without translation: 2 / 28088 (0.01 %) -* Distributions with no compatible releases: 317 / 28088 (1.13 %) +Overall, the percentage of compatible distributions improves from 97.7% +with PEP 386 to 98.7% with this PEP. While the number of projects affected +in practice was small, some of the affected projects are in widespread use +(such as Pinax and selenium). The surprising ordering discrepancy also +concerned developers and acted as an unnecessary barrier to adoption of +the new metadata standard. + +The data also shows that the pre-release sorting discrepancies are seen +only when analysing *all* versions from PyPI, rather than when analysing +public versions. 
This is largely due to the fact that PyPI normally reports +only the most recent version for each project (unless maintainers +explicitly configure their project to display additional versions). However, +installers that need to satisfy detailed version constraints often need +to look at all available versions, as they may need to retrieve an older +release. + +Even this PEP doesn't completely eliminate the sorting differences relative +to setuptools: + +* Sorts differently (after translations): 38 / 28194 (0.13 %) +* Sorts differently (no translations): 2 / 28194 (0.01 %) The two remaining sort order discrepancies picked up by the analysis are due -to a pair of projects which have published releases ending with a carriage +to a pair of projects which have PyPI releases ending with a carriage return, alongside releases with the same version number, only *without* the trailing carriage return. @@ -1390,26 +1408,6 @@ standard scheme will normalize both representations to ".devN" and sort them by the numeric component. -For comparison, here are the corresponding analysis results for PEP 386: - -* Total number of distributions analysed: 28088 -* Distributions with no releases: 248 / 28088 (0.88 %) -* Fully compatible distributions: 23874 / 28088 (85.00 %) -* Compatible distributions after translation: 2786 / 28088 (9.92 %) -* Compatible distributions after filtering: 527 / 28088 (1.88 %) -* Distributions sorted differently after translation: 96 / 28088 (0.34 %) -* Distributions sorted differently without translation: 14 / 28088 (0.05 %) -* Distributions with no compatible releases: 543 / 28088 (1.93 %) - -These figures make it clear that only a relatively small number of current -projects are affected by these changes. However, some of the affected -projects are in widespread use (such as Pinax and selenium). 
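The ordering property behind those comparisons can be sketched with a drastically simplified sort key; the real ``pep426_key`` in ``pepsort.py`` also handles ``.postN`` and ``.devN`` segments, so ``version_key`` here is only a toy:

```python
import re

_VER = re.compile(r'^(\d+(?:\.\d+)*)((a|b|c|rc)\d+)?$')

def version_key(version):
    """Toy sort key: release-number tuple plus a pre-release marker."""
    m = _VER.match(version.strip())
    if m is None:
        raise ValueError('unorderable version: %r' % version)
    release = tuple(int(p) for p in m.group(1).split('.'))
    if m.group(2):
        # Pre-releases sort before the final release with the same number.
        pre = (m.group(3), int(m.group(2)[len(m.group(3)):]))
    else:
        pre = ('z', 0)  # 'z' sorts after every pre-release tag
    return release, pre

versions = ['1.0', '1.0rc1', '1.0a2', '0.9']
assert sorted(versions, key=version_key) == ['0.9', '1.0a2', '1.0rc1', '1.0']
```

Raising on unparseable input is the behaviour the PEPs mandate; the legacy ``pkg_resources`` key instead guesses an order, which is the source of the discrepancies being counted.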
The -changes also serve to bring the standard scheme more into line with -developer's expectations, which is an important element in encouraging -adoption of the new metadata version. - -The script used for the above analysis is available at [3]_. - A more opinionated description of the versioning scheme ------------------------------------------------------- @@ -1550,8 +1548,10 @@ .. [3] Version compatibility analysis script: http://hg.python.org/peps/file/default/pep-0426/pepsort.py -Appendix -======== +Appendix A +========== + +The script used for this analysis is available at [3]_. Parsing and generating the Metadata 2.0 serialization format using Python 3.3:: @@ -1610,6 +1610,74 @@ # Correct if sys.stdout.encoding == 'UTF-8': Generator(sys.stdout, maxheaderlen=0).flatten(m) +Appendix B +========== + +Metadata v2.0 guidelines versus setuptools:: + + $ ./pepsort.py + Comparing PEP 426 version sort to setuptools. + + Analysing release versions + Compatible: 24477 / 28194 (86.82 %) + Compatible with translation: 247 / 28194 (0.88 %) + Compatible with filtering: 84 / 28194 (0.30 %) + No compatible versions: 420 / 28194 (1.49 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 2966 / 28194 (10.52 %) + + Analysing public versions + Compatible: 25600 / 28194 (90.80 %) + Compatible with translation: 1505 / 28194 (5.34 %) + Compatible with filtering: 13 / 28194 (0.05 %) + No compatible versions: 420 / 28194 (1.49 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 656 / 28194 (2.33 %) + + Analysing all versions + Compatible: 24239 / 28194 (85.97 %) + Compatible with translation: 2833 / 28194 (10.05 %) + Compatible with filtering: 513 / 28194 (1.82 %) + No compatible versions: 320 / 28194 (1.13 %) + Sorts differently (after translations): 38 / 28194 (0.13 %) + Sorts differently (no 
translations): 2 / 28194 (0.01 %) + No applicable versions: 249 / 28194 (0.88 %) + +Metadata v1.2 guidelines versus setuptools:: + + $ ./pepsort.py 386 + Comparing PEP 386 version sort to setuptools. + + Analysing release versions + Compatible: 24244 / 28194 (85.99 %) + Compatible with translation: 247 / 28194 (0.88 %) + Compatible with filtering: 84 / 28194 (0.30 %) + No compatible versions: 648 / 28194 (2.30 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 2971 / 28194 (10.54 %) + + Analysing public versions + Compatible: 25371 / 28194 (89.99 %) + Compatible with translation: 1507 / 28194 (5.35 %) + Compatible with filtering: 12 / 28194 (0.04 %) + No compatible versions: 648 / 28194 (2.30 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 656 / 28194 (2.33 %) + + Analysing all versions + Compatible: 23969 / 28194 (85.01 %) + Compatible with translation: 2789 / 28194 (9.89 %) + Compatible with filtering: 530 / 28194 (1.88 %) + No compatible versions: 547 / 28194 (1.94 %) + Sorts differently (after translations): 96 / 28194 (0.34 %) + Sorts differently (no translations): 14 / 28194 (0.05 %) + No applicable versions: 249 / 28194 (0.88 %) + + Copyright ========= diff --git a/pep-0426/pepsort.py b/pep-0426/pepsort.py --- a/pep-0426/pepsort.py +++ b/pep-0426/pepsort.py @@ -20,6 +20,8 @@ PEP426_VERSION_RE = re.compile('^(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' 
'(\.(post)(\d+))?(\.(dev)(\d+))?$') +PEP426_PRERELEASE_RE = re.compile('(a|b|c|rc|dev)\d+') + def pep426_key(s): s = s.strip() m = PEP426_VERSION_RE.match(s) @@ -60,23 +62,28 @@ return nums, pre, post, dev +def is_release_version(s): + return not bool(PEP426_PRERELEASE_RE.search(s)) + def cache_projects(cache_name): logger.info("Retrieving package data from PyPI") client = xmlrpclib.ServerProxy('http://python.org/pypi') projects = dict.fromkeys(client.list_packages()) + public = projects.copy() failed = [] for pname in projects: - time.sleep(0.1) + time.sleep(0.01) logger.debug("Retrieving versions for %s", pname) try: projects[pname] = list(client.package_releases(pname, True)) + public[pname] = list(client.package_releases(pname)) except: failed.append(pname) logger.warn("Error retrieving versions for %s", failed) with open(cache_name, 'w') as f: - json.dump(projects, f, sort_keys=True, + json.dump([projects, public], f, sort_keys=True, indent=2, separators=(',', ': ')) - return projects + return projects, public def get_projects(cache_name): try: @@ -84,11 +91,11 @@ except IOError as exc: if exc.errno != errno.ENOENT: raise - projects = cache_projects(cache_name); + projects, public = cache_projects(cache_name); else: with f: - projects = json.load(f) - return projects + projects, public = json.load(f) + return projects, public VERSION_CACHE = "pepsort_cache.json" @@ -112,109 +119,139 @@ "426": pep426_key, } +class Analysis: + + def __init__(self, title, projects, releases_only=False): + self.title = title + self.projects = projects + + num_projects = len(projects) + + compatible_projects = Category("Compatible", num_projects) + translated_projects = Category("Compatible with translation", num_projects) + filtered_projects = Category("Compatible with filtering", num_projects) + incompatible_projects = Category("No compatible versions", num_projects) + sort_error_translated_projects = Category("Sorts differently (after translations)", num_projects) + 
sort_error_compatible_projects = Category("Sorts differently (no translations)", num_projects) + null_projects = Category("No applicable versions", num_projects) + + self.categories = [ + compatible_projects, + translated_projects, + filtered_projects, + incompatible_projects, + sort_error_translated_projects, + sort_error_compatible_projects, + null_projects, + ] + + sort_key = SORT_KEYS[pepno] + sort_failures = 0 + for i, (pname, versions) in enumerate(projects.items()): + if i % 100 == 0: + sys.stderr.write('%s / %s\r' % (i, num_projects)) + sys.stderr.flush() + if not versions: + logger.debug('%-15.15s has no versions', pname) + null_projects.add(pname) + continue + # list_legacy and list_pep will contain 2-tuples + # comprising a sortable representation according to either + # the setuptools (legacy) algorithm or the PEP algorithm. + # followed by the original version string + # Go through the PEP 386/426 stuff one by one, since + # we might get failures + list_pep = [] + release_versions = set() + prerelease_versions = set() + excluded_versions = set() + translated_versions = set() + for v in versions: + s = v + try: + k = sort_key(v) + except Exception: + s = suggest_normalized_version(v) + if not s: + good = False + logger.debug('%-15.15s failed for %r, no suggestions', pname, v) + excluded_versions.add(v) + continue + else: + try: + k = sort_key(s) + except ValueError: + logger.error('%-15.15s failed for %r, with suggestion %r', + pname, v, s) + excluded_versions.add(v) + continue + logger.debug('%-15.15s translated %r to %r', pname, v, s) + translated_versions.add(v) + if is_release_version(s): + release_versions.add(v) + else: + prerelease_versions.add(v) + if releases_only: + logger.debug('%-15.15s ignoring pre-release %r', pname, s) + continue + list_pep.append((k, v)) + if releases_only and prerelease_versions and not release_versions: + logger.debug('%-15.15s has no release versions', pname) + null_projects.add(pname) + continue + if not list_pep: + 
logger.debug('%-15.15s has no compatible versions', pname) + incompatible_projects.add(pname) + continue + # The legacy approach doesn't refuse the temptation to guess, + # so it *always* gives some kind of answer + if releases_only: + excluded_versions |= prerelease_versions + accepted_versions = set(versions) - excluded_versions + list_legacy = [(legacy_key(v), v) for v in accepted_versions] + assert len(list_legacy) == len(list_pep) + sorted_legacy = sorted(list_legacy) + sorted_pep = sorted(list_pep) + sv_legacy = [t[1] for t in sorted_legacy] + sv_pep = [t[1] for t in sorted_pep] + if sv_legacy != sv_pep: + if translated_versions: + logger.debug('%-15.15s translation creates sort differences', pname) + sort_error_translated_projects.add(pname) + else: + logger.debug('%-15.15s incompatible due to sort errors', pname) + sort_error_compatible_projects.add(pname) + logger.debug('%-15.15s unequal: legacy: %s', pname, sv_legacy) + logger.debug('%-15.15s unequal: pep%s: %s', pname, pepno, sv_pep) + continue + # The project is compatible to some degree, + if excluded_versions: + logger.debug('%-15.15s has some compatible versions', pname) + filtered_projects.add(pname) + continue + if translated_versions: + logger.debug('%-15.15s is compatible after translation', pname) + translated_projects.add(pname) + continue + logger.debug('%-15.15s is fully compatible', pname) + compatible_projects.add(pname) + + def print_report(self): + print("Analysing {}".format(self.title)) + for category in self.categories: + print(" ", category) + + def main(pepno = '426'): - sort_key = SORT_KEYS[pepno] print('Comparing PEP %s version sort to setuptools.' 
% pepno) - projects = get_projects(VERSION_CACHE) - num_projects = len(projects) - - null_projects = Category("No releases", num_projects) - compatible_projects = Category("Compatible", num_projects) - translated_projects = Category("Compatible with translation", num_projects) - filtered_projects = Category("Compatible with filtering", num_projects) - sort_error_translated_projects = Category("Translations sort differently", num_projects) - sort_error_compatible_projects = Category("Incompatible due to sorting errors", num_projects) - incompatible_projects = Category("Incompatible", num_projects) - - categories = [ - null_projects, - compatible_projects, - translated_projects, - filtered_projects, - sort_error_translated_projects, - sort_error_compatible_projects, - incompatible_projects, - ] - - sort_failures = 0 - for i, (pname, versions) in enumerate(projects.items()): - if i % 100 == 0: - sys.stderr.write('%s / %s\r' % (i, num_projects)) - sys.stderr.flush() - if not versions: - logger.debug('%-15.15s has no releases', pname) - null_projects.add(pname) - continue - # list_legacy and list_pep will contain 2-tuples - # comprising a sortable representation according to either - # the setuptools (legacy) algorithm or the PEP algorithm. 
- # followed by the original version string - list_legacy = [(legacy_key(v), v) for v in versions] - # Go through the PEP 386/426 stuff one by one, since - # we might get failures - list_pep = [] - excluded_versions = set() - translated_versions = set() - for v in versions: - try: - k = sort_key(v) - except Exception: - s = suggest_normalized_version(v) - if not s: - good = False - logger.debug('%-15.15s failed for %r, no suggestions', pname, v) - excluded_versions.add(v) - continue - else: - try: - k = sort_key(s) - except ValueError: - logger.error('%-15.15s failed for %r, with suggestion %r', - pname, v, s) - excluded_versions.add(v) - continue - logger.debug('%-15.15s translated %r to %r', pname, v, s) - translated_versions.add(v) - list_pep.append((k, v)) - if not list_pep: - logger.debug('%-15.15s has no compatible releases', pname) - incompatible_projects.add(pname) - continue - # Now check the versions sort as expected - if excluded_versions: - list_legacy = [(k, v) for k, v in list_legacy - if v not in excluded_versions] - assert len(list_legacy) == len(list_pep) - sorted_legacy = sorted(list_legacy) - sorted_pep = sorted(list_pep) - sv_legacy = [t[1] for t in sorted_legacy] - sv_pep = [t[1] for t in sorted_pep] - if sv_legacy != sv_pep: - if translated_versions: - logger.debug('%-15.15s translation creates sort differences', pname) - sort_error_translated_projects.add(pname) - else: - logger.debug('%-15.15s incompatible due to sort errors', pname) - sort_error_compatible_projects.add(pname) - logger.debug('%-15.15s unequal: legacy: %s', pname, sv_legacy) - logger.debug('%-15.15s unequal: pep%s: %s', pname, pepno, sv_pep) - continue - # The project is compatible to some degree, - if excluded_versions: - logger.debug('%-15.15s has some compatible releases', pname) - filtered_projects.add(pname) - continue - if translated_versions: - logger.debug('%-15.15s is compatible after translation', pname) - translated_projects.add(pname) - continue - 
logger.debug('%-15.15s is fully compatible', pname) - compatible_projects.add(pname) - - for category in categories: - print(category) - + projects, public = get_projects(VERSION_CACHE) + print() + Analysis("release versions", public, releases_only=True).print_report() + print() + Analysis("public versions", public).print_report() + print() + Analysis("all versions", projects).print_report() # Uncomment the line below to explore differences in details # import pdb; pdb.set_trace() # Grepping the log files is also informative -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Fri Feb 22 04:04:54 2013 From: python-checkins at python.org (chris.jerdonek) Date: Fri, 22 Feb 2013 04:04:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MjAz?= =?utf-8?q?=3A_add_long_option_names_to_unittest_discovery_docs=2E?= Message-ID: <3ZBy8Q1BLKzSrl@mail.python.org> http://hg.python.org/cpython/rev/f4ccc5aab287 changeset: 82308:f4ccc5aab287 branch: 2.7 parent: 82304:124237eb5de9 user: Chris Jerdonek date: Thu Feb 21 18:52:12 2013 -0800 summary: Issue #17203: add long option names to unittest discovery docs. files: Doc/library/unittest.rst | 18 +++++++++--------- Misc/NEWS | 2 ++ 2 files changed, 11 insertions(+), 9 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -279,15 +279,15 @@ Verbose output -.. cmdoption:: -s directory - - Directory to start discovery ('.' default) - -.. cmdoption:: -p pattern - - Pattern to match test files ('test*.py' default) - -.. cmdoption:: -t directory +.. cmdoption:: -s, --start-directory directory + + Directory to start discovery (``.`` default) + +.. cmdoption:: -p, --pattern pattern + + Pattern to match test files (``test*.py`` default) + +.. 
cmdoption:: -t, --top-level-directory directory Top level directory of project (defaults to start directory) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -918,6 +918,8 @@ Documentation ------------- +- Issue #17203: add long option names to unittest discovery docs. + - Issue #13094: add "Why do lambdas defined in a loop with different values all return the same result?" programming FAQ. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 04:04:55 2013 From: python-checkins at python.org (chris.jerdonek) Date: Fri, 22 Feb 2013 04:04:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MjAz?= =?utf-8?q?=3A_add_long_option_names_to_unittest_discovery_docs=2E?= Message-ID: <3ZBy8R409FzSxc@mail.python.org> http://hg.python.org/cpython/rev/c0581f7be196 changeset: 82309:c0581f7be196 branch: 3.2 parent: 82305:34b7240d678b user: Chris Jerdonek date: Thu Feb 21 18:54:43 2013 -0800 summary: Issue #17203: add long option names to unittest discovery docs. files: Doc/library/unittest.rst | 6 +++--- Misc/NEWS | 2 ++ 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -304,15 +304,15 @@ Verbose output -.. cmdoption:: -s directory +.. cmdoption:: -s, --start-directory directory Directory to start discovery (``.`` default) -.. cmdoption:: -p pattern +.. cmdoption:: -p, --pattern pattern Pattern to match test files (``test*.py`` default) -.. cmdoption:: -t directory +.. cmdoption:: -t, --top-level-directory directory Top level directory of project (defaults to start directory) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1070,6 +1070,8 @@ Documentation ------------- +- Issue #17203: add long option names to unittest discovery docs. + - Issue #13094: add "Why do lambdas defined in a loop with different values all return the same result?" 
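The ``-s``/``-p``/``-t`` options documented above correspond to the ``start_dir``, ``pattern`` and ``top_level_dir`` parameters of ``unittest.TestLoader.discover()``; a self-contained sketch (the throwaway directory and test module are invented for illustration):

```python
import os
import tempfile
import textwrap
import unittest

# Build a throwaway project tree so discovery has something to find.
start_dir = tempfile.mkdtemp()
with open(os.path.join(start_dir, 'test_example.py'), 'w') as f:
    f.write(textwrap.dedent('''\
        import unittest

        class ExampleTest(unittest.TestCase):
            def test_ok(self):
                self.assertTrue(True)
    '''))

# Roughly: python -m unittest discover -s <start_dir> -p 'test*.py'
suite = unittest.TestLoader().discover(start_dir, pattern='test*.py')
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.testsRun == 1 and result.wasSuccessful()
```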
programming FAQ. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 04:04:56 2013 From: python-checkins at python.org (chris.jerdonek) Date: Fri, 22 Feb 2013 04:04:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2317203=3A_add_long_option_names_to_unittest_discovery_?= =?utf-8?q?docs=2E?= Message-ID: <3ZBy8S71SDzSxs@mail.python.org> http://hg.python.org/cpython/rev/7d1122c79985 changeset: 82310:7d1122c79985 branch: 3.3 parent: 82306:576d2c885eb6 parent: 82309:c0581f7be196 user: Chris Jerdonek date: Thu Feb 21 19:00:06 2013 -0800 summary: Issue #17203: add long option names to unittest discovery docs. files: Doc/library/unittest.rst | 6 +++--- Misc/NEWS | 2 ++ 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -263,15 +263,15 @@ Verbose output -.. cmdoption:: -s directory +.. cmdoption:: -s, --start-directory directory Directory to start discovery (``.`` default) -.. cmdoption:: -p pattern +.. cmdoption:: -p, --pattern pattern Pattern to match test files (``test*.py`` default) -.. cmdoption:: -t directory +.. cmdoption:: -t, --top-level-directory directory Top level directory of project (defaults to start directory) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -748,6 +748,8 @@ Documentation ------------- +- Issue #17203: add long option names to unittest discovery docs. + - Issue #13094: add "Why do lambdas defined in a loop with different values all return the same result?" programming FAQ. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 04:04:58 2013 From: python-checkins at python.org (chris.jerdonek) Date: Fri, 22 Feb 2013 04:04:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2317203=3A_add_long_option_names_to_unittest_disc?= =?utf-8?q?overy_docs=2E?= Message-ID: <3ZBy8V2dHCz7LjM@mail.python.org> http://hg.python.org/cpython/rev/bbe5efa9c667 changeset: 82311:bbe5efa9c667 parent: 82307:c65fcedc511c parent: 82310:7d1122c79985 user: Chris Jerdonek date: Thu Feb 21 19:02:38 2013 -0800 summary: Issue #17203: add long option names to unittest discovery docs. files: Doc/library/unittest.rst | 6 +++--- Misc/NEWS | 4 +++- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -263,15 +263,15 @@ Verbose output -.. cmdoption:: -s directory +.. cmdoption:: -s, --start-directory directory Directory to start discovery (``.`` default) -.. cmdoption:: -p pattern +.. cmdoption:: -p, --pattern pattern Pattern to match test files (``test*.py`` default) -.. cmdoption:: -t directory +.. cmdoption:: -t, --top-level-directory directory Top level directory of project (defaults to start directory) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1001,6 +1001,8 @@ Documentation ------------- +- Issue #17203: add long option names to unittest discovery docs. + - Issue #13094: add "Why do lambdas defined in a loop with different values all return the same result?" programming FAQ. @@ -1064,7 +1066,7 @@ Tools/Demos ----------- -- Issue #17028: Allowed Python arguments to be supplied to the Windows +- Issue #17028: Allowed Python arguments to be supplied to the Windows launcher. - Issue #15539: Fix a number of bugs in Tools/scripts/pindent.py. 
Now -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Fri Feb 22 05:59:53 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Fri, 22 Feb 2013 05:59:53 +0100 Subject: [Python-checkins] Daily reference leaks (c65fcedc511c): sum=0 Message-ID: results for c65fcedc511c on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogx3EiZZ', '-x'] From python-checkins at python.org Fri Feb 22 06:53:39 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 06:53:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MDM1OiB1c2Ug?= =?utf-8?q?new_style_classes_in_classmethod/staticmethod_examples=2E__Patc?= =?utf-8?q?h_by?= Message-ID: <3ZC1v74YRrzN5P@mail.python.org> http://hg.python.org/cpython/rev/30e7bc28d4f5 changeset: 82312:30e7bc28d4f5 branch: 2.7 parent: 82308:f4ccc5aab287 user: Ezio Melotti date: Fri Feb 22 07:34:52 2013 +0200 summary: #17035: use new style classes in classmethod/staticmethod examples. Patch by Berker Peksag. files: Doc/library/functions.rst | 10 ++++++---- 1 files changed, 6 insertions(+), 4 deletions(-) diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst --- a/Doc/library/functions.rst +++ b/Doc/library/functions.rst @@ -162,9 +162,10 @@ instance method receives the instance. To declare a class method, use this idiom:: - class C: + class C(object): @classmethod - def f(cls, arg1, arg2, ...): ... + def f(cls, arg1, arg2, ...): + ... The ``@classmethod`` form is a function :term:`decorator` -- see the description of function definitions in :ref:`function` for details. @@ -1303,9 +1304,10 @@ A static method does not receive an implicit first argument. To declare a static method, use this idiom:: - class C: + class C(object): @staticmethod - def f(arg1, arg2, ...): ... + def f(arg1, arg2, ...): + ... 
The ``@staticmethod`` form is a function :term:`decorator` -- see the description of function definitions in :ref:`function` for details. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 06:53:41 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 06:53:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MjU2OiBmaXgg?= =?utf-8?q?syntax_highlight_in_embedding_example=2E__Patch_by_Kushal_Das?= =?utf-8?q?=2E?= Message-ID: <3ZC1v902fyzSvX@mail.python.org> http://hg.python.org/cpython/rev/ad55dc7de7fc changeset: 82313:ad55dc7de7fc branch: 2.7 user: Ezio Melotti date: Fri Feb 22 07:38:11 2013 +0200 summary: #17256: fix syntax highlight in embedding example. Patch by Kushal Das. files: Doc/extending/embedding.rst | 8 ++++++-- 1 files changed, 6 insertions(+), 2 deletions(-) diff --git a/Doc/extending/embedding.rst b/Doc/extending/embedding.rst --- a/Doc/extending/embedding.rst +++ b/Doc/extending/embedding.rst @@ -229,7 +229,9 @@ These two lines initialize the ``numargs`` variable, and make the :func:`emb.numargs` function accessible to the embedded Python interpreter. -With these extensions, the Python script can do things like :: +With these extensions, the Python script can do things like + +.. code-block:: python import emb print "Number of arguments", emb.numargs() @@ -273,7 +275,9 @@ Determining the right options to use for any given platform can be quite difficult, but fortunately the Python configuration already has those values. To retrieve them from an installed Python interpreter, start an interactive -interpreter and have a short session like this:: +interpreter and have a short session like this + +.. 
code-block:: python >>> import distutils.sysconfig >>> distutils.sysconfig.get_config_var('LINKFORSHARED') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 06:53:42 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 06:53:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MjU2OiBmaXgg?= =?utf-8?q?syntax_highlight_in_embedding_example=2E__Patch_by_Kushal_Das?= =?utf-8?q?=2E?= Message-ID: <3ZC1vB2dSYzT0M@mail.python.org> http://hg.python.org/cpython/rev/b42e7aeb4235 changeset: 82314:b42e7aeb4235 branch: 3.2 parent: 82309:c0581f7be196 user: Ezio Melotti date: Fri Feb 22 07:46:22 2013 +0200 summary: #17256: fix syntax highlight in embedding example. Patch by Kushal Das. files: Doc/extending/embedding.rst | 12 +++++++++--- 1 files changed, 9 insertions(+), 3 deletions(-) diff --git a/Doc/extending/embedding.rst b/Doc/extending/embedding.rst --- a/Doc/extending/embedding.rst +++ b/Doc/extending/embedding.rst @@ -138,7 +138,9 @@ in ``argv[2]``. Its integer arguments are the other values of the ``argv`` array. If you :ref:`compile and link ` this program (let's call the finished executable :program:`call`), and use it to execute a Python -script, such as:: +script, such as: + +.. code-block:: python def multiply(a,b): print("Will compute", a, "times", b) @@ -238,7 +240,9 @@ These two lines initialize the ``numargs`` variable, and make the :func:`emb.numargs` function accessible to the embedded Python interpreter. -With these extensions, the Python script can do things like :: +With these extensions, the Python script can do things like + +.. code-block:: python import emb print("Number of arguments", emb.numargs()) @@ -303,7 +307,9 @@ to find its location) and compilation options. In this case, the :mod:`sysconfig` module is a useful tool to programmatically extract the configuration values that you will want to -combine together:: +combine together: + +.. 
code-block:: python >>> import sysconfig >>> sysconfig.get_config_var('LINKFORSHARED') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 06:53:43 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 06:53:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317256=3A_merge_with_3=2E2=2E?= Message-ID: <3ZC1vC5c9WzT0M@mail.python.org> http://hg.python.org/cpython/rev/3405d828ce95 changeset: 82315:3405d828ce95 branch: 3.3 parent: 82310:7d1122c79985 parent: 82314:b42e7aeb4235 user: Ezio Melotti date: Fri Feb 22 07:51:18 2013 +0200 summary: #17256: merge with 3.2. files: Doc/extending/embedding.rst | 12 +++++++++--- 1 files changed, 9 insertions(+), 3 deletions(-) diff --git a/Doc/extending/embedding.rst b/Doc/extending/embedding.rst --- a/Doc/extending/embedding.rst +++ b/Doc/extending/embedding.rst @@ -138,7 +138,9 @@ in ``argv[2]``. Its integer arguments are the other values of the ``argv`` array. If you :ref:`compile and link ` this program (let's call the finished executable :program:`call`), and use it to execute a Python -script, such as:: +script, such as: + +.. code-block:: python def multiply(a,b): print("Will compute", a, "times", b) @@ -238,7 +240,9 @@ These two lines initialize the ``numargs`` variable, and make the :func:`emb.numargs` function accessible to the embedded Python interpreter. -With these extensions, the Python script can do things like :: +With these extensions, the Python script can do things like + +.. code-block:: python import emb print("Number of arguments", emb.numargs()) @@ -303,7 +307,9 @@ to find its location) and compilation options. In this case, the :mod:`sysconfig` module is a useful tool to programmatically extract the configuration values that you will want to -combine together:: +combine together: + +.. 
code-block:: python >>> import sysconfig >>> sysconfig.get_config_var('LINKFORSHARED') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 06:53:45 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 06:53:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MjU2OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZC1vF1PXzzSrY@mail.python.org> http://hg.python.org/cpython/rev/fb50eb64e097 changeset: 82316:fb50eb64e097 parent: 82311:bbe5efa9c667 parent: 82315:3405d828ce95 user: Ezio Melotti date: Fri Feb 22 07:51:43 2013 +0200 summary: #17256: merge with 3.3. files: Doc/extending/embedding.rst | 12 +++++++++--- 1 files changed, 9 insertions(+), 3 deletions(-) diff --git a/Doc/extending/embedding.rst b/Doc/extending/embedding.rst --- a/Doc/extending/embedding.rst +++ b/Doc/extending/embedding.rst @@ -138,7 +138,9 @@ in ``argv[2]``. Its integer arguments are the other values of the ``argv`` array. If you :ref:`compile and link ` this program (let's call the finished executable :program:`call`), and use it to execute a Python -script, such as:: +script, such as: + +.. code-block:: python def multiply(a,b): print("Will compute", a, "times", b) @@ -238,7 +240,9 @@ These two lines initialize the ``numargs`` variable, and make the :func:`emb.numargs` function accessible to the embedded Python interpreter. -With these extensions, the Python script can do things like :: +With these extensions, the Python script can do things like + +.. code-block:: python import emb print("Number of arguments", emb.numargs()) @@ -303,7 +307,9 @@ to find its location) and compilation options. In this case, the :mod:`sysconfig` module is a useful tool to programmatically extract the configuration values that you will want to -combine together:: +combine together: + +.. 
code-block:: python >>> import sysconfig >>> sysconfig.get_config_var('LINKFORSHARED') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 06:53:46 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 06:53:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_the_highli?= =?utf-8?q?ght_in_another_example=2E?= Message-ID: <3ZC1vG3vpgzStj@mail.python.org> http://hg.python.org/cpython/rev/3213fe4a72e0 changeset: 82317:3213fe4a72e0 branch: 2.7 parent: 82313:ad55dc7de7fc user: Ezio Melotti date: Fri Feb 22 07:53:23 2013 +0200 summary: Fix the highlight in another example. files: Doc/extending/embedding.rst | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diff --git a/Doc/extending/embedding.rst b/Doc/extending/embedding.rst --- a/Doc/extending/embedding.rst +++ b/Doc/extending/embedding.rst @@ -140,7 +140,9 @@ This code loads a Python script using ``argv[1]``, and calls the function named in ``argv[2]``. Its integer arguments are the other values of the ``argv`` array. If you compile and link this program (let's call the finished executable -:program:`call`), and use it to execute a Python script, such as:: +:program:`call`), and use it to execute a Python script, such as: + +.. code-block:: python def multiply(a,b): print "Will compute", a, "times", b -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 07:29:47 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 07:29:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MjcxOiB1cGRh?= =?utf-8?q?te_example_in_tempfile_docs=2E?= Message-ID: <3ZC2hq3h3szSvX@mail.python.org> http://hg.python.org/cpython/rev/82343bbf8868 changeset: 82318:82343bbf8868 branch: 3.2 parent: 82314:b42e7aeb4235 user: Ezio Melotti date: Fri Feb 22 08:28:14 2013 +0200 summary: #17271: update example in tempfile docs. 
files: Doc/library/tempfile.rst | 7 +++---- 1 files changed, 3 insertions(+), 4 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -179,11 +179,10 @@ ``delete=False`` parameter:: >>> f = NamedTemporaryFile(delete=False) - >>> f - ', mode 'w+b' at 0x384698> >>> f.name - '/var/folders/5q/5qTPn6xq2RaWqk+1Ytw3-U+++TI/-Tmp-/tmpG7V1Y0' - >>> f.write("Hello World!\n") + '/tmp/tmptjujjt' + >>> f.write(b"Hello World!\n") + 13 >>> f.close() >>> os.unlink(f.name) >>> os.path.exists(f.name) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 07:29:48 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 07:29:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317271=3A_merge_with_3=2E2=2E?= Message-ID: <3ZC2hr6RfyzT1h@mail.python.org> http://hg.python.org/cpython/rev/a9993d40821f changeset: 82319:a9993d40821f branch: 3.3 parent: 82315:3405d828ce95 parent: 82318:82343bbf8868 user: Ezio Melotti date: Fri Feb 22 08:29:11 2013 +0200 summary: #17271: merge with 3.2. 
files: Doc/library/tempfile.rst | 7 +++---- 1 files changed, 3 insertions(+), 4 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -181,11 +181,10 @@ ``delete=False`` parameter:: >>> f = NamedTemporaryFile(delete=False) - >>> f - ', mode 'w+b' at 0x384698> >>> f.name - '/var/folders/5q/5qTPn6xq2RaWqk+1Ytw3-U+++TI/-Tmp-/tmpG7V1Y0' - >>> f.write("Hello World!\n") + '/tmp/tmptjujjt' + >>> f.write(b"Hello World!\n") + 13 >>> f.close() >>> os.unlink(f.name) >>> os.path.exists(f.name) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Fri Feb 22 07:29:50 2013 From: python-checkins at python.org (ezio.melotti) Date: Fri, 22 Feb 2013 07:29:50 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MjcxOiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZC2ht2LFQz7LlK@mail.python.org> http://hg.python.org/cpython/rev/fcc61327c86c changeset: 82320:fcc61327c86c parent: 82316:fb50eb64e097 parent: 82319:a9993d40821f user: Ezio Melotti date: Fri Feb 22 08:29:34 2013 +0200 summary: #17271: merge with 3.3. 
files: Doc/library/tempfile.rst | 7 +++---- 1 files changed, 3 insertions(+), 4 deletions(-) diff --git a/Doc/library/tempfile.rst b/Doc/library/tempfile.rst --- a/Doc/library/tempfile.rst +++ b/Doc/library/tempfile.rst @@ -181,11 +181,10 @@ ``delete=False`` parameter:: >>> f = NamedTemporaryFile(delete=False) - >>> f - ', mode 'w+b' at 0x384698> >>> f.name - '/var/folders/5q/5qTPn6xq2RaWqk+1Ytw3-U+++TI/-Tmp-/tmpG7V1Y0' - >>> f.write("Hello World!\n") + '/tmp/tmptjujjt' + >>> f.write(b"Hello World!\n") + 13 >>> f.close() >>> os.unlink(f.name) >>> os.path.exists(f.name) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 03:56:03 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 03:56:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE1NDM4OiBhZGQg?= =?utf-8?q?a_note_to_math=2Epow=28=29_that_suggests_using_**/pow=28=29_for?= =?utf-8?q?_integers=2E?= Message-ID: <3ZCYvl6SP3zQD6@mail.python.org> http://hg.python.org/cpython/rev/ad0712f4b3e0 changeset: 82321:ad0712f4b3e0 branch: 2.7 parent: 82317:3213fe4a72e0 user: Ezio Melotti date: Sat Feb 23 04:53:44 2013 +0200 summary: #15438: add a note to math.pow() that suggests using **/pow() for integers. Patch by Mark Dickinson. files: Doc/library/math.rst | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/Doc/library/math.rst b/Doc/library/math.rst --- a/Doc/library/math.rst +++ b/Doc/library/math.rst @@ -212,6 +212,10 @@ ``x`` is negative, and ``y`` is not an integer then ``pow(x, y)`` is undefined, and raises :exc:`ValueError`. + Unlike the built-in ``**`` operator, :func:`math.pow` converts both + its arguments to type :class:`float`. Use ``**`` or the built-in + :func:`pow` function for computing exact integer powers. + .. versionchanged:: 2.6 The outcome of ``1**nan`` and ``nan**0`` was undefined. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 03:56:05 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 03:56:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE1NDM4OiBhZGQg?= =?utf-8?q?a_note_to_math=2Epow=28=29_that_suggests_using_**/pow=28=29_for?= =?utf-8?q?_integers=2E?= Message-ID: <3ZCYvn2JzvzRjD@mail.python.org> http://hg.python.org/cpython/rev/7d95a0aa6b5a changeset: 82322:7d95a0aa6b5a branch: 3.2 parent: 82318:82343bbf8868 user: Ezio Melotti date: Sat Feb 23 04:53:44 2013 +0200 summary: #15438: add a note to math.pow() that suggests using **/pow() for integers. Patch by Mark Dickinson. files: Doc/library/math.rst | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/Doc/library/math.rst b/Doc/library/math.rst --- a/Doc/library/math.rst +++ b/Doc/library/math.rst @@ -202,6 +202,10 @@ ``x`` is negative, and ``y`` is not an integer then ``pow(x, y)`` is undefined, and raises :exc:`ValueError`. + Unlike the built-in ``**`` operator, :func:`math.pow` converts both + its arguments to type :class:`float`. Use ``**`` or the built-in + :func:`pow` function for computing exact integer powers. + .. function:: sqrt(x) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 03:56:06 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 03:56:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2315438=3A_merge_with_3=2E2=2E?= Message-ID: <3ZCYvp4xKWzRl5@mail.python.org> http://hg.python.org/cpython/rev/a305901366a6 changeset: 82323:a305901366a6 branch: 3.3 parent: 82319:a9993d40821f parent: 82322:7d95a0aa6b5a user: Ezio Melotti date: Sat Feb 23 04:55:24 2013 +0200 summary: #15438: merge with 3.2. 
files: Doc/library/math.rst | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/Doc/library/math.rst b/Doc/library/math.rst --- a/Doc/library/math.rst +++ b/Doc/library/math.rst @@ -215,6 +215,10 @@ ``x`` is negative, and ``y`` is not an integer then ``pow(x, y)`` is undefined, and raises :exc:`ValueError`. + Unlike the built-in ``**`` operator, :func:`math.pow` converts both + its arguments to type :class:`float`. Use ``**`` or the built-in + :func:`pow` function for computing exact integer powers. + .. function:: sqrt(x) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 03:56:08 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 03:56:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE1NDM4OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZCYvr0LzJzQhK@mail.python.org> http://hg.python.org/cpython/rev/e0f940829eb6 changeset: 82324:e0f940829eb6 parent: 82320:fcc61327c86c parent: 82323:a305901366a6 user: Ezio Melotti date: Sat Feb 23 04:55:48 2013 +0200 summary: #15438: merge with 3.3. files: Doc/library/math.rst | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/Doc/library/math.rst b/Doc/library/math.rst --- a/Doc/library/math.rst +++ b/Doc/library/math.rst @@ -215,6 +215,10 @@ ``x`` is negative, and ``y`` is not an integer then ``pow(x, y)`` is undefined, and raises :exc:`ValueError`. + Unlike the built-in ``**`` operator, :func:`math.pow` converts both + its arguments to type :class:`float`. Use ``**`` or the built-in + :func:`pow` function for computing exact integer powers. + .. 
function:: sqrt(x) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 04:33:17 2013 From: python-checkins at python.org (daniel.holth) Date: Sat, 23 Feb 2013 04:33:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_minor_extras_edit?= Message-ID: <3ZCZkj6r0CzRHj@mail.python.org> http://hg.python.org/peps/rev/f281aea5c0c1 changeset: 4762:f281aea5c0c1 parent: 4750:e8b120a12fc4 user: Daniel Holth date: Fri Feb 22 22:19:26 2013 -0500 summary: PEP 426: minor extras edit files: pep-0426.txt | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -338,10 +338,10 @@ Provides-Extra (multiple use) ----------------------------- -A string containing the name of an optional feature. Must be printable -ASCII, not containing whitespace, comma (,), or square brackets []. -May be used to make a dependency conditional on whether the optional -feature has been requested. +A string containing the name of an optional feature or "extra" that may +only be available when additional dependencies have been installed. Must +be printable ASCII, not containing whitespace, comma (,), or square +brackets []. See `Optional Features`_ for details on the use of this field. 
-- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 04:33:19 2013 From: python-checkins at python.org (daniel.holth) Date: Sat, 23 Feb 2013 04:33:19 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps_=28merge_default_-=3E_default=29?= =?utf-8?q?=3A_merge?= Message-ID: <3ZCZkl6CCQzRbq@mail.python.org> http://hg.python.org/peps/rev/5d4979feefc6 changeset: 4763:5d4979feefc6 parent: 4762:f281aea5c0c1 parent: 4761:4de791923041 user: Daniel Holth date: Fri Feb 22 22:19:44 2013 -0500 summary: merge files: pep-0361.txt | 4 +- pep-0426.txt | 196 +++++++++++++++++------- pep-0426/pepsort.py | 249 ++++++++++++++++++------------- pep-0427.txt | 2 +- pep-0434.txt | 85 ++++++++++ 5 files changed, 371 insertions(+), 165 deletions(-) diff --git a/pep-0361.txt b/pep-0361.txt --- a/pep-0361.txt +++ b/pep-0361.txt @@ -80,8 +80,10 @@ Mar 19 2010: Python 2.6.5 final released Aug 24 2010: Python 2.6.6 final released Jun 03 2011: Python 2.6.7 final released (security-only) + Apr 10 2012: Python 2.6.8 final released (security-only) - Python 2.6.8 (security-only) planned for Feb 10-17 2012 + Python 2.6.9 (security-only) planned for October 2013. This + will be the last Python 2.6 release. See the public `Google calendar`_ diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -44,8 +44,9 @@ distribution. This format is parseable by the ``email`` module with an appropriate -``email.policy.Policy()``. When ``metadata`` is a Unicode string, -```email.parser.Parser().parsestr(metadata)`` is a serviceable parser. +``email.policy.Policy()`` (see `Appendix A`_). When ``metadata`` is a +Unicode string, ```email.parser.Parser().parsestr(metadata)`` is a +serviceable parser. There are three standard locations for these metadata files: @@ -89,9 +90,14 @@ Version of the file format; "2.0" is the only legal value. 
-Automated tools should warn if ``Metadata-Version`` is greater than the -highest version they support, and must fail if ``Metadata-Version`` has -a greater major version than the highest version they support. +Automated tools consuming metadata should warn if ``Metadata-Version`` is +greater than the highest version they support, and must fail if +``Metadata-Version`` has a greater major version than the highest +version they support. + +For broader compatibility, automated tools may choose to produce +distribution metadata using the lowest metadata version that includes +all of the needed fields. Example:: @@ -330,7 +336,6 @@ Examples:: - Provides-Dist: ThisProject Provides-Dist: AnotherProject (3.4) Provides-Dist: virtual_package @@ -892,19 +897,20 @@ acknowledges that the de facto standard for ordering them is the scheme used by the ``pkg_resources`` component of ``setuptools``. -Software that automatically processes distribution metadata may either -treat non-compliant version identifiers as an error, or attempt to normalize -them to the standard scheme. This means that projects using non-compliant -version identifiers may not be handled consistently across different tools, -even when correctly publishing the earlier metadata versions. +Software that automatically processes distribution metadata should attempt +to normalize non-compliant version identifiers to the standard scheme, and +ignore them if normalization fails. As any normalization scheme will be +implementation specific, this means that projects using non-compliant +version identifiers may not be handled consistently across different +tools, even when correctly publishing the earlier metadata versions. -Distribution developers can help ensure consistent automated handling by -marking non-compliant versions as "hidden" on the Python Package Index -(removing them is generally undesirable, as users may be depending on -those specific versions being available). 
+For distributions currently using non-compliant version identifiers, these +filtering guidelines mean that it should be enough for the project to +simply switch to the use of compliant version identifiers to ensure +consistent handling by automated tools. -Distribution users may also wish to remove non-compliant versions from any -private package indexes they control. +Distribution users may wish to explicitly remove non-compliant versions from +any private package indexes they control. For metadata v1.2 (PEP 345), the version ordering described in this PEP should be used in preference to the one defined in PEP 386. @@ -1283,9 +1289,19 @@ metadata specifications is unlikely to give the expected behaviour. Whenever the major version number of the specification is incremented, it -is expected that deployment will take some time, as metadata consuming tools -much be updated before other tools can safely start producing the new -format. +is expected that deployment will take some time, as either metadata +consuming tools must be updated before other tools can safely start +producing the new format, or else the sdist and wheel formats, along with +the installation database definition, will need to be updated to support +provision of multiple versions of the metadata in parallel. + +Existing tools won't abide by this guideline until they're updated to +support the new metadata standard, so the new semantics will first take +effect for a hypothetical 2.x -> 3.0 transition. For the 1.x -> 2.0 +transition, it is recommended that tools continue to produce the +existing supplementary files (such as ``entry_points.txt``) in addition +to any equivalents specified using the new features of the standard +metadata format (including the formal extension mechanism). 
Standard encoding and other format clarifications @@ -1344,25 +1360,41 @@ Finally, as the version scheme in use is dependent on the metadata version, it was deemed simpler to merge the scheme definition directly into -this PEP rather than continuing to maintain it as a separate PEP. This will -also allow all of the distutils-specific elements of PEP 386 to finally be -formally rejected. +this PEP rather than continuing to maintain it as a separate PEP. -The following statistics provide an analysis of the compatibility of existing -projects on PyPI with the specified versioning scheme (as of 16th February, -2013). +`Appendix B` shows detailed results of an analysis of PyPI distribution +version information, as collected on 19th February, 2013. This analysis +compares the behaviour of the explicitly ordered version schemes defined in +this PEP and PEP 386 with the de facto standard defined by the behaviour +of setuptools. These metrics are useful, as the intent of both PEPs is to +follow existing setuptools behaviour as closely as is feasible, while +still throwing exceptions for unorderable versions (rather than trying +to guess an appropriate order as setuptools does). -* Total number of distributions analysed: 28088 -* Distributions with no releases: 248 / 28088 (0.88 %) -* Fully compatible distributions: 24142 / 28088 (85.95 %) -* Compatible distributions after translation: 2830 / 28088 (10.08 %) -* Compatible distributions after filtering: 511 / 28088 (1.82 %) -* Distributions sorted differently after translation: 38 / 28088 (0.14 %) -* Distributions sorted differently without translation: 2 / 28088 (0.01 %) -* Distributions with no compatible releases: 317 / 28088 (1.13 %) +Overall, the percentage of compatible distributions improves from 97.7% +with PEP 386 to 98.7% with this PEP. While the number of projects affected +in practice was small, some of the affected projects are in widespread use +(such as Pinax and selenium). 
The surprising ordering discrepancy also +concerned developers and acted as an unnecessary barrier to adoption of +the new metadata standard. + +The data also shows that the pre-release sorting discrepancies are seen +only when analysing *all* versions from PyPI, rather than when analysing +public versions. This is largely due to the fact that PyPI normally reports +only the most recent version for each project (unless maintainers +explicitly configure their project to display additional versions). However, +installers that need to satisfy detailed version constraints often need +to look at all available versions, as they may need to retrieve an older +release. + +Even this PEP doesn't completely eliminate the sorting differences relative +to setuptools: + +* Sorts differently (after translations): 38 / 28194 (0.13 %) +* Sorts differently (no translations): 2 / 28194 (0.01 %) The two remaining sort order discrepancies picked up by the analysis are due -to a pair of projects which have published releases ending with a carriage +to a pair of projects which have PyPI releases ending with a carriage return, alongside releases with the same version number, only *without* the trailing carriage return. @@ -1376,26 +1408,6 @@ standard scheme will normalize both representations to ".devN" and sort them by the numeric component. 
-For comparison, here are the corresponding analysis results for PEP 386: - -* Total number of distributions analysed: 28088 -* Distributions with no releases: 248 / 28088 (0.88 %) -* Fully compatible distributions: 23874 / 28088 (85.00 %) -* Compatible distributions after translation: 2786 / 28088 (9.92 %) -* Compatible distributions after filtering: 527 / 28088 (1.88 %) -* Distributions sorted differently after translation: 96 / 28088 (0.34 %) -* Distributions sorted differently without translation: 14 / 28088 (0.05 %) -* Distributions with no compatible releases: 543 / 28088 (1.93 %) - -These figures make it clear that only a relatively small number of current -projects are affected by these changes. However, some of the affected -projects are in widespread use (such as Pinax and selenium). The -changes also serve to bring the standard scheme more into line with -developer's expectations, which is an important element in encouraging -adoption of the new metadata version. - -The script used for the above analysis is available at [3]_. - A more opinionated description of the versioning scheme ------------------------------------------------------- @@ -1536,8 +1548,10 @@ .. [3] Version compatibility analysis script: http://hg.python.org/peps/file/default/pep-0426/pepsort.py -Appendix -======== +Appendix A +========== + +The script used for this analysis is available at [3]_. Parsing and generating the Metadata 2.0 serialization format using Python 3.3:: @@ -1596,6 +1610,74 @@ # Correct if sys.stdout.encoding == 'UTF-8': Generator(sys.stdout, maxheaderlen=0).flatten(m) +Appendix B +========== + +Metadata v2.0 guidelines versus setuptools:: + + $ ./pepsort.py + Comparing PEP 426 version sort to setuptools. 
+ + Analysing release versions + Compatible: 24477 / 28194 (86.82 %) + Compatible with translation: 247 / 28194 (0.88 %) + Compatible with filtering: 84 / 28194 (0.30 %) + No compatible versions: 420 / 28194 (1.49 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 2966 / 28194 (10.52 %) + + Analysing public versions + Compatible: 25600 / 28194 (90.80 %) + Compatible with translation: 1505 / 28194 (5.34 %) + Compatible with filtering: 13 / 28194 (0.05 %) + No compatible versions: 420 / 28194 (1.49 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 656 / 28194 (2.33 %) + + Analysing all versions + Compatible: 24239 / 28194 (85.97 %) + Compatible with translation: 2833 / 28194 (10.05 %) + Compatible with filtering: 513 / 28194 (1.82 %) + No compatible versions: 320 / 28194 (1.13 %) + Sorts differently (after translations): 38 / 28194 (0.13 %) + Sorts differently (no translations): 2 / 28194 (0.01 %) + No applicable versions: 249 / 28194 (0.88 %) + +Metadata v1.2 guidelines versus setuptools:: + + $ ./pepsort.py 386 + Comparing PEP 386 version sort to setuptools. 
+ + Analysing release versions + Compatible: 24244 / 28194 (85.99 %) + Compatible with translation: 247 / 28194 (0.88 %) + Compatible with filtering: 84 / 28194 (0.30 %) + No compatible versions: 648 / 28194 (2.30 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 2971 / 28194 (10.54 %) + + Analysing public versions + Compatible: 25371 / 28194 (89.99 %) + Compatible with translation: 1507 / 28194 (5.35 %) + Compatible with filtering: 12 / 28194 (0.04 %) + No compatible versions: 648 / 28194 (2.30 %) + Sorts differently (after translations): 0 / 28194 (0.00 %) + Sorts differently (no translations): 0 / 28194 (0.00 %) + No applicable versions: 656 / 28194 (2.33 %) + + Analysing all versions + Compatible: 23969 / 28194 (85.01 %) + Compatible with translation: 2789 / 28194 (9.89 %) + Compatible with filtering: 530 / 28194 (1.88 %) + No compatible versions: 547 / 28194 (1.94 %) + Sorts differently (after translations): 96 / 28194 (0.34 %) + Sorts differently (no translations): 14 / 28194 (0.05 %) + No applicable versions: 249 / 28194 (0.88 %) + + Copyright ========= diff --git a/pep-0426/pepsort.py b/pep-0426/pepsort.py --- a/pep-0426/pepsort.py +++ b/pep-0426/pepsort.py @@ -20,6 +20,8 @@ PEP426_VERSION_RE = re.compile('^(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' 
'(\.(post)(\d+))?(\.(dev)(\d+))?$') +PEP426_PRERELEASE_RE = re.compile('(a|b|c|rc|dev)\d+') + def pep426_key(s): s = s.strip() m = PEP426_VERSION_RE.match(s) @@ -60,23 +62,28 @@ return nums, pre, post, dev +def is_release_version(s): + return not bool(PEP426_PRERELEASE_RE.search(s)) + def cache_projects(cache_name): logger.info("Retrieving package data from PyPI") client = xmlrpclib.ServerProxy('http://python.org/pypi') projects = dict.fromkeys(client.list_packages()) + public = projects.copy() failed = [] for pname in projects: - time.sleep(0.1) + time.sleep(0.01) logger.debug("Retrieving versions for %s", pname) try: projects[pname] = list(client.package_releases(pname, True)) + public[pname] = list(client.package_releases(pname)) except: failed.append(pname) logger.warn("Error retrieving versions for %s", failed) with open(cache_name, 'w') as f: - json.dump(projects, f, sort_keys=True, + json.dump([projects, public], f, sort_keys=True, indent=2, separators=(',', ': ')) - return projects + return projects, public def get_projects(cache_name): try: @@ -84,11 +91,11 @@ except IOError as exc: if exc.errno != errno.ENOENT: raise - projects = cache_projects(cache_name); + projects, public = cache_projects(cache_name); else: with f: - projects = json.load(f) - return projects + projects, public = json.load(f) + return projects, public VERSION_CACHE = "pepsort_cache.json" @@ -112,109 +119,139 @@ "426": pep426_key, } +class Analysis: + + def __init__(self, title, projects, releases_only=False): + self.title = title + self.projects = projects + + num_projects = len(projects) + + compatible_projects = Category("Compatible", num_projects) + translated_projects = Category("Compatible with translation", num_projects) + filtered_projects = Category("Compatible with filtering", num_projects) + incompatible_projects = Category("No compatible versions", num_projects) + sort_error_translated_projects = Category("Sorts differently (after translations)", num_projects) + 
sort_error_compatible_projects = Category("Sorts differently (no translations)", num_projects) + null_projects = Category("No applicable versions", num_projects) + + self.categories = [ + compatible_projects, + translated_projects, + filtered_projects, + incompatible_projects, + sort_error_translated_projects, + sort_error_compatible_projects, + null_projects, + ] + + sort_key = SORT_KEYS[pepno] + sort_failures = 0 + for i, (pname, versions) in enumerate(projects.items()): + if i % 100 == 0: + sys.stderr.write('%s / %s\r' % (i, num_projects)) + sys.stderr.flush() + if not versions: + logger.debug('%-15.15s has no versions', pname) + null_projects.add(pname) + continue + # list_legacy and list_pep will contain 2-tuples + # comprising a sortable representation according to either + # the setuptools (legacy) algorithm or the PEP algorithm. + # followed by the original version string + # Go through the PEP 386/426 stuff one by one, since + # we might get failures + list_pep = [] + release_versions = set() + prerelease_versions = set() + excluded_versions = set() + translated_versions = set() + for v in versions: + s = v + try: + k = sort_key(v) + except Exception: + s = suggest_normalized_version(v) + if not s: + good = False + logger.debug('%-15.15s failed for %r, no suggestions', pname, v) + excluded_versions.add(v) + continue + else: + try: + k = sort_key(s) + except ValueError: + logger.error('%-15.15s failed for %r, with suggestion %r', + pname, v, s) + excluded_versions.add(v) + continue + logger.debug('%-15.15s translated %r to %r', pname, v, s) + translated_versions.add(v) + if is_release_version(s): + release_versions.add(v) + else: + prerelease_versions.add(v) + if releases_only: + logger.debug('%-15.15s ignoring pre-release %r', pname, s) + continue + list_pep.append((k, v)) + if releases_only and prerelease_versions and not release_versions: + logger.debug('%-15.15s has no release versions', pname) + null_projects.add(pname) + continue + if not list_pep: + 
logger.debug('%-15.15s has no compatible versions', pname) + incompatible_projects.add(pname) + continue + # The legacy approach doesn't refuse the temptation to guess, + # so it *always* gives some kind of answer + if releases_only: + excluded_versions |= prerelease_versions + accepted_versions = set(versions) - excluded_versions + list_legacy = [(legacy_key(v), v) for v in accepted_versions] + assert len(list_legacy) == len(list_pep) + sorted_legacy = sorted(list_legacy) + sorted_pep = sorted(list_pep) + sv_legacy = [t[1] for t in sorted_legacy] + sv_pep = [t[1] for t in sorted_pep] + if sv_legacy != sv_pep: + if translated_versions: + logger.debug('%-15.15s translation creates sort differences', pname) + sort_error_translated_projects.add(pname) + else: + logger.debug('%-15.15s incompatible due to sort errors', pname) + sort_error_compatible_projects.add(pname) + logger.debug('%-15.15s unequal: legacy: %s', pname, sv_legacy) + logger.debug('%-15.15s unequal: pep%s: %s', pname, pepno, sv_pep) + continue + # The project is compatible to some degree, + if excluded_versions: + logger.debug('%-15.15s has some compatible versions', pname) + filtered_projects.add(pname) + continue + if translated_versions: + logger.debug('%-15.15s is compatible after translation', pname) + translated_projects.add(pname) + continue + logger.debug('%-15.15s is fully compatible', pname) + compatible_projects.add(pname) + + def print_report(self): + print("Analysing {}".format(self.title)) + for category in self.categories: + print(" ", category) + + def main(pepno = '426'): - sort_key = SORT_KEYS[pepno] print('Comparing PEP %s version sort to setuptools.' 
% pepno) - projects = get_projects(VERSION_CACHE) - num_projects = len(projects) - - null_projects = Category("No releases", num_projects) - compatible_projects = Category("Compatible", num_projects) - translated_projects = Category("Compatible with translation", num_projects) - filtered_projects = Category("Compatible with filtering", num_projects) - sort_error_translated_projects = Category("Translations sort differently", num_projects) - sort_error_compatible_projects = Category("Incompatible due to sorting errors", num_projects) - incompatible_projects = Category("Incompatible", num_projects) - - categories = [ - null_projects, - compatible_projects, - translated_projects, - filtered_projects, - sort_error_translated_projects, - sort_error_compatible_projects, - incompatible_projects, - ] - - sort_failures = 0 - for i, (pname, versions) in enumerate(projects.items()): - if i % 100 == 0: - sys.stderr.write('%s / %s\r' % (i, num_projects)) - sys.stderr.flush() - if not versions: - logger.debug('%-15.15s has no releases', pname) - null_projects.add(pname) - continue - # list_legacy and list_pep will contain 2-tuples - # comprising a sortable representation according to either - # the setuptools (legacy) algorithm or the PEP algorithm. 
- # followed by the original version string - list_legacy = [(legacy_key(v), v) for v in versions] - # Go through the PEP 386/426 stuff one by one, since - # we might get failures - list_pep = [] - excluded_versions = set() - translated_versions = set() - for v in versions: - try: - k = sort_key(v) - except Exception: - s = suggest_normalized_version(v) - if not s: - good = False - logger.debug('%-15.15s failed for %r, no suggestions', pname, v) - excluded_versions.add(v) - continue - else: - try: - k = sort_key(s) - except ValueError: - logger.error('%-15.15s failed for %r, with suggestion %r', - pname, v, s) - excluded_versions.add(v) - continue - logger.debug('%-15.15s translated %r to %r', pname, v, s) - translated_versions.add(v) - list_pep.append((k, v)) - if not list_pep: - logger.debug('%-15.15s has no compatible releases', pname) - incompatible_projects.add(pname) - continue - # Now check the versions sort as expected - if excluded_versions: - list_legacy = [(k, v) for k, v in list_legacy - if v not in excluded_versions] - assert len(list_legacy) == len(list_pep) - sorted_legacy = sorted(list_legacy) - sorted_pep = sorted(list_pep) - sv_legacy = [t[1] for t in sorted_legacy] - sv_pep = [t[1] for t in sorted_pep] - if sv_legacy != sv_pep: - if translated_versions: - logger.debug('%-15.15s translation creates sort differences', pname) - sort_error_translated_projects.add(pname) - else: - logger.debug('%-15.15s incompatible due to sort errors', pname) - sort_error_compatible_projects.add(pname) - logger.debug('%-15.15s unequal: legacy: %s', pname, sv_legacy) - logger.debug('%-15.15s unequal: pep%s: %s', pname, pepno, sv_pep) - continue - # The project is compatible to some degree, - if excluded_versions: - logger.debug('%-15.15s has some compatible releases', pname) - filtered_projects.add(pname) - continue - if translated_versions: - logger.debug('%-15.15s is compatible after translation', pname) - translated_projects.add(pname) - continue - 
logger.debug('%-15.15s is fully compatible', pname) - compatible_projects.add(pname) - - for category in categories: - print(category) - + projects, public = get_projects(VERSION_CACHE) + print() + Analysis("release versions", public, releases_only=True).print_report() + print() + Analysis("public versions", public).print_report() + print() + Analysis("all versions", projects).print_report() # Uncomment the line below to explore differences in details # import pdb; pdb.set_trace() # Grepping the log files is also informative diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -54,7 +54,7 @@ Details ======= -Installing a wheel 'distribution-1.0.py32.none.any.whl' +Installing a wheel 'distribution-1.0-py32-none-any.whl' ------------------------------------------------------- Wheel installation notionally consists of two phases: diff --git a/pep-0434.txt b/pep-0434.txt new file mode 100644 --- /dev/null +++ b/pep-0434.txt @@ -0,0 +1,85 @@ +PEP: 434 +Title: IDLE Enhancement Exception for All Branches +Version: +Last-Modified: +Author: Todd Rovito , Terry Reedy +BDFL-Delegate: Nick Coghlan +Status: Draft +Type: Informational +Content-Type: text/x-rst +Created: 16-Feb-2013 +Python-Version: 2.7 +Post-History: 16-Feb-2013 + + +Abstract +======== + +Generally, new features are applied only to Python 3.4, but this PEP requests an +exception for IDLE [1]_. IDLE is part of the standard library and has numerous +outstanding issues [2]_. Since IDLE is often the first thing a new Python user +sees, it desperately needs to be brought up to date with modern GUI standards +across the three major platforms Linux, Mac OS X, and Windows. + + +Rationale +========= + +Python does have many advanced features, yet Python is well known for being an +easy computer language for beginners [3]_.
A major Python philosophy is +"batteries included", which is best demonstrated in Python's standard library +with many modules that are not typically included with other programming +languages [4]_. IDLE is an important "battery" in the Python toolbox because it +allows a beginner to get started quickly without downloading and configuring a +third party IDE. IDLE is primarily used as an application that ships with +Python, rather than as a library module used to build Python applications; +hence a different standard should apply to IDLE enhancements. Additional +patches to IDLE cannot break any existing program/library because IDLE is used +by humans. + + +Details +======= + +Python 2.7 accepts only bug fixes; this rule can be ignored for IDLE if the +Python development team accepts this PEP [5]_. IDLE issues will be carefully +tested on the three major platforms Linux, Mac OS X, and Windows before any +commits are made. Since IDLE is segregated to a particular part of the source +tree, this enhancement exception applies only to the Lib/idlelib directory in +Python branches >= 2.7. + + +References +========== + +.. [1] IDLE: Right Click Context Menu, Foord, Michael + (http://bugs.python.org/issue1207589) + +.. [2] Meta-issue for "Invent with Python" IDLE feedback + (http://bugs.python.org/issue13504) + +.. [3] Getting Started with Python + (http://www.python.org/about/gettingstarted/) + +.. [4] Batteries Included + (http://docs.python.org/2/tutorial/stdlib.html#batteries-included) + +.. [5] Python 2.7 Release Schedule + (http://www.python.org/dev/peps/pep-0373/) + + +Copyright +========= + +This document has been placed in the public domain. + + + +..
+ Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 04:33:21 2013 From: python-checkins at python.org (daniel.holth) Date: Sat, 23 Feb 2013 04:33:21 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_replace_implied_?= =?utf-8?q?=27version_starts_with=27_with_new_=7E=3D_operator?= Message-ID: <3ZCZkn34Q5zRbq@mail.python.org> http://hg.python.org/peps/rev/de69fe61f300 changeset: 4764:de69fe61f300 user: Daniel Holth date: Fri Feb 22 22:33:09 2013 -0500 summary: PEP 426: replace implied 'version starts with' with new ~= operator files: pep-0426.txt | 46 +++++++++++++++++++-------------------- 1 files changed, 22 insertions(+), 24 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -987,40 +987,41 @@ commas. Each version clause consists of an optional comparison operator followed by a version identifier. For example:: - 0.9, >= 1.0, != 1.3.4, < 2.0 + 0.9, >= 1.0, != 1.3.4, < 2.0, ~= 2.0 Each version identifier must be in the standard format described in `Version scheme`_. The comma (",") is equivalent to a logical **and** operator. -Comparison operators must be one of ``<``, ``>``, ``<=``, ``>=``, ``==`` -or ``!=``. +Comparison operators must be one of ``<``, ``>``, ``<=``, ``>=``, ``==``, +``!=`` or ``~=``. The ``==`` and ``!=`` operators are strict - in order to match, the version supplied must exactly match the specified version, with no -additional trailing suffix. +additional trailing suffix. When no comparison operator is provided, +it is equivalent to ``==``. 
-However, when no comparison operator is provided along with a version -identifier ``V``, it is equivalent to using the following pair of version -clauses:: +The special ``~=`` operator is equivalent to using the following pair +of version clauses:: >= V, < V+1 where ``V+1`` is the next version after ``V``, as determined by -incrementing the last numeric component in ``V`` (for example, if -``V == 1.0a3``, then ``V+1 == 1.0a4``, while if ``V == 1.0``, then -``V+1 == 1.1``). +incrementing the last numeric component in ``V`` (for example, if ``V == +1.0a3``, then ``V+1 == 1.0a4``, while if ``V == 1.0``, then ``V+1 == +1.1``). In other words, this operator matches any release that starts +with the mentioned components. This approach makes it easy to depend on a particular release series simply by naming it in a version specifier, without requiring any additional annotation. For example, the following pairs of version specifiers are equivalent:: - 2 + ~= 2 >= 2, < 3 - 3.3 + ~= 3.3 >= 3.3, < 3.4 Whitespace between a conditional operator and the following version @@ -1053,32 +1054,29 @@ Post-releases and purely numeric releases receive no special treatment - they are always included unless explicitly excluded. -Given the above rules, projects which include the ``.0`` suffix for the -first release in a series, such as ``2.5.0``, can easily refer specifically -to that version with the clause ``2.5.0``, while the clause ``2.5`` refers -to that entire series. Projects which omit the ".0" suffix for the first -release of a series, by using a version string like ``2.5`` rather than -``2.5.0``, will need to use an explicit clause like ``>= 2.5, < 2.5.1`` to -refer specifically to that initial release. +Given the above rules, projects which include the ``.0`` suffix for +the first release in a series, such as ``2.5.0``, can easily refer +specifically to that version with the clause ``==2.5.0``, while the clause +``~=2.5`` refers to that entire series. 
Some examples: -* ``Requires-Dist: zope.interface (3.1)``: any version that starts with 3.1, +* ``Requires-Dist: zope.interface (~=3.1)``: any version that starts with 3.1, excluding pre-releases. * ``Requires-Dist: zope.interface (==3.1)``: equivalent to ``Requires-Dist: zope.interface (3.1)``. -* ``Requires-Dist: zope.interface (3.1.0)``: any version that starts with +* ``Requires-Dist: zope.interface (~=3.1.0)``: any version that starts with 3.1.0, excluding pre-releases. Since that particular project doesn't use more than 3 digits, it also means "only the 3.1.0 release". * ``Requires-Python: 3``: Any Python 3 version, excluding pre-releases. * ``Requires-Python: >=2.6,<3``: Any version of Python 2.6 or 2.7, including post-releases (if they were used for Python). It excludes pre releases of Python 3. -* ``Requires-Python: 2.6.2``: Equivalent to ">=2.6.2,<2.6.3". So this includes +* ``Requires-Python: ~=2.6.2``: Equivalent to ">=2.6.2,<2.6.3". So this includes only Python 2.6.2. Of course, if Python was numbered with 4 digits, it would include all versions of the 2.6.2 series, excluding pre-releases. -* ``Requires-Python: 2.5``: Equivalent to ">=2.5,<2.6". -* ``Requires-Dist: zope.interface (3.1,!=3.1.3)``: any version that starts +* ``Requires-Python: ~=2.5``: Equivalent to ">=2.5,<2.6". +* ``Requires-Dist: zope.interface (~=3.1,!=3.1.3)``: any version that starts with 3.1, excluding pre-releases of 3.1 *and* excluding any version that starts with "3.1.3". For this particular project, this means: "any version of the 3.1 series but not 3.1.3". 
This is equivalent to: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 04:59:52 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 04:59:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MjQ5OiBjb252?= =?utf-8?q?ert_a_test_in_test=5Fcapi_to_use_unittest_and_reap_threads=2E?= Message-ID: <3ZCbKN4cnzzQhK@mail.python.org> http://hg.python.org/cpython/rev/c6ca87fbea39 changeset: 82325:c6ca87fbea39 branch: 2.7 parent: 82321:ad0712f4b3e0 user: Ezio Melotti date: Sat Feb 23 05:45:37 2013 +0200 summary: #17249: convert a test in test_capi to use unittest and reap threads. files: Lib/test/test_capi.py | 55 +++++++++++++++--------------- Misc/NEWS | 2 + 2 files changed, 29 insertions(+), 28 deletions(-) diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py --- a/Lib/test/test_capi.py +++ b/Lib/test/test_capi.py @@ -4,6 +4,7 @@ from __future__ import with_statement import sys import time +import thread import random import unittest from test import test_support @@ -96,8 +97,32 @@ self.pendingcalls_wait(l, n) + at unittest.skipUnless(threading, 'Threading required for this test.') +class TestThreadState(unittest.TestCase): + + @test_support.reap_threads + def test_thread_state(self): + # some extra thread-state tests driven via _testcapi + def target(): + idents = [] + + def callback(): + idents.append(thread.get_ident()) + + _testcapi._test_thread_state(callback) + a = b = callback + time.sleep(1) + # Check our main thread is in the list exactly 3 times. 
+ self.assertEqual(idents.count(thread.get_ident()), 3, + "Couldn't find main thread correctly in the list") + + target() + t = threading.Thread(target=target) + t.start() + t.join() + + def test_main(): - for name in dir(_testcapi): if name.startswith('test_'): test = getattr(_testcapi, name) @@ -108,33 +133,7 @@ except _testcapi.error: raise test_support.TestFailed, sys.exc_info()[1] - # some extra thread-state tests driven via _testcapi - def TestThreadState(): - if test_support.verbose: - print "auto-thread-state" - - idents = [] - - def callback(): - idents.append(thread.get_ident()) - - _testcapi._test_thread_state(callback) - a = b = callback - time.sleep(1) - # Check our main thread is in the list exactly 3 times. - if idents.count(thread.get_ident()) != 3: - raise test_support.TestFailed, \ - "Couldn't find main thread correctly in the list" - - if threading: - import thread - import time - TestThreadState() - t=threading.Thread(target=TestThreadState) - t.start() - t.join() - - test_support.run_unittest(TestPendingCalls) + test_support.run_unittest(TestPendingCalls, TestThreadState) if __name__ == "__main__": test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -798,6 +798,8 @@ Tests ----- +- Issue #17249: convert a test in test_capi to use unittest and reap threads. + - We now run both test_email.py and test_email_renamed.py when running the test_email regression test. test_email_renamed contains some tests that test_email does not. 
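[Editorial sketch] The refactoring in this commit follows a common pattern: an ad-hoc driver function becomes a proper unittest TestCase, guarded by a skip decorator and run from both the main thread and a worker thread. A minimal self-contained sketch of that pattern, using Python 3 names rather than the 2.7 `thread` module, where `record_idents` is a hypothetical stand-in for `_testcapi._test_thread_state` and the `test.support.reap_threads` decorator is omitted:

```python
import threading
import unittest

def record_idents(callback):
    # Hypothetical stand-in for _testcapi._test_thread_state:
    # invoke the callback three times from the calling thread.
    for _ in range(3):
        callback()

class TestThreadState(unittest.TestCase):
    def test_thread_state(self):
        def target():
            idents = []
            record_idents(lambda: idents.append(threading.get_ident()))
            # The calling thread's ident must appear exactly 3 times.
            self.assertEqual(idents.count(threading.get_ident()), 3)

        target()                             # exercise the main thread
        t = threading.Thread(target=target)  # and a worker thread
        t.start()
        t.join()
```

Converting the check to `assertEqual` (instead of raising `TestFailed` by hand) is what lets the runner report the failure with a message, as in the committed version.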
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 04:59:54 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 04:59:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MjQ5OiBjb252?= =?utf-8?q?ert_a_test_in_test=5Fcapi_to_use_unittest_and_reap_threads=2E?= Message-ID: <3ZCbKQ0dDxzQhd@mail.python.org> http://hg.python.org/cpython/rev/329732a1572f changeset: 82326:329732a1572f branch: 3.2 parent: 82322:7d95a0aa6b5a user: Ezio Melotti date: Sat Feb 23 05:52:46 2013 +0200 summary: #17249: convert a test in test_capi to use unittest and reap threads. files: Lib/test/test_capi.py | 56 +++++++++++++++--------------- Misc/NEWS | 2 + 2 files changed, 30 insertions(+), 28 deletions(-) diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py --- a/Lib/test/test_capi.py +++ b/Lib/test/test_capi.py @@ -8,6 +8,7 @@ import subprocess import sys import time +import _thread import unittest from test import support try: @@ -222,8 +223,34 @@ os.chdir(oldcwd) + at unittest.skipUnless(threading, 'Threading required for this test.') +class TestThreadState(unittest.TestCase): + + @support.reap_threads + def test_thread_state(self): + # some extra thread-state tests driven via _testcapi + def target(): + idents = [] + + def callback(): + idents.append(_thread.get_ident()) + + _testcapi._test_thread_state(callback) + a = b = callback + time.sleep(1) + # Check our main thread is in the list exactly 3 times. 
+ self.assertEqual(idents.count(_thread.get_ident()), 3, + "Couldn't find main thread correctly in the list") + + target() + t = threading.Thread(target=target) + t.start() + t.join() + + def test_main(): - support.run_unittest(CAPITest, TestPendingCalls, Test6012, EmbeddingTest) + support.run_unittest(CAPITest, TestPendingCalls, Test6012, + EmbeddingTest, TestThreadState) for name in dir(_testcapi): if name.startswith('test_'): @@ -232,32 +259,5 @@ print("internal", name) test() - # some extra thread-state tests driven via _testcapi - def TestThreadState(): - if support.verbose: - print("auto-thread-state") - - idents = [] - - def callback(): - idents.append(_thread.get_ident()) - - _testcapi._test_thread_state(callback) - a = b = callback - time.sleep(1) - # Check our main thread is in the list exactly 3 times. - if idents.count(_thread.get_ident()) != 3: - raise support.TestFailed( - "Couldn't find main thread correctly in the list") - - if threading: - import _thread - import time - TestThreadState() - t = threading.Thread(target=TestThreadState) - t.start() - t.join() - - if __name__ == "__main__": test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -936,6 +936,8 @@ Tests ----- +- Issue #17249: convert a test in test_capi to use unittest and reap threads. + - Issue #17041: Fix testing when Python is configured with the --without-doc-strings. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 04:59:55 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 04:59:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317249=3A_merge_with_3=2E2=2E?= Message-ID: <3ZCbKR3Y4BzRdh@mail.python.org> http://hg.python.org/cpython/rev/81f98372f893 changeset: 82327:81f98372f893 branch: 3.3 parent: 82323:a305901366a6 parent: 82326:329732a1572f user: Ezio Melotti date: Sat Feb 23 05:58:38 2013 +0200 summary: #17249: merge with 3.2. 
files: Lib/test/test_capi.py | 55 +++++++++++++++--------------- Misc/NEWS | 2 + 2 files changed, 29 insertions(+), 28 deletions(-) diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py --- a/Lib/test/test_capi.py +++ b/Lib/test/test_capi.py @@ -326,9 +326,34 @@ self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, (), {}, b'', [42]) + at unittest.skipUnless(threading, 'Threading required for this test.') +class TestThreadState(unittest.TestCase): + + @support.reap_threads + def test_thread_state(self): + # some extra thread-state tests driven via _testcapi + def target(): + idents = [] + + def callback(): + idents.append(threading.get_ident()) + + _testcapi._test_thread_state(callback) + a = b = callback + time.sleep(1) + # Check our main thread is in the list exactly 3 times. + self.assertEqual(idents.count(threading.get_ident()), 3, + "Couldn't find main thread correctly in the list") + + target() + t = threading.Thread(target=target) + t.start() + t.join() + + def test_main(): - support.run_unittest(CAPITest, TestPendingCalls, - Test6012, EmbeddingTest, SkipitemTest) + support.run_unittest(CAPITest, TestPendingCalls, Test6012, + EmbeddingTest, SkipitemTest, TestThreadState) for name in dir(_testcapi): if name.startswith('test_'): @@ -337,31 +362,5 @@ print("internal", name) test() - # some extra thread-state tests driven via _testcapi - def TestThreadState(): - if support.verbose: - print("auto-thread-state") - - idents = [] - - def callback(): - idents.append(threading.get_ident()) - - _testcapi._test_thread_state(callback) - a = b = callback - time.sleep(1) - # Check our main thread is in the list exactly 3 times. 
- if idents.count(threading.get_ident()) != 3: - raise support.TestFailed( - "Couldn't find main thread correctly in the list") - - if threading: - import time - TestThreadState() - t = threading.Thread(target=TestThreadState) - t.start() - t.join() - - if __name__ == "__main__": test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -620,6 +620,8 @@ Tests ----- +- Issue #17249: convert a test in test_capi to use unittest and reap threads. + - Issue #17041: Fix testing when Python is configured with the --without-doc-strings. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 04:59:56 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 04:59:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MjQ5OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZCbKS6F6fzRj4@mail.python.org> http://hg.python.org/cpython/rev/f716a178b4e1 changeset: 82328:f716a178b4e1 parent: 82324:e0f940829eb6 parent: 82327:81f98372f893 user: Ezio Melotti date: Sat Feb 23 05:59:37 2013 +0200 summary: #17249: merge with 3.3. files: Lib/test/test_capi.py | 55 +++++++++++++++--------------- Misc/NEWS | 2 + 2 files changed, 29 insertions(+), 28 deletions(-) diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py --- a/Lib/test/test_capi.py +++ b/Lib/test/test_capi.py @@ -326,9 +326,34 @@ self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, (), {}, b'', [42]) + at unittest.skipUnless(threading, 'Threading required for this test.') +class TestThreadState(unittest.TestCase): + + @support.reap_threads + def test_thread_state(self): + # some extra thread-state tests driven via _testcapi + def target(): + idents = [] + + def callback(): + idents.append(threading.get_ident()) + + _testcapi._test_thread_state(callback) + a = b = callback + time.sleep(1) + # Check our main thread is in the list exactly 3 times. 
+ self.assertEqual(idents.count(threading.get_ident()), 3, + "Couldn't find main thread correctly in the list") + + target() + t = threading.Thread(target=target) + t.start() + t.join() + + def test_main(): - support.run_unittest(CAPITest, TestPendingCalls, - Test6012, EmbeddingTest, SkipitemTest) + support.run_unittest(CAPITest, TestPendingCalls, Test6012, + EmbeddingTest, SkipitemTest, TestThreadState) for name in dir(_testcapi): if name.startswith('test_'): @@ -337,31 +362,5 @@ print("internal", name) test() - # some extra thread-state tests driven via _testcapi - def TestThreadState(): - if support.verbose: - print("auto-thread-state") - - idents = [] - - def callback(): - idents.append(threading.get_ident()) - - _testcapi._test_thread_state(callback) - a = b = callback - time.sleep(1) - # Check our main thread is in the list exactly 3 times. - if idents.count(threading.get_ident()) != 3: - raise support.TestFailed( - "Couldn't find main thread correctly in the list") - - if threading: - import time - TestThreadState() - t = threading.Thread(target=TestThreadState) - t.start() - t.join() - - if __name__ == "__main__": test_main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -857,6 +857,8 @@ Tests ----- +- Issue #17249: convert a test in test_capi to use unittest and reap threads. + - Issue #17107: Test client-side SNI support in urllib.request thanks to the new server-side SNI support in the ssl module. Initial patch by Daniel Black. 
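[Editorial sketch] The `@unittest.skipUnless(threading, ...)` guard used throughout these patches only works because the import itself is wrapped in try/except, so the test module still loads on builds without thread support (this is exactly what the follow-up "check for the availability of the thread module" commits tighten up). A minimal sketch of that guarded-import pattern, with a trivial illustrative test body in place of the real `_testcapi` calls:

```python
import unittest

# Optional dependency: fall back to None so this module still imports
# on builds without thread support, letting the skip condition fire.
try:
    import threading
except ImportError:
    threading = None

@unittest.skipUnless(threading, 'Threading required for this test.')
class ThreadStateSmokeTest(unittest.TestCase):
    def test_get_ident_is_int(self):
        # Illustrative check only; the real tests drive _testcapi here.
        self.assertIsInstance(threading.get_ident(), int)
```

If `threading` fails to import, the whole class is reported as skipped rather than erroring at collection time.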
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 05:53:57 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 05:53:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3MjQ5OiBjaGVj?= =?utf-8?q?k_for_the_availability_of_the_thread_module=2E?= Message-ID: <3ZCcWn28STzQLp@mail.python.org> http://hg.python.org/cpython/rev/041d0f68c67d changeset: 82329:041d0f68c67d branch: 2.7 parent: 82325:c6ca87fbea39 user: Ezio Melotti date: Sat Feb 23 06:33:51 2013 +0200 summary: #17249: check for the availability of the thread module. files: Lib/test/test_capi.py | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py --- a/Lib/test/test_capi.py +++ b/Lib/test/test_capi.py @@ -4,13 +4,14 @@ from __future__ import with_statement import sys import time -import thread import random import unittest from test import test_support try: + import thread import threading except ImportError: + thread = None threading = None import _testcapi @@ -97,7 +98,7 @@ self.pendingcalls_wait(l, n) - at unittest.skipUnless(threading, 'Threading required for this test.') + at unittest.skipUnless(threading and thread, 'Threading required for this test.') class TestThreadState(unittest.TestCase): @test_support.reap_threads -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 05:53:58 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 05:53:58 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MjQ5OiBjaGVj?= =?utf-8?q?k_for_the_availability_of_the_thread_module=2E?= Message-ID: <3ZCcWp4cDCzQPd@mail.python.org> http://hg.python.org/cpython/rev/01fdf24c9d75 changeset: 82330:01fdf24c9d75 branch: 3.2 parent: 82326:329732a1572f user: Ezio Melotti date: Sat Feb 23 06:42:19 2013 +0200 summary: #17249: check for the availability of the thread module. 
files: Lib/test/test_capi.py | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py --- a/Lib/test/test_capi.py +++ b/Lib/test/test_capi.py @@ -8,7 +8,6 @@ import subprocess import sys import time -import _thread import unittest from test import support try: @@ -16,8 +15,10 @@ except ImportError: _posixsubprocess = None try: + import _thread import threading except ImportError: + _thread = None threading = None import _testcapi @@ -223,7 +224,7 @@ os.chdir(oldcwd) - at unittest.skipUnless(threading, 'Threading required for this test.') + at unittest.skipUnless(threading and _thread, 'Threading required for this test.') class TestThreadState(unittest.TestCase): @support.reap_threads -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 05:54:00 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 05:54:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317249=3A_null_merge=2E?= Message-ID: <3ZCcWr0JBxzR08@mail.python.org> http://hg.python.org/cpython/rev/eb9edac39751 changeset: 82331:eb9edac39751 branch: 3.3 parent: 82327:81f98372f893 parent: 82330:01fdf24c9d75 user: Ezio Melotti date: Sat Feb 23 06:53:21 2013 +0200 summary: #17249: null merge. files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 05:54:01 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 05:54:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_=2317249=3A_null_merge=2E?= Message-ID: <3ZCcWs3SHTzRXZ@mail.python.org> http://hg.python.org/cpython/rev/cb46ccdc226a changeset: 82332:cb46ccdc226a parent: 82328:f716a178b4e1 parent: 82331:eb9edac39751 user: Ezio Melotti date: Sat Feb 23 06:53:41 2013 +0200 summary: #17249: null merge. 
files: -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Sat Feb 23 06:01:48 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Sat, 23 Feb 2013 06:01:48 +0100 Subject: [Python-checkins] Daily reference leaks (fcc61327c86c): sum=5 Message-ID: results for fcc61327c86c on branch "default" -------------------------------------------- test_unittest leaked [0, -1, 2] memory blocks, sum=1 test_dbm leaked [0, 0, 2] references, sum=2 test_dbm leaked [0, 0, 2] memory blocks, sum=2 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogtHQ_3n', '-x'] From python-checkins at python.org Sat Feb 23 06:58:44 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 06:58:44 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3MjE3OiBmaXgg?= =?utf-8?q?UnicodeEncodeErrors_errors_in_test=5Fformat_by_printing_ASCII_o?= =?utf-8?q?nly=2E?= Message-ID: <3ZCdyX0P7LzQhK@mail.python.org> http://hg.python.org/cpython/rev/831be7dc260a changeset: 82333:831be7dc260a branch: 3.2 parent: 82330:01fdf24c9d75 user: Ezio Melotti date: Sat Feb 23 07:53:56 2013 +0200 summary: #17217: fix UnicodeEncodeErrors errors in test_format by printing ASCII only. files: Lib/test/test_format.py | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/test/test_format.py b/Lib/test/test_format.py --- a/Lib/test/test_format.py +++ b/Lib/test/test_format.py @@ -13,10 +13,10 @@ def testformat(formatstr, args, output=None, limit=None, overflowok=False): if verbose: if output: - print("%r %% %r =? %r ..." %\ - (formatstr, args, output), end=' ') + print("{!a} % {!a} =? {!a} ...".format(formatstr, args, output), + end=' ') else: - print("%r %% %r works? ..." % (formatstr, args), end=' ') + print("{!a} % {!a} works? 
...".format(formatstr, args), end=' ') try: result = formatstr % args except OverflowError: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 06:58:45 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 06:58:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_=2317217=3A_merge_with_3=2E2=2E?= Message-ID: <3ZCdyY30t1zRVJ@mail.python.org> http://hg.python.org/cpython/rev/3eb693462891 changeset: 82334:3eb693462891 branch: 3.3 parent: 82331:eb9edac39751 parent: 82333:831be7dc260a user: Ezio Melotti date: Sat Feb 23 07:57:53 2013 +0200 summary: #17217: merge with 3.2. files: Lib/test/test_format.py | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/test/test_format.py b/Lib/test/test_format.py --- a/Lib/test/test_format.py +++ b/Lib/test/test_format.py @@ -14,10 +14,10 @@ def testformat(formatstr, args, output=None, limit=None, overflowok=False): if verbose: if output: - print("%r %% %r =? %r ..." %\ - (formatstr, args, output), end=' ') + print("{!a} % {!a} =? {!a} ...".format(formatstr, args, output), + end=' ') else: - print("%r %% %r works? ..." % (formatstr, args), end=' ') + print("{!a} % {!a} works? ...".format(formatstr, args), end=' ') try: result = formatstr % args except OverflowError: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 06:58:46 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 06:58:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MjE3OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZCdyZ5b2NzRVk@mail.python.org> http://hg.python.org/cpython/rev/562ba95dd4c9 changeset: 82335:562ba95dd4c9 parent: 82332:cb46ccdc226a parent: 82334:3eb693462891 user: Ezio Melotti date: Sat Feb 23 07:58:28 2013 +0200 summary: #17217: merge with 3.3. 
files: Lib/test/test_format.py | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Lib/test/test_format.py b/Lib/test/test_format.py --- a/Lib/test/test_format.py +++ b/Lib/test/test_format.py @@ -14,10 +14,10 @@ def testformat(formatstr, args, output=None, limit=None, overflowok=False): if verbose: if output: - print("%r %% %r =? %r ..." %\ - (formatstr, args, output), end=' ') + print("{!a} % {!a} =? {!a} ...".format(formatstr, args, output), + end=' ') else: - print("%r %% %r works? ..." % (formatstr, args), end=' ') + print("{!a} % {!a} works? ...".format(formatstr, args), end=' ') try: result = formatstr % args except OverflowError: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 07:16:18 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 07:16:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_deprecatio?= =?utf-8?q?n_warning_in_tempfile=2E?= Message-ID: <3ZCfLp5cSKzRmf@mail.python.org> http://hg.python.org/cpython/rev/4fefe12a70f9 changeset: 82336:4fefe12a70f9 branch: 2.7 parent: 82329:041d0f68c67d user: Ezio Melotti date: Sat Feb 23 08:16:07 2013 +0200 summary: Fix deprecation warning in tempfile. 
files: Lib/tempfile.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/tempfile.py b/Lib/tempfile.py --- a/Lib/tempfile.py +++ b/Lib/tempfile.py @@ -205,7 +205,7 @@ _os.unlink(filename) return dir except (OSError, IOError) as e: - if e[0] != _errno.EEXIST: + if e.args[0] != _errno.EEXIST: break # no point trying more names in this directory pass raise IOError, (_errno.ENOENT, -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 07:19:10 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 07:19:10 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_deprecatio?= =?utf-8?q?n_warning_in_test=5Ftcl=2E?= Message-ID: <3ZCfQ60tcBzQLW@mail.python.org> http://hg.python.org/cpython/rev/dec10a3eb95f changeset: 82337:dec10a3eb95f branch: 2.7 user: Ezio Melotti date: Sat Feb 23 08:19:00 2013 +0200 summary: Fix deprecation warning in test_tcl. files: Lib/test/test_tcl.py | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Lib/test/test_tcl.py b/Lib/test/test_tcl.py --- a/Lib/test/test_tcl.py +++ b/Lib/test/test_tcl.py @@ -163,7 +163,7 @@ self.assertEqual(passValue(u'string\u20ac'), u'string\u20ac') for i in (0, 1, -1, int(2**31-1), int(-2**31)): self.assertEqual(passValue(i), i) - for f in (0.0, 1.0, -1.0, 1/3, + for f in (0.0, 1.0, -1.0, 1//3, 1/3.0, sys.float_info.min, sys.float_info.max, -sys.float_info.min, -sys.float_info.max): self.assertEqual(passValue(f), f) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 07:40:53 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 07:40:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzEyNzQ5OiBhZGQg?= =?utf-8?q?a_test_for_non-BMP_ranges_in_character_classes=2E?= Message-ID: <3ZCfv94LFczQZP@mail.python.org> http://hg.python.org/cpython/rev/489cfa062442 changeset: 82338:489cfa062442 branch: 3.3 parent: 
82334:3eb693462891 user: Ezio Melotti date: Sat Feb 23 08:40:07 2013 +0200 summary: #12749: add a test for non-BMP ranges in character classes. files: Lib/test/test_re.py | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -600,6 +600,7 @@ self.assertIsNotNone(re.match(r"[\U%08x]" % i, chr(i))) self.assertIsNotNone(re.match(r"[\U%08x0]" % i, chr(i)+"0")) self.assertIsNotNone(re.match(r"[\U%08xz]" % i, chr(i)+"z")) + self.assertIsNotNone(re.match(r"[\U0001d49c-\U0001d4b5]", "\U0001d49e")) self.assertRaises(re.error, re.match, r"[\911]", "") self.assertRaises(re.error, re.match, r"[\x1z]", "") self.assertRaises(re.error, re.match, r"[\u123z]", "") -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 07:40:54 2013 From: python-checkins at python.org (ezio.melotti) Date: Sat, 23 Feb 2013 07:40:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzEyNzQ5OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZCfvB6sbRzRM8@mail.python.org> http://hg.python.org/cpython/rev/c3a09c535001 changeset: 82339:c3a09c535001 parent: 82335:562ba95dd4c9 parent: 82338:489cfa062442 user: Ezio Melotti date: Sat Feb 23 08:40:39 2013 +0200 summary: #12749: merge with 3.3. 
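The #12749 test added below exercises a character-class range whose endpoints are non-BMP code points, something that only works reliably on Python 3.3+ where PEP 393 removed narrow Unicode builds. A quick standalone check:

```python
import re

# U+1D49E lies inside the range U+1D49C..U+1D4B5, so the character class
# should match it as a single (non-BMP) character.
assert re.match(r"[\U0001d49c-\U0001d4b5]", "\U0001d49e") is not None
assert re.match(r"[\U0001d49c-\U0001d4b5]", "z") is None
```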
files: Lib/test/test_re.py | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -600,6 +600,7 @@ self.assertIsNotNone(re.match(r"[\U%08x]" % i, chr(i))) self.assertIsNotNone(re.match(r"[\U%08x0]" % i, chr(i)+"0")) self.assertIsNotNone(re.match(r"[\U%08xz]" % i, chr(i)+"z")) + self.assertIsNotNone(re.match(r"[\U0001d49c-\U0001d4b5]", "\U0001d49e")) self.assertRaises(re.error, re.match, r"[\911]", "") self.assertRaises(re.error, re.match, r"[\x1z]", "") self.assertRaises(re.error, re.match, r"[\u123z]", "") -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 13:50:09 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 23 Feb 2013 13:50:09 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Remove_unused_?= =?utf-8?q?defines=2E?= Message-ID: <3ZCq5F5gFfzQct@mail.python.org> http://hg.python.org/cpython/rev/629af342b189 changeset: 82340:629af342b189 branch: 3.3 parent: 82338:489cfa062442 user: Serhiy Storchaka date: Sat Feb 23 14:48:16 2013 +0200 summary: Remove unused defines. files: Objects/stringlib/unicode_format.h | 6 ------ 1 files changed, 0 insertions(+), 6 deletions(-) diff --git a/Objects/stringlib/unicode_format.h b/Objects/stringlib/unicode_format.h --- a/Objects/stringlib/unicode_format.h +++ b/Objects/stringlib/unicode_format.h @@ -2,12 +2,6 @@ unicode_format.h -- implementation of str.format(). 
*/ -/* Defines for more efficiently reallocating the string buffer */ -#define INITIAL_SIZE_INCREMENT 100 -#define SIZE_MULTIPLIER 2 -#define MAX_SIZE_INCREMENT 3200 - - /************************************************************************/ /*********** Global data structures and forward declarations *********/ /************************************************************************/ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 13:50:11 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Sat, 23 Feb 2013 13:50:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Remove_unused_defines=2E?= Message-ID: <3ZCq5H1CQRzQkd@mail.python.org> http://hg.python.org/cpython/rev/04b5bef5c8d3 changeset: 82341:04b5bef5c8d3 parent: 82339:c3a09c535001 parent: 82340:629af342b189 user: Serhiy Storchaka date: Sat Feb 23 14:49:09 2013 +0200 summary: Remove unused defines. files: Objects/stringlib/unicode_format.h | 6 ------ 1 files changed, 0 insertions(+), 6 deletions(-) diff --git a/Objects/stringlib/unicode_format.h b/Objects/stringlib/unicode_format.h --- a/Objects/stringlib/unicode_format.h +++ b/Objects/stringlib/unicode_format.h @@ -2,12 +2,6 @@ unicode_format.h -- implementation of str.format(). 
*/ -/* Defines for more efficiently reallocating the string buffer */ -#define INITIAL_SIZE_INCREMENT 100 -#define SIZE_MULTIPLIER 2 -#define MAX_SIZE_INCREMENT 3200 - - /************************************************************************/ /*********** Global data structures and forward declarations *********/ /************************************************************************/ -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 15:36:53 2013 From: python-checkins at python.org (eli.bendersky) Date: Sat, 23 Feb 2013 15:36:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Pre-alpha_draft_for_PEP_435_?= =?utf-8?q?=28enum=29=2E_The_name_is_not_important_at_the_moment=2C_as?= Message-ID: <3ZCsSP0rSXzQZS@mail.python.org> http://hg.python.org/peps/rev/3787abec0166 changeset: 4765:3787abec0166 user: Eli Bendersky date: Sat Feb 23 06:36:35 2013 -0800 summary: Pre-alpha draft for PEP 435 (enum). The name is not important at the moment, as this file will be renamed into final form when the PEP is ready. Pushing to main PEPs repo for safekeeping & easy collaboration. files: pepdraft-0435.txt | 445 ++++++++++++++++++++++++++++++++++ 1 files changed, 445 insertions(+), 0 deletions(-) diff --git a/pepdraft-0435.txt b/pepdraft-0435.txt new file mode 100644 --- /dev/null +++ b/pepdraft-0435.txt @@ -0,0 +1,445 @@ +PEP: 435 +Title: Adding an Enum type to the Python standard library +Version: $Revision$ +Last-Modified: $Date$ +Author: Barry Warsaw , + Eli Bendersky +Status: Draft +Type: Standards Track +Content-Type: text/x-rst +Created: 2013-02-23 +Python-Version: 3.4 +Post-History: 2013-02-23 + + +Abstract +======== + +This PEP proposes adding an enumeration type to the Python standard library. +Specifically, it proposes moving the existing ``flufl.enum`` package by +Barry Warsaw into the standard library. Much of this PEP is based on the +"using" document from the documentation of ``flufl.enum``. 
+ +An enumeration is a set of symbolic names bound to unique, constant integer +values. Within an enumeration, the values can be compared by identity, and +the enumeration itself can be iterated over. Enumeration items can be +converted to and from their integer equivalents, supporting use cases such as +storing enumeration values in a database. + + +Status of discussions +===================== + +The idea of adding an enum type to Python is not new - PEP 354 is a previous +attempt that was rejected in 2005. Recently a new set of discussions was +initiated [#]_ on the ``python-ideas`` mailing list. Many new ideas were +proposed in several threads; after a lengthy discussion Guido proposed +adding ``flufl.enum`` to the standard library [#]_. This PEP is an attempt to +formalize this decision as well as discuss a number of variations that can +be considered for inclusion. + +Motivation +========== + +*[Based partly on the Motivation stated in PEP 354]* + +The properties of an enumeration are useful for defining an immutable, +related set of constant values that have a defined sequence but no +inherent semantic meaning. Classic examples are days of the week +(Sunday through Saturday) and school assessment grades ('A' through +'D', and 'F'). Other examples include error status values and states +within a defined process. + +It is possible to simply define a sequence of values of some other +basic type, such as ``int`` or ``str``, to represent discrete +arbitrary values. However, an enumeration ensures that such values +are distinct from any others including, importantly, values within other +enumerations, and that operations without meaning ("Wednesday times two") +are not defined for these values. It also provides a convenient printable +representation of enum values without requiring tedious repetition while +defining them (i.e. no ``GREEN = 'green'``). + + +Module & type name +================== + +We propose to add a module named ``enum`` to the standard library. 
The main +type exposed by this module is ``Enum``. + + +Proposed semantics for the new enumeration type +=============================================== + +Creating an Enum +---------------- + +Enumerations are created using the class syntax, which makes them easy to read +and write. Every enumeration value must have a unique integer value and the +only restriction on their names is that they must be valid Python identifiers. +To define an enumeration, derive from the Enum class and add attributes with +assignment to their integer values. + + >>> from enum import Enum + >>> class Colors(Enum): + ... red = 1 + ... green = 2 + ... blue = 3 + +Enumeration values are compared by identity. + + >>> Colors.red is Colors.red + True + >>> Colors.blue is Colors.blue + True + >>> Colors.red is not Colors.blue + True + >>> Colors.blue is Colors.red + False + +Enumeration values have nice, human readable string representations... + + >>> print(Colors.red) + Colors.red + +...while their repr has more information. + + >>> print(repr(Colors.red)) + + +The enumeration value names are available through the class members. + + >>> for member in Colors.__members__: + ... print(member) + red + green + blue + +Let's say you wanted to encode an enumeration value in a database. You might +want to get the enumeration class object from an enumeration value. + + >>> cls = Colors.red.enum + >>> print(cls.__name__) + Colors + +Enums also have a property that contains just their item name. + + >>> print(Colors.red.name) + red + >>> print(Colors.green.name) + green + >>> print(Colors.blue.name) + blue + +The str and repr of the enumeration class also provides useful information. + + >>> print(Colors) + + >>> print(repr(Colors)) + + +You can extend previously defined Enums by subclassing. + + >>> class MoreColors(Colors): + ... pink = 4 + ... cyan = 5 + +When extended in this way, the base enumeration's values are identical to the +same named values in the derived class. 
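For comparison with the draft's `flufl.enum` semantics above, here is how the identity behaviour looks with the `enum` module that ultimately shipped in Python 3.4 as the final form of PEP 435. Note that the shipped API differs from this draft in several ways (values need not be integers, `int()` conversion requires `IntEnum`, and extending a non-empty enum by subclassing is disallowed):

```python
from enum import Enum

class Colors(Enum):
    red = 1
    green = 2
    blue = 3

# Members are singletons: lookups by value or by name return the same object.
assert Colors.red is Colors.red
assert Colors.red is not Colors.blue
assert Colors(1) is Colors.red
assert Colors["red"] is Colors.red
assert Colors.red.name == "red" and Colors.red.value == 1
```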
+ + >>> Colors.red is MoreColors.red + True + >>> Colors.blue is MoreColors.blue + True + +However, these are not doing comparisons against the integer equivalent +values, because if you define an enumeration with similar item names and +integer values, they will not be identical. + + >>> class OtherColors(Enum): + ... red = 1 + ... blue = 2 + ... yellow = 3 + >>> Colors.red is OtherColors.red + False + >>> Colors.blue is not OtherColors.blue + True + +These enumeration values are not equal, nor do they hash equally. + + >>> Colors.red == OtherColors.red + False + >>> len(set((Colors.red, OtherColors.red))) + 2 + +Ordered comparisons between enumeration values are *not* supported. Enums are +not integers! + + >>> Colors.red < Colors.blue + Traceback (most recent call last): + ... + NotImplementedError + >>> Colors.red <= Colors.blue + Traceback (most recent call last): + ... + NotImplementedError + >>> Colors.blue > Colors.green + Traceback (most recent call last): + ... + NotImplementedError + >>> Colors.blue >= Colors.green + Traceback (most recent call last): + ... + NotImplementedError + +Equality comparisons are defined though. + + >>> Colors.blue == Colors.blue + True + >>> Colors.green != Colors.blue + True + +Enumeration values do not support ordered comparisons. + + >>> Colors.red < Colors.blue + Traceback (most recent call last): + ... + NotImplementedError + >>> Colors.red < 3 + Traceback (most recent call last): + ... + NotImplementedError + >>> Colors.red <= 3 + Traceback (most recent call last): + ... + NotImplementedError + >>> Colors.blue > 2 + Traceback (most recent call last): + ... + NotImplementedError + >>> Colors.blue >= 2 + Traceback (most recent call last): + ... + NotImplementedError + +While equality comparisons are allowed, comparisons against non-enumeration +values will always compare not equal. 
+ + >>> Colors.green == 2 + False + >>> Colors.blue == 3 + False + >>> Colors.green != 3 + True + >>> Colors.green == 'green' + False + +If you really want the integer equivalent values, you can convert enumeration +values explicitly using the ``int()`` built-in. This is quite convenient for +storing enums in a database for example. + + >>> int(Colors.red) + 1 + >>> int(Colors.green) + 2 + >>> int(Colors.blue) + 3 + +You can also convert back to the enumeration value by calling the Enum class, +passing in the integer value for the item you want. + + >>> Colors(1) + + >>> Colors(2) + + >>> Colors(3) + + >>> Colors(1) is Colors.red + True + +The Enum class also accepts the string name of the enumeration value. + + >>> Colors('red') + + >>> Colors('blue') is Colors.blue + True + +You get exceptions though, if you try to use invalid arguments. + + >>> Colors('magenta') + Traceback (most recent call last): + ... + ValueError: magenta + >>> Colors(99) + Traceback (most recent call last): + ... + ValueError: 99 + +The Enum base class also supports getitem syntax, exactly equivalent to the +class's call semantics. + + >>> Colors[1] + + >>> Colors[2] + + >>> Colors[3] + + >>> Colors[1] is Colors.red + True + >>> Colors['red'] + + >>> Colors['blue'] is Colors.blue + True + >>> Colors['magenta'] + Traceback (most recent call last): + ... + ValueError: magenta + >>> Colors[99] + Traceback (most recent call last): + ... + ValueError: 99 + +The integer equivalent values serve another purpose. You may not define two +enumeration values with the same integer value. + + >>> class Bad(Enum): + ... cartman = 1 + ... stan = 2 + ... kyle = 3 + ... kenny = 3 # Oops! + ... butters = 4 + Traceback (most recent call last): + ... + TypeError: Multiple enum values: 3 + +You also may not duplicate values in derived enumerations. + + >>> class BadColors(Colors): + ... yellow = 4 + ... chartreuse = 2 # Oops! + Traceback (most recent call last): + ... 
+ TypeError: Multiple enum values: 2 + +The Enum class support iteration. Enumeration values are returned in the +sorted order of their integer equivalent values. + + >>> [v.name for v in MoreColors] + ['red', 'green', 'blue', 'pink', 'cyan'] + >>> [int(v) for v in MoreColors] + [1, 2, 3, 4, 5] + +Enumeration values are hashable, so they can be used in dictionaries and sets. + + >>> apples = {} + >>> apples[Colors.red] = 'red delicious' + >>> apples[Colors.green] = 'granny smith' + >>> for color in sorted(apples, key=int): + ... print(color.name, '->', apples[color]) + red -> red delicious + green -> granny smith + + +Pickling +-------- + +Enumerations created with the class syntax can also be pickled and unpickled: + + >>> from enum.tests.fruit import Fruit + >>> from pickle import dumps, loads + >>> Fruit.tomato is loads(dumps(Fruit.tomato)) + True + + +Convenience API +--------------- + +You can also create enumerations using the convenience function ``make()``, +which takes an iterable object or dictionary to provide the item names and +values. ``make()`` is a static method. + +The first argument to ``make()`` is the name of the enumeration, and it returns +the so-named `Enum` subclass. The second argument is a `source` which can be +either an iterable or a dictionary. In the most basic usage, `source` returns +a sequence of strings which name the enumeration items. In this case, the +values are automatically assigned starting from 1:: + + >>> from enum import make + >>> make('Animals', ('ant', 'bee', 'cat', 'dog')) + + +The items in source can also be 2-tuples, where the first item is the +enumeration value name and the second is the integer value to assign to the +value. If 2-tuples are used, all items must be 2-tuples. + + >>> def enumiter(): + ... start = 1 + ... while True: + ... yield start + ... 
start <<= 1 + >>> make('Flags', zip(list('abcdefg'), enumiter())) + + + +Differences from PEP 354 +======================== + +Unlike PEP 354, enumeration values are not defined as a sequence of strings, +but as attributes of a class. This design was chosen because it was felt that +class syntax is more readable. + +Unlike PEP 354, enumeration values require an explicit integer value. This +difference recognizes that enumerations often represent real-world values, or +must interoperate with external real-world systems. For example, to store an +enumeration in a database, it is better to convert it to an integer on the way +in and back to an enumeration on the way out. Providing an integer value also +provides an explicit ordering. However, there is no automatic conversion to +and from the integer values, because explicit is better than implicit. + +Unlike PEP 354, this implementation does use a metaclass to define the +enumeration's syntax, and allows for extended base-enumerations so that the +common values in derived classes are identical (a singleton model). While PEP +354 dismisses this approach for its complexity, in practice any perceived +complexity, though minimal, is hidden from users of the enumeration. + +Unlike PEP 354, enumeration values can only be tested by identity comparison. +This is to emphasis the fact that enumeration values are singletons, much like +``None``. + + +Acknowledgments +=============== + +The ``flufl.enum`` implementation is based on an example by Jeremy Hylton. It +has been modified and extended by Barry Warsaw for use in the `GNU Mailman`_ +project. Ben Finney is the author of the earlier enumeration PEP 354. + +.. _`GNU Mailman`: http://www.list.org + +References +========== + +.. [#] http://mail.python.org/pipermail/python-ideas/2013-January/019003.html +.. [#] http://mail.python.org/pipermail/python-ideas/2013-February/019373.html + +Copyright +========= + +This document has been placed in the public domain. 
+ +Todo +==== + + * Mark PEP 354 "superseded by" this one + * New package name within stdlib + * ``from enum import make`` creates a not-very-descriptive "make" name. Maybe + ``make_enum`` or ``enum`` is better? + +.. + Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: + -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 15:37:42 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 15:37:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Add_generated_python-confi?= =?utf-8?q?g_files_to_=2Egitignore?= Message-ID: <3ZCsTL1FZDzQZS@mail.python.org> http://hg.python.org/cpython/rev/a729917bbf55 changeset: 82342:a729917bbf55 user: Petri Lehtinen date: Sat Feb 23 15:35:42 2013 +0100 summary: Add generated python-config files to .gitignore files: .gitignore | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/.gitignore b/.gitignore --- a/.gitignore +++ b/.gitignore @@ -20,6 +20,7 @@ Makefile Makefile.pre Misc/python.pc +Misc/python-config.sh Modules/Setup Modules/Setup.config Modules/Setup.local @@ -57,6 +58,8 @@ pybuilddir.txt pyconfig.h python +python-config +python-config.py python.exe python-gdb.py python.exe-gdb.py -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 15:58:37 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 23 Feb 2013 15:58:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_Propose_Ruby=27s_?= =?utf-8?q?=22pessimistic_version_constraints=22?= Message-ID: <3ZCsxT4f3czRl6@mail.python.org> http://hg.python.org/peps/rev/348628d8ea1f changeset: 4766:348628d8ea1f user: Nick Coghlan date: Sun Feb 24 00:55:28 2013 +1000 summary: PEP 426: Propose Ruby's "pessimistic version constraints" - makes the default handling of version specifiers match Ruby's ~> operator - explicitly base == and != on string 
prefix matching - cleaned up various examples related to the version specifiers - give the version specifiers section more structure files: pep-0426.txt | 183 ++++++++++++++++++++++++-------------- 1 files changed, 117 insertions(+), 66 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -861,7 +861,7 @@ Within a post-release (``1.0.post1``), the following suffixes are permitted and are ordered as shown:: - devN, + .devN, Note that ``devN`` and ``postN`` must always be preceded by a dot, even when used immediately following a numeric version (e.g. ``1.0.dev456``, @@ -976,8 +976,9 @@ As with other incompatible version schemes, date based versions can be stored in the ``Private-Version`` field. Translating them to a compliant -version is straightforward: the simplest approach is to subtract the year -of the first release from the major component in the release number. +public version is straightforward: the simplest approach is to subtract +the year before the first release from the major component in the release +number. Version specifiers @@ -987,61 +988,106 @@ commas. Each version clause consists of an optional comparison operator followed by a version identifier. For example:: - 0.9, >= 1.0, != 1.3.4, < 2.0, ~= 2.0 + 0.9, >= 1.0, != 1.3.4, < 2.0 Each version identifier must be in the standard format described in `Version scheme`_. The comma (",") is equivalent to a logical **and** operator. -Comparison operators must be one of ``<``, ``>``, ``<=``, ``>=``, ``==``, -``!=`` or ``~=``. - -The ``==`` and ``!=`` operators are strict - in order to match, the -version supplied must exactly match the specified version, with no -additional trailing suffix. When no comparison operator is provided, -it is equivalent to ``==``. 
- -The special ``~=`` operator is equivalent to using the following pair -of version clauses:: - - >= V, < V+1 - -where ``V+1`` is the next version after ``V``, as determined by -incrementing the last numeric component in ``V`` (for example, if ``V == -1.0a3``, then ``V+1 == 1.0a4``, while if ``V == 1.0``, then ``V+1 == -1.1``). In other words, this operator matches any release that starts -with the mentioned components. - -This approach makes it easy to depend on a particular release series -simply by naming it in a version specifier, without requiring any -additional annotation. For example, the following pairs of version -specifiers are equivalent:: - - ~= 2 - >= 2, < 3 - - ~= 3.3 - >= 3.3, < 3.4 - Whitespace between a conditional operator and the following version identifier is optional, as is the whitespace around the commas. + +Compatible release +------------------ + +A compatible release clause omits the comparison operator and matches any +version that is expected to be compatible with the specified version. + +For a given release identifier ``V.N``, the compatible release clause is +equivalent to the pair of comparison clauses:: + + >= V.N, < V+1 + +where ``V+1`` is the next version after ``V``, as determined by +incrementing the last numeric component in ``V``. For example, +the following version clauses are approximately equivalent:: + + 2.2 + >= 2.2, < 3.dev0 + + 1.4.5 + >= 1.4.5, < 1.5.dev0 + +The difference between the two is that using a compatible release clause +does *not* count as `explicitly mentioning a pre-release`__. 
+ +__ `Handling of pre-releases`_ + +If a pre-release, post-release or developmental release is named in a +compatible release clause as ``V.N.suffix``, then the suffix is ignored +when determining the upper limit of compatibility:: + + 2.2.post3 + >= 2.2.post3, < 3.dev0 + + 1.4.5a4 + >= 1.4.5a4, < 1.5.dev0 + + +Version comparisons +------------------- + +A version comparison clause includes a comparison operator and a version +identifier, and will match any version where the comparison is true. + +Comparison clauses are only needed to cover cases which cannot be handled +with an appropriate compatible release clause, including coping with +dependencies which do not have a robust backwards compatibility policy +and thus break the assumptions of a compatible release clause. + +The defined comparison operators are ``<``, ``>``, ``<=``, ``>=``, ``==``, +and ``!=``. + +The ordered comparison operators ``<``, ``>``, ``<=``, ``>=`` are based +on the consistent ordering defined by the standard `Version scheme`_. + +The ``==`` and ``!=`` operators are based on string comparisons - in order +to match, the version being checker must start with exactly that sequence of +characters. + +.. note:: + + The use of ``==`` when defining dependencies for published distributions + is strongly discouraged, as it greatly complicates the deployment of + security fixes (the strict version comparison operator is intended + primarily for use when defining dependencies for particular + applications while using a shared distribution index). + + +Handling of pre-releases +------------------------ + Pre-releases of any kind, including developmental releases, are implicitly excluded from all version specifiers, *unless* a pre-release or developmental -developmental release is explicitly mentioned in one of the clauses. For -example, this specifier implicitly excludes all pre-releases and development +release is explicitly mentioned in one of the clauses. 
For example, these +specifiers implicitly exclude all pre-releases and development releases of later versions:: + 2.2 >= 1.0 -While these specifiers would include them:: +While these specifiers would include at least some of them:: + 2.2.dev0 + 2.2, != 2.3b2 >= 1.0a1 >= 1.0c1 >= 1.0, != 1.0b2 >= 1.0, < 2.0.dev123 + Dependency resolution tools should use the above rules by default, but should also allow users to request the following alternative behaviours: @@ -1054,34 +1100,26 @@ Post-releases and purely numeric releases receive no special treatment - they are always included unless explicitly excluded. -Given the above rules, projects which include the ``.0`` suffix for -the first release in a series, such as ``2.5.0``, can easily refer -specifically to that version with the clause ``==2.5.0``, while the clause -``~=2.5`` refers to that entire series. -Some examples: +Examples +-------- -* ``Requires-Dist: zope.interface (~=3.1)``: any version that starts with 3.1, +* ``Requires-Dist: zope.interface (3.1)``: version 3.1 or later, but not + version 4.0 or later. Excludes pre-releases and developmental releases. +* ``Requires-Dist: zope.interface (3.1.0)``: version 3.1.0 or later, but not + version 3.2.0 or later. Excludes pre-releases and developmental releases. +* ``Requires-Dist: zope.interface (==3.1)``: any version that starts + with 3.1, excluding pre-releases and developmental releases. +* ``Requires-Dist: zope.interface (3.1.0,!=3.1.3)``: version 3.1.0 or later, + but not version 3.1.3 and not version 3.2.0 or later. Excludes pre-releases + and developmental releases. For this particular project, this means: "any + version of the 3.1 series but not 3.1.3". This is equivalent to: + ``>=3.1, !=3.1.3, <3.2``. +* ``Requires-Python: 2.6``: Any version of Python 2.6 or 2.7. It + automatically excludes Python 3 or later. +* ``Requires-Python: 3.2, < 3.3``: Specifically requires Python 3.2, excluding pre-releases. 
-* ``Requires-Dist: zope.interface (==3.1)``: equivalent to ``Requires-Dist: - zope.interface (3.1)``. -* ``Requires-Dist: zope.interface (~=3.1.0)``: any version that starts with - 3.1.0, excluding pre-releases. Since that particular project doesn't - use more than 3 digits, it also means "only the 3.1.0 release". -* ``Requires-Python: 3``: Any Python 3 version, excluding pre-releases. -* ``Requires-Python: >=2.6,<3``: Any version of Python 2.6 or 2.7, including - post-releases (if they were used for Python). It excludes pre releases of - Python 3. -* ``Requires-Python: ~=2.6.2``: Equivalent to ">=2.6.2,<2.6.3". So this includes - only Python 2.6.2. Of course, if Python was numbered with 4 digits, it would - include all versions of the 2.6.2 series, excluding pre-releases. -* ``Requires-Python: ~=2.5``: Equivalent to ">=2.5,<2.6". -* ``Requires-Dist: zope.interface (~=3.1,!=3.1.3)``: any version that starts - with 3.1, excluding pre-releases of 3.1 *and* excluding any version that - starts with "3.1.3". For this particular project, this means: "any version - of the 3.1 series but not 3.1.3". This is equivalent to: - ">=3.1,!=3.1.3,<3.2". -* ``Requires-Python: >=3.3a1``: Any version of Python 3.3+, including +* ``Requires-Python: 3.3a1``: Any version of Python 3.3+, including pre-releases like 3.4a1. @@ -1437,10 +1475,10 @@ The previous interpretation of version specifiers made it very easy to accidentally download a pre-release version of a dependency. This in turn made it difficult for developers to publish pre-release versions -of software to the Python Package Index, as leaving the package set as -public would lead to users inadvertently downloading pre-release software, -while hiding it would defeat the purpose of publishing it for user -testing. 
+of software to the Python Package Index, as even marking the package as +hidden wasn't enough to keep automated tools from downloading it, and also +made it harder for users to obtain the test release manually through the +main PyPI web interface. The previous interpretation also excluded post-releases from some version specifiers for no adequately justified reason. @@ -1449,6 +1487,16 @@ accept a pre-release version as satisfying a dependency, while allowing pre-release versions to be explicitly requested when needed. +The "some forward compatibility assumed" default version constraint is +taken directly from the Ruby community's "pessimistic version constraint" +operator [4]_ to allow projects to take a cautious approach to forward +compatibility promises, while still easily setting a minimum required +version for their dependencies. It is made the default behaviour rather +than needing a separate operator in order to explicitly discourage +overspecification of dependencies by library developers. The explicit +comparison operators remain available to cope with dependencies with +unreliable or non-existent backwards compatibility policies. + Packaging, build and installation dependencies ---------------------------------------------- @@ -1546,6 +1594,9 @@ .. [3] Version compatibility analysis script: http://hg.python.org/peps/file/default/pep-0426/pepsort.py +.. 
[4] Pessimistic version constraint + http://docs.rubygems.org/read/chapter/16 + Appendix A ========== -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 16:10:57 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 23 Feb 2013 16:10:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_fix_editing_oversi?= =?utf-8?q?ght?= Message-ID: <3ZCtCj19bKzRl6@mail.python.org> http://hg.python.org/peps/rev/bc9f75975818 changeset: 4767:bc9f75975818 user: Nick Coghlan date: Sun Feb 24 01:10:49 2013 +1000 summary: PEP 426: fix editing oversight files: pep-0426.txt | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -1006,9 +1006,9 @@ version that is expected to be compatible with the specified version. For a given release identifier ``V.N``, the compatible release clause is -equivalent to the pair of comparison clauses:: +approximately equivalent to the pair of comparison clauses:: - >= V.N, < V+1 + >= V.N, < V+1.dev0 where ``V+1`` is the next version after ``V``, as determined by incrementing the last numeric component in ``V``. For example, -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 16:14:26 2013 From: python-checkins at python.org (nick.coghlan) Date: Sat, 23 Feb 2013 16:14:26 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_426=3A_fix_typo?= Message-ID: <3ZCtHk5QCjzScW@mail.python.org> http://hg.python.org/peps/rev/08bec77d40c3 changeset: 4768:08bec77d40c3 user: Nick Coghlan date: Sun Feb 24 01:14:18 2013 +1000 summary: PEP 426: fix typo files: pep-0426.txt | 3 +-- 1 files changed, 1 insertions(+), 2 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -1054,7 +1054,7 @@ on the consistent ordering defined by the standard `Version scheme`_. 
The ``==`` and ``!=`` operators are based on string comparisons - in order -to match, the version being checker must start with exactly that sequence of +to match, the version being checked must start with exactly that sequence of characters. .. note:: @@ -1087,7 +1087,6 @@ >= 1.0, != 1.0b2 >= 1.0, < 2.0.dev123 - Dependency resolution tools should use the above rules by default, but should also allow users to request the following alternative behaviours: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 17:28:35 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 17:28:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzUwMzM6?= =?utf-8?q?_Fix_building_of_the_sqlite3_extension_module?= Message-ID: <3ZCvxH1ML5zShN@mail.python.org> http://hg.python.org/cpython/rev/8b177aea9ddd changeset: 82343:8b177aea9ddd branch: 2.7 parent: 82337:dec10a3eb95f user: Petri Lehtinen date: Sat Feb 23 17:05:28 2013 +0100 summary: Issue #5033: Fix building of the sqlite3 extension module files: Misc/ACKS | 1 + Misc/NEWS | 3 +++ setup.py | 2 +- 3 files changed, 5 insertions(+), 1 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -763,6 +763,7 @@ Samuele Pedroni Marcel van der Peijl Berker Peksag +Andreas Pelme Steven Pemberton Bo Peng Santiago Peresón diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -858,6 +858,9 @@ Build ----- +- Issue #5033: Fix building of the sqlite3 extension module when the + SQLite library version has "beta" in it. Patch by Andreas Pelme. + - Issue #17228: Fix building without pymalloc. 
- Issue #17086: Backport the patches from the 3.3 branch to cross-build diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1114,7 +1114,7 @@ if sqlite_setup_debug: print "sqlite: found %s"%f incf = open(f).read() m = re.search( - r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"(.*)"', incf) + r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"([\d\.]*)"', incf) if m: sqlite_version = m.group(1) sqlite_version_tuple = tuple([int(x) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 17:28:36 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 17:28:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzUwMzM6?= =?utf-8?q?_Fix_building_of_the_sqlite3_extension_module?= Message-ID: <3ZCvxJ480SzNXY@mail.python.org> http://hg.python.org/cpython/rev/73d5dd480558 changeset: 82344:73d5dd480558 branch: 3.2 parent: 82333:831be7dc260a user: Petri Lehtinen date: Sat Feb 23 17:05:28 2013 +0100 summary: Issue #5033: Fix building of the sqlite3 extension module files: Misc/ACKS | 1 + Misc/NEWS | 3 +++ setup.py | 2 +- 3 files changed, 5 insertions(+), 1 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -820,6 +820,7 @@ William Park Harri Pasanen Berker Peksag +Andreas Pelme Bo Peng Joe Peterson Randy Pausch diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1009,6 +1009,9 @@ Build ----- +- Issue #5033: Fix building of the sqlite3 extension module when the + SQLite library version has "beta" in it. Patch by Andreas Pelme. + - Issue #3754: fix typo in pthread AC_CACHE_VAL. - Issue #17029: Let h2py search the multiarch system include directory. 
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1012,7 +1012,7 @@ with open(f) as file: incf = file.read() m = re.search( - r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"(.*)"', incf) + r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"([\d\.]*)"', incf) if m: sqlite_version = m.group(1) sqlite_version_tuple = tuple([int(x) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 17:28:37 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 17:28:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=235033=3A_Fix_building_of_the_sqlite3_extension_module?= Message-ID: <3ZCvxK6vP4zShH@mail.python.org> http://hg.python.org/cpython/rev/c613eb716c8e changeset: 82345:c613eb716c8e branch: 3.3 parent: 82340:629af342b189 parent: 82344:73d5dd480558 user: Petri Lehtinen date: Sat Feb 23 17:24:00 2013 +0100 summary: Issue #5033: Fix building of the sqlite3 extension module files: Misc/ACKS | 1 + Misc/NEWS | 3 +++ setup.py | 2 +- 3 files changed, 5 insertions(+), 1 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -908,6 +908,7 @@ Justin Peel Marcel van der Peijl Berker Peksag +Andreas Pelme Steven Pemberton Bo Peng Santiago Peresón diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -694,6 +694,9 @@ Build ----- +- Issue #5033: Fix building of the sqlite3 extension module when the + SQLite library version has "beta" in it. Patch by Andreas Pelme. + - Issue #17228: Fix building without pymalloc. - Issue #3718: Use AC_ARG_VAR to set MACHDEP in configure.ac. 
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1062,7 +1062,7 @@ with open(f) as file: incf = file.read() m = re.search( - r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"(.*)"', incf) + r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"([\d\.]*)"', incf) if m: sqlite_version = m.group(1) sqlite_version_tuple = tuple([int(x) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 17:28:39 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 17:28:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=235033=3A_Fix_building_of_the_sqlite3_extension_m?= =?utf-8?q?odule?= Message-ID: <3ZCvxM2ddgzQTd@mail.python.org> http://hg.python.org/cpython/rev/19b3aaf79e45 changeset: 82346:19b3aaf79e45 parent: 82342:a729917bbf55 parent: 82345:c613eb716c8e user: Petri Lehtinen date: Sat Feb 23 17:24:44 2013 +0100 summary: Issue #5033: Fix building of the sqlite3 extension module files: Misc/ACKS | 1 + Misc/NEWS | 3 +++ setup.py | 2 +- 3 files changed, 5 insertions(+), 1 deletions(-) diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -917,6 +917,7 @@ Justin Peel Marcel van der Peijl Berker Peksag +Andreas Pelme Steven Pemberton Bo Peng Santiago Peresón diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -945,6 +945,9 @@ Build ----- +- Issue #5033: Fix building of the sqlite3 extension module when the + SQLite library version has "beta" in it. Patch by Andreas Pelme. + - Issue #17228: Fix building without pymalloc. - Issue #3718: Use AC_ARG_VAR to set MACHDEP in configure.ac. 
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1073,7 +1073,7 @@ with open(f) as file: incf = file.read() m = re.search( - r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"(.*)"', incf) + r'\s*.*#\s*.*define\s.*SQLITE_VERSION\W*"([\d\.]*)"', incf) if m: sqlite_version = m.group(1) sqlite_version_tuple = tuple([int(x) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 18:17:00 2013 From: python-checkins at python.org (eli.bendersky) Date: Sat, 23 Feb 2013 18:17:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Consistent_formatting_=26_cle?= =?utf-8?q?anup=2C_explicit_namespacing_of_make=2E_Updated_todo?= Message-ID: <3ZCx181qfRzSf4@mail.python.org> http://hg.python.org/peps/rev/0e10b8b6ecb3 changeset: 4769:0e10b8b6ecb3 parent: 4765:3787abec0166 user: Eli Bendersky date: Sat Feb 23 09:16:16 2013 -0800 summary: Consistent formatting & cleanup, explicit namespacing of make. Updated todo files: pepdraft-0435.txt | 97 ++++++++++++++++++---------------- 1 files changed, 52 insertions(+), 45 deletions(-) diff --git a/pepdraft-0435.txt b/pepdraft-0435.txt --- a/pepdraft-0435.txt +++ b/pepdraft-0435.txt @@ -64,8 +64,10 @@ ================== We propose to add a module named ``enum`` to the standard library. The main -type exposed by this module is ``Enum``. +type exposed by this module is ``Enum``. Hence, to import the ``Enum`` type +user code will run:: + >>> from enum import Enum Proposed semantics for the new enumeration type =============================================== @@ -76,8 +78,8 @@ Enumerations are created using the class syntax, which makes them easy to read and write. Every enumeration value must have a unique integer value and the only restriction on their names is that they must be valid Python identifiers. -To define an enumeration, derive from the Enum class and add attributes with -assignment to their integer values. 
+To define an enumeration, derive from the ``Enum`` class and add attributes with +assignment to their integer values:: >>> from enum import Enum >>> class Colors(Enum): @@ -85,7 +87,7 @@ ... green = 2 ... blue = 3 -Enumeration values are compared by identity. +Enumeration values are compared by identity:: >>> Colors.red is Colors.red True @@ -96,17 +98,17 @@ >>> Colors.blue is Colors.red False -Enumeration values have nice, human readable string representations... +Enumeration values have nice, human readable string representations:: >>> print(Colors.red) Colors.red -...while their repr has more information. +...while their repr has more information:: >>> print(repr(Colors.red)) -The enumeration value names are available through the class members. +The enumeration value names are available through the class members:: >>> for member in Colors.__members__: ... print(member) @@ -115,13 +117,13 @@ blue Let's say you wanted to encode an enumeration value in a database. You might -want to get the enumeration class object from an enumeration value. +want to get the enumeration class object from an enumeration value:: >>> cls = Colors.red.enum >>> print(cls.__name__) Colors -Enums also have a property that contains just their item name. +Enums also have a property that contains just their item name:: >>> print(Colors.red.name) red @@ -130,21 +132,21 @@ >>> print(Colors.blue.name) blue -The str and repr of the enumeration class also provides useful information. +The str and repr of the enumeration class also provides useful information:: >>> print(Colors) >>> print(repr(Colors)) -You can extend previously defined Enums by subclassing. +You can extend previously defined Enums by subclassing:: >>> class MoreColors(Colors): ... pink = 4 ... cyan = 5 When extended in this way, the base enumeration's values are identical to the -same named values in the derived class. 
+same named values in the derived class:: >>> Colors.red is MoreColors.red True @@ -153,7 +155,7 @@ However, these are not doing comparisons against the integer equivalent values, because if you define an enumeration with similar item names and -integer values, they will not be identical. +integer values, they will not be identical:: >>> class OtherColors(Enum): ... red = 1 @@ -164,7 +166,7 @@ >>> Colors.blue is not OtherColors.blue True -These enumeration values are not equal, nor do they hash equally. +These enumeration values are not equal, nor do they hash equally:: >>> Colors.red == OtherColors.red False @@ -172,7 +174,7 @@ 2 Ordered comparisons between enumeration values are *not* supported. Enums are -not integers! +not integers:: >>> Colors.red < Colors.blue Traceback (most recent call last): @@ -191,14 +193,14 @@ ... NotImplementedError -Equality comparisons are defined though. +Equality comparisons are defined though:: >>> Colors.blue == Colors.blue True >>> Colors.green != Colors.blue True -Enumeration values do not support ordered comparisons. +Enumeration values do not support ordered comparisons:: >>> Colors.red < Colors.blue Traceback (most recent call last): @@ -222,7 +224,7 @@ NotImplementedError While equality comparisons are allowed, comparisons against non-enumeration -values will always compare not equal. +values will always compare not equal:: >>> Colors.green == 2 False @@ -235,7 +237,7 @@ If you really want the integer equivalent values, you can convert enumeration values explicitly using the ``int()`` built-in. This is quite convenient for -storing enums in a database for example. +storing enums in a database for example:: >>> int(Colors.red) 1 @@ -244,8 +246,8 @@ >>> int(Colors.blue) 3 -You can also convert back to the enumeration value by calling the Enum class, -passing in the integer value for the item you want. 
+You can also convert back to the enumeration value by calling the Enum subclass, +passing in the integer value for the item you want:: >>> Colors(1) @@ -256,14 +258,14 @@ >>> Colors(1) is Colors.red True -The Enum class also accepts the string name of the enumeration value. +The Enum subclass also accepts the string name of the enumeration value:: >>> Colors('red') >>> Colors('blue') is Colors.blue True -You get exceptions though, if you try to use invalid arguments. +You get exceptions though, if you try to use invalid arguments:: >>> Colors('magenta') Traceback (most recent call last): @@ -275,7 +277,7 @@ ValueError: 99 The Enum base class also supports getitem syntax, exactly equivalent to the -class's call semantics. +class's call semantics:: >>> Colors[1] @@ -299,7 +301,7 @@ ValueError: 99 The integer equivalent values serve another purpose. You may not define two -enumeration values with the same integer value. +enumeration values with the same integer value:: >>> class Bad(Enum): ... cartman = 1 @@ -311,7 +313,7 @@ ... TypeError: Multiple enum values: 3 -You also may not duplicate values in derived enumerations. +You also may not duplicate values in derived enumerations:: >>> class BadColors(Colors): ... yellow = 4 @@ -321,14 +323,14 @@ TypeError: Multiple enum values: 2 The Enum class support iteration. Enumeration values are returned in the -sorted order of their integer equivalent values. +sorted order of their integer equivalent values:: >>> [v.name for v in MoreColors] ['red', 'green', 'blue', 'pink', 'cyan'] >>> [int(v) for v in MoreColors] [1, 2, 3, 4, 5] -Enumeration values are hashable, so they can be used in dictionaries and sets. 
+Enumeration values are hashable, so they can be used in dictionaries and sets:: >>> apples = {} >>> apples[Colors.red] = 'red delicious' @@ -342,7 +344,7 @@ Pickling -------- -Enumerations created with the class syntax can also be pickled and unpickled: +Enumerations created with the class syntax can also be pickled and unpickled:: >>> from enum.tests.fruit import Fruit >>> from pickle import dumps, loads @@ -358,25 +360,25 @@ values. ``make()`` is a static method. The first argument to ``make()`` is the name of the enumeration, and it returns -the so-named `Enum` subclass. The second argument is a `source` which can be -either an iterable or a dictionary. In the most basic usage, `source` returns +the so-named `Enum` subclass. The second argument is a *source* which can be +either an iterable or a dictionary. In the most basic usage, *source* returns a sequence of strings which name the enumeration items. In this case, the values are automatically assigned starting from 1:: - >>> from enum import make - >>> make('Animals', ('ant', 'bee', 'cat', 'dog')) + >>> import enum + >>> enum.make('Animals', ('ant', 'bee', 'cat', 'dog')) The items in source can also be 2-tuples, where the first item is the enumeration value name and the second is the integer value to assign to the -value. If 2-tuples are used, all items must be 2-tuples. +value. If 2-tuples are used, all items must be 2-tuples:: >>> def enumiter(): ... start = 1 ... while True: ... yield start ... start <<= 1 - >>> make('Flags', zip(list('abcdefg'), enumiter())) + >>> enum.make('Flags', zip(list('abcdefg'), enumiter())) @@ -402,24 +404,24 @@ complexity, though minimal, is hidden from users of the enumeration. Unlike PEP 354, enumeration values can only be tested by identity comparison. -This is to emphasis the fact that enumeration values are singletons, much like +This is to emphasise the fact that enumeration values are singletons, much like ``None``. 
Acknowledgments =============== -The ``flufl.enum`` implementation is based on an example by Jeremy Hylton. It -has been modified and extended by Barry Warsaw for use in the `GNU Mailman`_ -project. Ben Finney is the author of the earlier enumeration PEP 354. - -.. _`GNU Mailman`: http://www.list.org +This PEP describes the ``flufl.enum`` package by Barry Warsaw. ``flufl.enum`` +is based on an example by Jeremy Hylton. It has been modified and extended +by Barry Warsaw for use in the GNU Mailman [#]_ project. Ben Finney is the +author of the earlier enumeration PEP 354. References ========== .. [#] http://mail.python.org/pipermail/python-ideas/2013-January/019003.html .. [#] http://mail.python.org/pipermail/python-ideas/2013-February/019373.html +.. [#] http://www.list.org Copyright ========= @@ -429,10 +431,15 @@ Todo ==== - * Mark PEP 354 "superseded by" this one - * New package name within stdlib - * ``from enum import make`` creates a not-very-descriptive "make" name. Maybe - ``make_enum`` or ``enum`` is better? + * Mark PEP 354 "superseded by" this one, if accepted + * New package name within stdlib - enum? (top-level) + * "Convenience API" says "make() is a static method" - what does this mean? + make seems to be a simple module-level function in the implementation. + * For make, can we add an API like namedtuple's? + make('Animals, 'ant bee cat dog') + I.e. when make sees a string argument it splits it, making it similar to a + tuple but with far less manual quote typing. OTOH, it just saves a ".split" + so may not be worth the effort ? .. 
Local Variables: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 18:17:01 2013 From: python-checkins at python.org (eli.bendersky) Date: Sat, 23 Feb 2013 18:17:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps_=28merge_default_-=3E_default=29?= =?utf-8?q?=3A_merge_heads?= Message-ID: <3ZCx195WcJzSgJ@mail.python.org> http://hg.python.org/peps/rev/ebd1e8e35394 changeset: 4770:ebd1e8e35394 parent: 4769:0e10b8b6ecb3 parent: 4768:08bec77d40c3 user: Eli Bendersky date: Sat Feb 23 09:16:43 2013 -0800 summary: merge heads files: pep-0426.txt | 182 ++++++++++++++++++++++++-------------- 1 files changed, 116 insertions(+), 66 deletions(-) diff --git a/pep-0426.txt b/pep-0426.txt --- a/pep-0426.txt +++ b/pep-0426.txt @@ -861,7 +861,7 @@ Within a post-release (``1.0.post1``), the following suffixes are permitted and are ordered as shown:: - devN, + .devN, Note that ``devN`` and ``postN`` must always be preceded by a dot, even when used immediately following a numeric version (e.g. ``1.0.dev456``, @@ -976,8 +976,9 @@ As with other incompatible version schemes, date based versions can be stored in the ``Private-Version`` field. Translating them to a compliant -version is straightforward: the simplest approach is to subtract the year -of the first release from the major component in the release number. +public version is straightforward: the simplest approach is to subtract +the year before the first release from the major component in the release +number. Version specifiers @@ -987,56 +988,100 @@ commas. Each version clause consists of an optional comparison operator followed by a version identifier. For example:: - 0.9, >= 1.0, != 1.3.4, < 2.0, ~= 2.0 + 0.9, >= 1.0, != 1.3.4, < 2.0 Each version identifier must be in the standard format described in `Version scheme`_. The comma (",") is equivalent to a logical **and** operator. -Comparison operators must be one of ``<``, ``>``, ``<=``, ``>=``, ``==``, -``!=`` or ``~=``. 
- -The ``==`` and ``!=`` operators are strict - in order to match, the -version supplied must exactly match the specified version, with no -additional trailing suffix. When no comparison operator is provided, -it is equivalent to ``==``. - -The special ``~=`` operator is equivalent to using the following pair -of version clauses:: - - >= V, < V+1 - -where ``V+1`` is the next version after ``V``, as determined by -incrementing the last numeric component in ``V`` (for example, if ``V == -1.0a3``, then ``V+1 == 1.0a4``, while if ``V == 1.0``, then ``V+1 == -1.1``). In other words, this operator matches any release that starts -with the mentioned components. - -This approach makes it easy to depend on a particular release series -simply by naming it in a version specifier, without requiring any -additional annotation. For example, the following pairs of version -specifiers are equivalent:: - - ~= 2 - >= 2, < 3 - - ~= 3.3 - >= 3.3, < 3.4 - Whitespace between a conditional operator and the following version identifier is optional, as is the whitespace around the commas. + +Compatible release +------------------ + +A compatible release clause omits the comparison operator and matches any +version that is expected to be compatible with the specified version. + +For a given release identifier ``V.N``, the compatible release clause is +approximately equivalent to the pair of comparison clauses:: + + >= V.N, < V+1.dev0 + +where ``V+1`` is the next version after ``V``, as determined by +incrementing the last numeric component in ``V``. For example, +the following version clauses are approximately equivalent:: + + 2.2 + >= 2.2, < 3.dev0 + + 1.4.5 + >= 1.4.5, < 1.5.dev0 + +The difference between the two is that using a compatible release clause +does *not* count as `explicitly mentioning a pre-release`__. 
+ +__ `Handling of pre-releases`_ + +If a pre-release, post-release or developmental release is named in a +compatible release clause as ``V.N.suffix``, then the suffix is ignored +when determining the upper limit of compatibility:: + + 2.2.post3 + >= 2.2.post3, < 3.dev0 + + 1.4.5a4 + >= 1.4.5a4, < 1.5.dev0 + + +Version comparisons +------------------- + +A version comparison clause includes a comparison operator and a version +identifier, and will match any version where the comparison is true. + +Comparison clauses are only needed to cover cases which cannot be handled +with an appropriate compatible release clause, including coping with +dependencies which do not have a robust backwards compatibility policy +and thus break the assumptions of a compatible release clause. + +The defined comparison operators are ``<``, ``>``, ``<=``, ``>=``, ``==``, +and ``!=``. + +The ordered comparison operators ``<``, ``>``, ``<=``, ``>=`` are based +on the consistent ordering defined by the standard `Version scheme`_. + +The ``==`` and ``!=`` operators are based on string comparisons - in order +to match, the version being checked must start with exactly that sequence of +characters. + +.. note:: + + The use of ``==`` when defining dependencies for published distributions + is strongly discouraged, as it greatly complicates the deployment of + security fixes (the strict version comparison operator is intended + primarily for use when defining dependencies for particular + applications while using a shared distribution index). + + +Handling of pre-releases +------------------------ + Pre-releases of any kind, including developmental releases, are implicitly excluded from all version specifiers, *unless* a pre-release or developmental -developmental release is explicitly mentioned in one of the clauses. For -example, this specifier implicitly excludes all pre-releases and development +release is explicitly mentioned in one of the clauses. 
For example, these +specifiers implicitly exclude all pre-releases and development releases of later versions:: + 2.2 >= 1.0 -While these specifiers would include them:: +While these specifiers would include at least some of them:: + 2.2.dev0 + 2.2, != 2.3b2 >= 1.0a1 >= 1.0c1 >= 1.0, != 1.0b2 @@ -1054,34 +1099,26 @@ Post-releases and purely numeric releases receive no special treatment - they are always included unless explicitly excluded. -Given the above rules, projects which include the ``.0`` suffix for -the first release in a series, such as ``2.5.0``, can easily refer -specifically to that version with the clause ``==2.5.0``, while the clause -``~=2.5`` refers to that entire series. -Some examples: +Examples +-------- -* ``Requires-Dist: zope.interface (~=3.1)``: any version that starts with 3.1, +* ``Requires-Dist: zope.interface (3.1)``: version 3.1 or later, but not + version 4.0 or later. Excludes pre-releases and developmental releases. +* ``Requires-Dist: zope.interface (3.1.0)``: version 3.1.0 or later, but not + version 3.2.0 or later. Excludes pre-releases and developmental releases. +* ``Requires-Dist: zope.interface (==3.1)``: any version that starts + with 3.1, excluding pre-releases and developmental releases. +* ``Requires-Dist: zope.interface (3.1.0,!=3.1.3)``: version 3.1.0 or later, + but not version 3.1.3 and not version 3.2.0 or later. Excludes pre-releases + and developmental releases. For this particular project, this means: "any + version of the 3.1 series but not 3.1.3". This is equivalent to: + ``>=3.1, !=3.1.3, <3.2``. +* ``Requires-Python: 2.6``: Any version of Python 2.6 or 2.7. It + automatically excludes Python 3 or later. +* ``Requires-Python: 3.2, < 3.3``: Specifically requires Python 3.2, excluding pre-releases. -* ``Requires-Dist: zope.interface (==3.1)``: equivalent to ``Requires-Dist: - zope.interface (3.1)``. -* ``Requires-Dist: zope.interface (~=3.1.0)``: any version that starts with - 3.1.0, excluding pre-releases. 
Since that particular project doesn't - use more than 3 digits, it also means "only the 3.1.0 release". -* ``Requires-Python: 3``: Any Python 3 version, excluding pre-releases. -* ``Requires-Python: >=2.6,<3``: Any version of Python 2.6 or 2.7, including - post-releases (if they were used for Python). It excludes pre releases of - Python 3. -* ``Requires-Python: ~=2.6.2``: Equivalent to ">=2.6.2,<2.6.3". So this includes - only Python 2.6.2. Of course, if Python was numbered with 4 digits, it would - include all versions of the 2.6.2 series, excluding pre-releases. -* ``Requires-Python: ~=2.5``: Equivalent to ">=2.5,<2.6". -* ``Requires-Dist: zope.interface (~=3.1,!=3.1.3)``: any version that starts - with 3.1, excluding pre-releases of 3.1 *and* excluding any version that - starts with "3.1.3". For this particular project, this means: "any version - of the 3.1 series but not 3.1.3". This is equivalent to: - ">=3.1,!=3.1.3,<3.2". -* ``Requires-Python: >=3.3a1``: Any version of Python 3.3+, including +* ``Requires-Python: 3.3a1``: Any version of Python 3.3+, including pre-releases like 3.4a1. @@ -1437,10 +1474,10 @@ The previous interpretation of version specifiers made it very easy to accidentally download a pre-release version of a dependency. This in turn made it difficult for developers to publish pre-release versions -of software to the Python Package Index, as leaving the package set as -public would lead to users inadvertently downloading pre-release software, -while hiding it would defeat the purpose of publishing it for user -testing. +of software to the Python Package Index, as even marking the package as +hidden wasn't enough to keep automated tools from downloading it, and also +made it harder for users to obtain the test release manually through the +main PyPI web interface. The previous interpretation also excluded post-releases from some version specifiers for no adequately justified reason. 
@@ -1449,6 +1486,16 @@ accept a pre-release version as satisfying a dependency, while allowing pre-release versions to be explicitly requested when needed. +The "some forward compatibility assumed" default version constraint is +taken directly from the Ruby community's "pessimistic version constraint" +operator [4]_ to allow projects to take a cautious approach to forward +compatibility promises, while still easily setting a minimum required +version for their dependencies. It is made the default behaviour rather +than needing a separate operator in order to explicitly discourage +overspecification of dependencies by library developers. The explicit +comparison operators remain available to cope with dependencies with +unreliable or non-existent backwards compatibility policies. + Packaging, build and installation dependencies ---------------------------------------------- @@ -1546,6 +1593,9 @@ .. [3] Version compatibility analysis script: http://hg.python.org/peps/file/default/pep-0426/pepsort.py +.. 
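Under the draft specifier semantics discussed above, a compatible-release ("pessimistic") clause such as ``~=3.1`` matches any version that starts with 3.1. A minimal sketch of that matching rule, assuming purely numeric versions (pre-, post- and development releases are deliberately ignored here) and a hypothetical helper name, not any reference implementation:

```python
def compatible_release(candidate, spec):
    """Return True if `candidate` satisfies `~= spec` per the draft rule:
    every release segment given in `spec` must match exactly, and any
    further segments in `candidate` are allowed to vary."""
    cand = [int(part) for part in candidate.split(".")]
    base = [int(part) for part in spec.split(".")]
    # Component-wise comparison, so "3.10" does NOT match "~=3.1"
    # even though it starts with the same characters.
    return cand[:len(base)] == base

print(compatible_release("3.1.3", "3.1"))   # True
print(compatible_release("3.2.0", "3.1"))   # False
print(compatible_release("3.10", "3.1"))    # False
```

Comparing numeric components rather than string prefixes is what makes "any version that starts with 3.1" well defined for multi-digit segments.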
[4] Pessimistic version constraint + http://docs.rubygems.org/read/chapter/16 + Appendix A ========== -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Sat Feb 23 18:56:32 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 18:56:32 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2315132=3A_Allow_a_?= =?utf-8?q?list_for_the_defaultTest_argument_of_unittest=2ETestProgram?= Message-ID: <3ZCxtm2TYqzSgS@mail.python.org> http://hg.python.org/cpython/rev/4e2bfe6b227a changeset: 82347:4e2bfe6b227a user: Petri Lehtinen date: Sat Feb 23 18:52:51 2013 +0100 summary: Issue #15132: Allow a list for the defaultTest argument of unittest.TestProgram Patch by Jyrki Pulliainen files: Lib/unittest/main.py | 5 ++- Lib/unittest/test/test_program.py | 35 +++++++++++++++++++ Misc/NEWS | 3 + 3 files changed, 42 insertions(+), 1 deletions(-) diff --git a/Lib/unittest/main.py b/Lib/unittest/main.py --- a/Lib/unittest/main.py +++ b/Lib/unittest/main.py @@ -164,7 +164,10 @@ # to support python -m unittest ... 
self.module = None else: - self.testNames = (self.defaultTest,) + if isinstance(self.defaultTest, str): + self.testNames = (self.defaultTest,) + else: + self.testNames = list(self.defaultTest) self.createTests() def createTests(self): diff --git a/Lib/unittest/test/test_program.py b/Lib/unittest/test/test_program.py --- a/Lib/unittest/test/test_program.py +++ b/Lib/unittest/test/test_program.py @@ -64,6 +64,41 @@ return self.suiteClass( [self.loadTestsFromTestCase(Test_TestProgram.FooBar)]) + def loadTestsFromNames(self, names, module): + return self.suiteClass( + [self.loadTestsFromTestCase(Test_TestProgram.FooBar)]) + + def test_defaultTest_with_string(self): + class FakeRunner(object): + def run(self, test): + self.test = test + return True + + old_argv = sys.argv + sys.argv = ['faketest'] + runner = FakeRunner() + program = unittest.TestProgram(testRunner=runner, exit=False, + defaultTest='unittest.test', + testLoader=self.FooBarLoader()) + sys.argv = old_argv + self.assertEquals(('unittest.test',), program.testNames) + + def test_defaultTest_with_iterable(self): + class FakeRunner(object): + def run(self, test): + self.test = test + return True + + old_argv = sys.argv + sys.argv = ['faketest'] + runner = FakeRunner() + program = unittest.TestProgram( + testRunner=runner, exit=False, + defaultTest=['unittest.test', 'unittest.test2'], + testLoader=self.FooBarLoader()) + sys.argv = old_argv + self.assertEquals(['unittest.test', 'unittest.test2'], + program.testNames) def test_NonExit(self): program = unittest.main(exit=False, diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -260,6 +260,9 @@ Library ------- +- Issue #15132: Allow a list for the defaultTest argument of + unittest.TestProgram. Patch by Jyrki Pulliainen. + - Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines. 
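The ``defaultTest`` change above boils down to normalizing a string-or-iterable argument before building test names. A standalone sketch of that normalization (hypothetical helper name, not the actual ``unittest.main`` internals):

```python
def normalize_default_test(default_test):
    """Accept a single dotted test name or an iterable of names."""
    # A plain str is itself iterable, so it must be special-cased;
    # otherwise 'pkg.tests' would decompose into single characters.
    if isinstance(default_test, str):
        return (default_test,)
    return list(default_test)

print(normalize_default_test('unittest.test'))            # ('unittest.test',)
print(normalize_default_test(['unittest.test', 'unittest.test2']))
# ['unittest.test', 'unittest.test2']
```

The ``isinstance`` check is the crux of the patch: without it, passing a lone string would be indistinguishable from passing a sequence of one-character test names.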
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:12:45 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:12:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE0NzIw?= =?utf-8?q?=3A_sqlite3=3A_Convert_datetime_microseconds_correctly?= Message-ID: <3ZCyFT3QjBzShT@mail.python.org> http://hg.python.org/cpython/rev/6911df35b7b6 changeset: 82348:6911df35b7b6 branch: 2.7 parent: 82343:8b177aea9ddd user: Petri Lehtinen date: Sat Feb 23 19:05:09 2013 +0100 summary: Issue #14720: sqlite3: Convert datetime microseconds correctly Patch by Lowe Thiderman files: Lib/sqlite3/dbapi2.py | 2 +- Lib/sqlite3/test/regression.py | 19 ++++++++++++++++++- Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 23 insertions(+), 2 deletions(-) diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -68,7 +68,7 @@ timepart_full = timepart.split(".") hours, minutes, seconds = map(int, timepart_full[0].split(":")) if len(timepart_full) == 2: - microseconds = int(timepart_full[1]) + microseconds = int('{:0<6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -1,4 +1,4 @@ -#-*- coding: ISO-8859-1 -*- +#-*- coding: iso-8859-1 -*- # pysqlite2/test/regression.py: pysqlite regression tests # # Copyright (C) 2006-2007 Gerhard Häring @@ -285,6 +285,23 @@ cur.executemany("insert into b (baz) values (?)", ((i,) for i in foo())) + def CheckConvertTimestampMicrosecondPadding(self): + """ + http://bugs.python.org/issue14720 + + The microsecond parsing of convert_timestamp() should pad with zeros, + since the microsecond string "456" actually represents "456000".
+ """ + + con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) + cur = con.cursor() + cur.execute("CREATE TABLE t (x TIMESTAMP)") + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + cur.execute("SELECT * FROM t") + date = cur.fetchall()[0][0] + + self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -990,6 +990,7 @@ Mikhail Terekhov Richard M. Tew Tobias Thelen +Lowe Thiderman Nicolas M. Thiéry James Thomas Robin Thomas diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -208,6 +208,9 @@ Library ------- +- Issue #14720: sqlite3: Convert datetime microseconds correctly. + Patch by Lowe Thiderman. + - Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:12:46 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:12:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE0NzIw?= =?utf-8?q?=3A_sqlite3=3A_Convert_datetime_microseconds_correctly?= Message-ID: <3ZCyFV6MJ6zSjv@mail.python.org> http://hg.python.org/cpython/rev/46d5317a51fb changeset: 82349:46d5317a51fb branch: 3.2 parent: 82344:73d5dd480558 user: Petri Lehtinen date: Sat Feb 23 19:05:09 2013 +0100 summary: Issue #14720: sqlite3: Convert datetime microseconds correctly Patch by Lowe Thiderman files: Lib/sqlite3/dbapi2.py | 2 +- Lib/sqlite3/test/regression.py | 19 ++++++++++++++++++- Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 23 insertions(+), 2 deletions(-) diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -67,7 +67,7 @@ timepart_full = timepart.split(b".") hours, minutes, seconds = map(int,
timepart_full[0].split(b":")) if len(timepart_full) == 2: - microseconds = int(timepart_full[1]) + microseconds = int('{:0<6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -1,4 +1,4 @@ -#-*- coding: ISO-8859-1 -*- +#-*- coding: iso-8859-1 -*- # pysqlite2/test/regression.py: pysqlite regression tests # # Copyright (C) 2006-2010 Gerhard Häring @@ -302,6 +302,23 @@ cur.executemany("insert into b (baz) values (?)", ((i,) for i in foo())) + def CheckConvertTimestampMicrosecondPadding(self): + """ + http://bugs.python.org/issue14720 + + The microsecond parsing of convert_timestamp() should pad with zeros, + since the microsecond string "456" actually represents "456000". + """ + + con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) + cur = con.cursor() + cur.execute("CREATE TABLE t (x TIMESTAMP)") + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + cur.execute("SELECT * FROM t") + date = cur.fetchall()[0][0] + + self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1068,6 +1068,7 @@ Mikhail Terekhov Richard M. Tew Tobias Thelen +Lowe Thiderman Nicolas M. Thiéry James Thomas Robin Thomas diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -227,6 +227,9 @@ Library ------- +- Issue #14720: sqlite3: Convert datetime microseconds correctly. + Patch by Lowe Thiderman. + - Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines.
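The one-line fix applied on each branch above hinges on left-aligned zero padding: SQLite stores the fractional seconds exactly as written, so a stored ``.456`` means 456000 microseconds, not 456. A quick demonstration of the format spec the patch uses (``0`` fill, ``<`` left-align, width 6):

```python
# Pad the fractional-second string on the right to six digits, the way
# the patched convert_timestamp() does, then convert to microseconds.
for frac in ("4", "456", "456000"):
    microseconds = int('{:0<6}'.format(frac))
    print(frac, "->", microseconds)
# 4 -> 400000
# 456 -> 456000
# 456000 -> 456000
```

Plain ``int(frac)`` would have read ``"456"`` as 456 microseconds, which is the bug the regression test pins down.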
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:12:48 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:12:48 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2314720=3A_sqlite3=3A_Convert_datetime_microseconds_cor?= =?utf-8?q?rectly?= Message-ID: <3ZCyFX2C10zSjc@mail.python.org> http://hg.python.org/cpython/rev/46c96693296f changeset: 82350:46c96693296f branch: 3.3 parent: 82345:c613eb716c8e parent: 82349:46d5317a51fb user: Petri Lehtinen date: Sat Feb 23 19:07:02 2013 +0100 summary: Issue #14720: sqlite3: Convert datetime microseconds correctly files: Lib/sqlite3/dbapi2.py | 2 +- Lib/sqlite3/test/regression.py | 17 +++++++++++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 22 insertions(+), 1 deletions(-) diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -67,7 +67,7 @@ timepart_full = timepart.split(b".") hours, minutes, seconds = map(int, timepart_full[0].split(b":")) if len(timepart_full) == 2: - microseconds = int(timepart_full[1]) + microseconds = int('{:0<6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -302,6 +302,23 @@ cur.executemany("insert into b (baz) values (?)", ((i,) for i in foo())) + def CheckConvertTimestampMicrosecondPadding(self): + """ + http://bugs.python.org/issue14720 + + The microsecond parsing of convert_timestamp() should pad with zeros, + since the microsecond string "456" actually represents "456000". 
+ """ + + con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) + cur = con.cursor() + cur.execute("CREATE TABLE t (x TIMESTAMP)") + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + cur.execute("SELECT * FROM t") + date = cur.fetchall()[0][0] + + self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1180,6 +1180,7 @@ Mikhail Terekhov Richard M. Tew Tobias Thelen +Lowe Thiderman Nicolas M. Thiéry James Thomas Robin Thomas diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -181,6 +181,9 @@ Library ------- +- Issue #14720: sqlite3: Convert datetime microseconds correctly. + Patch by Lowe Thiderman. + - Issue #17225: JSON decoder now counts columns in the first line starting with 1, as in other lines. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:12:49 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:12:49 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2314720=3A_sqlite3=3A_Convert_datetime_microsecon?= =?utf-8?q?ds_correctly?= Message-ID: <3ZCyFY5l3tzSl5@mail.python.org> http://hg.python.org/cpython/rev/6342055ac220 changeset: 82351:6342055ac220 parent: 82347:4e2bfe6b227a parent: 82350:46c96693296f user: Petri Lehtinen date: Sat Feb 23 19:10:29 2013 +0100 summary: Issue #14720: sqlite3: Convert datetime microseconds correctly files: Lib/sqlite3/dbapi2.py | 2 +- Lib/sqlite3/test/regression.py | 17 +++++++++++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 22 insertions(+), 1 deletions(-) diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -67,7 +67,7 @@ timepart_full = timepart.split(b".") hours, minutes, seconds = map(int,
timepart_full[0].split(b":")) if len(timepart_full) == 2: - microseconds = int(timepart_full[1]) + microseconds = int('{:0<6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -302,6 +302,23 @@ cur.executemany("insert into b (baz) values (?)", ((i,) for i in foo())) + def CheckConvertTimestampMicrosecondPadding(self): + """ + http://bugs.python.org/issue14720 + + The microsecond parsing of convert_timestamp() should pad with zeros, + since the microsecond string "456" actually represents "456000". + """ + + con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) + cur = con.cursor() + cur.execute("CREATE TABLE t (x TIMESTAMP)") + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + cur.execute("SELECT * FROM t") + date = cur.fetchall()[0][0] + + self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1192,6 +1192,7 @@ Mikhail Terekhov Richard M. Tew Tobias Thelen +Lowe Thiderman Nicolas M. Thiéry James Thomas Robin Thomas diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -260,6 +260,9 @@ Library ------- +- Issue #14720: sqlite3: Convert datetime microseconds correctly. + Patch by Lowe Thiderman. + - Issue #15132: Allow a list for the defaultTest argument of unittest.TestProgram. Patch by Jyrki Pulliainen.
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:39:05 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:39:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzg4OTA6?= =?utf-8?q?_Stop_advertising_an_insecure_use_of_/tmp_in_docs?= Message-ID: <3ZCyqs1vkJzSly@mail.python.org> http://hg.python.org/cpython/rev/488957f9b664 changeset: 82352:488957f9b664 branch: 2.7 parent: 82348:6911df35b7b6 user: Petri Lehtinen date: Sat Feb 23 19:24:08 2013 +0100 summary: Issue #8890: Stop advertising an insecure use of /tmp in docs files: Doc/install/index.rst | 2 +- Doc/library/atexit.rst | 4 ++-- Doc/library/bsddb.rst | 2 +- Doc/library/cgi.rst | 2 +- Doc/library/compiler.rst | 4 ++-- Doc/library/gzip.rst | 8 ++++---- Doc/library/imghdr.rst | 2 +- Doc/library/mailcap.rst | 4 ++-- Doc/library/nntplib.rst | 2 +- Doc/library/optparse.rst | 4 ++-- Doc/library/pipes.rst | 6 +++--- Doc/library/posixfile.rst | 2 +- Doc/library/trace.rst | 4 ++-- Doc/library/zipimport.rst | 10 +++++----- Doc/tutorial/inputoutput.rst | 8 ++++---- Misc/ACKS | 1 + Misc/NEWS | 4 ++++ 17 files changed, 37 insertions(+), 32 deletions(-) diff --git a/Doc/install/index.rst b/Doc/install/index.rst --- a/Doc/install/index.rst +++ b/Doc/install/index.rst @@ -189,7 +189,7 @@ to keep the source tree pristine, you can change the build directory with the :option:`--build-base` option. For example:: - python setup.py build --build-base=/tmp/pybuild/foo-1.0 + python setup.py build --build-base=/path/to/pybuild/foo-1.0 (Or you could do this permanently with a directive in your system or personal Distutils configuration file; see section :ref:`inst-config-files`.) Normally, this diff --git a/Doc/library/atexit.rst b/Doc/library/atexit.rst --- a/Doc/library/atexit.rst +++ b/Doc/library/atexit.rst @@ -76,7 +76,7 @@ making an explicit call into this module at termination. 
:: try: - _count = int(open("/tmp/counter").read()) + _count = int(open("counter").read()) except IOError: _count = 0 @@ -85,7 +85,7 @@ _count = _count + n def savecounter(): - open("/tmp/counter", "w").write("%d" % _count) + open("counter", "w").write("%d" % _count) import atexit atexit.register(savecounter) diff --git a/Doc/library/bsddb.rst b/Doc/library/bsddb.rst --- a/Doc/library/bsddb.rst +++ b/Doc/library/bsddb.rst @@ -170,7 +170,7 @@ Example:: >>> import bsddb - >>> db = bsddb.btopen('/tmp/spam.db', 'c') + >>> db = bsddb.btopen('spam.db', 'c') >>> for i in range(10): db['%d'%i] = '%d'% (i*i) ... >>> db['3'] diff --git a/Doc/library/cgi.rst b/Doc/library/cgi.rst --- a/Doc/library/cgi.rst +++ b/Doc/library/cgi.rst @@ -81,7 +81,7 @@ instead, with code like this:: import cgitb - cgitb.enable(display=0, logdir="/tmp") + cgitb.enable(display=0, logdir="/path/to/logdir") It's very helpful to use this feature during script development. The reports produced by :mod:`cgitb` provide information that can save you a lot of time in diff --git a/Doc/library/compiler.rst b/Doc/library/compiler.rst --- a/Doc/library/compiler.rst +++ b/Doc/library/compiler.rst @@ -540,7 +540,7 @@ AST looks like, and how to access attributes of an AST node. The first module defines a single function. Assume it is stored in -:file:`/tmp/doublelib.py`. :: +:file:`doublelib.py`. :: """This is an example module. @@ -557,7 +557,7 @@ :mod:`compiler.ast` module. 
:: >>> import compiler - >>> mod = compiler.parseFile("/tmp/doublelib.py") + >>> mod = compiler.parseFile("doublelib.py") >>> mod Module('This is an example module.\n\nThis is the docstring.\n', Stmt([Function(None, 'double', ['x'], [], 0, diff --git a/Doc/library/gzip.rst b/Doc/library/gzip.rst --- a/Doc/library/gzip.rst +++ b/Doc/library/gzip.rst @@ -93,7 +93,7 @@ Example of how to read a compressed file:: import gzip - f = gzip.open('/home/joe/file.txt.gz', 'rb') + f = gzip.open('file.txt.gz', 'rb') file_content = f.read() f.close() @@ -101,15 +101,15 @@ import gzip content = "Lots of content here" - f = gzip.open('/home/joe/file.txt.gz', 'wb') + f = gzip.open('file.txt.gz', 'wb') f.write(content) f.close() Example of how to GZIP compress an existing file:: import gzip - f_in = open('/home/joe/file.txt', 'rb') - f_out = gzip.open('/home/joe/file.txt.gz', 'wb') + f_in = open('file.txt', 'rb') + f_out = gzip.open('file.txt.gz', 'wb') f_out.writelines(f_in) f_out.close() f_in.close() diff --git a/Doc/library/imghdr.rst b/Doc/library/imghdr.rst --- a/Doc/library/imghdr.rst +++ b/Doc/library/imghdr.rst @@ -68,6 +68,6 @@ Example:: >>> import imghdr - >>> imghdr.what('/tmp/bass.gif') + >>> imghdr.what('bass.gif') 'gif' diff --git a/Doc/library/mailcap.rst b/Doc/library/mailcap.rst --- a/Doc/library/mailcap.rst +++ b/Doc/library/mailcap.rst @@ -71,6 +71,6 @@ >>> import mailcap >>> d=mailcap.getcaps() - >>> mailcap.findmatch(d, 'video/mpeg', filename='/tmp/tmp1223') - ('xmpeg /tmp/tmp1223', {'view': 'xmpeg %s'}) + >>> mailcap.findmatch(d, 'video/mpeg', filename='tmp1223') + ('xmpeg tmp1223', {'view': 'xmpeg %s'}) diff --git a/Doc/library/nntplib.rst b/Doc/library/nntplib.rst --- a/Doc/library/nntplib.rst +++ b/Doc/library/nntplib.rst @@ -46,7 +46,7 @@ headers, and that you have right to post on the particular newsgroup):: >>> s = NNTP('news.gmane.org') - >>> f = open('/tmp/article') + >>> f = open('articlefile') >>> s.post(f) '240 Article posted successfully.' 
>>> s.quit() diff --git a/Doc/library/optparse.rst b/Doc/library/optparse.rst --- a/Doc/library/optparse.rst +++ b/Doc/library/optparse.rst @@ -173,10 +173,10 @@ For example, consider this hypothetical command-line:: - prog -v --report /tmp/report.txt foo bar + prog -v --report report.txt foo bar ``-v`` and ``--report`` are both options. Assuming that ``--report`` -takes one argument, ``/tmp/report.txt`` is an option argument. ``foo`` and +takes one argument, ``report.txt`` is an option argument. ``foo`` and ``bar`` are positional arguments. diff --git a/Doc/library/pipes.rst b/Doc/library/pipes.rst --- a/Doc/library/pipes.rst +++ b/Doc/library/pipes.rst @@ -24,12 +24,12 @@ Example:: >>> import pipes - >>> t=pipes.Template() + >>> t = pipes.Template() >>> t.append('tr a-z A-Z', '--') - >>> f=t.open('/tmp/1', 'w') + >>> f = t.open('pipefile', 'w') >>> f.write('hello world') >>> f.close() - >>> open('/tmp/1').read() + >>> open('pipefile').read() 'HELLO WORLD' diff --git a/Doc/library/posixfile.rst b/Doc/library/posixfile.rst --- a/Doc/library/posixfile.rst +++ b/Doc/library/posixfile.rst @@ -181,7 +181,7 @@ import posixfile - file = posixfile.open('/tmp/test', 'w') + file = posixfile.open('testfile', 'w') file.lock('w|') ... file.lock('u') diff --git a/Doc/library/trace.rst b/Doc/library/trace.rst --- a/Doc/library/trace.rst +++ b/Doc/library/trace.rst @@ -200,7 +200,7 @@ # run the new command using the given tracer tracer.run('main()') - # make a report, placing output in /tmp + # make a report, placing output in the current directory r = tracer.results() - r.write_results(show_missing=True, coverdir="/tmp") + r.write_results(show_missing=True, coverdir=".") diff --git a/Doc/library/zipimport.rst b/Doc/library/zipimport.rst --- a/Doc/library/zipimport.rst +++ b/Doc/library/zipimport.rst @@ -19,7 +19,7 @@ also allows an item of :data:`sys.path` to be a string naming a ZIP file archive. 
The ZIP archive can contain a subdirectory structure to support package imports, and a path within the archive can be specified to only import from a -subdirectory. For example, the path :file:`/tmp/example.zip/lib/` would only +subdirectory. For example, the path :file:`example.zip/lib/` would only import from the :file:`lib/` subdirectory within the archive. Any files may be present in the ZIP archive, but only files :file:`.py` and @@ -151,8 +151,8 @@ Here is an example that imports a module from a ZIP archive - note that the :mod:`zipimport` module is not explicitly used. :: - $ unzip -l /tmp/example.zip - Archive: /tmp/example.zip + $ unzip -l example.zip + Archive: example.zip Length Date Time Name -------- ---- ---- ---- 8467 11-26-02 22:30 jwzthreading.py @@ -161,8 +161,8 @@ $ ./python Python 2.3 (#1, Aug 1 2003, 19:54:32) >>> import sys - >>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path + >>> sys.path.insert(0, 'example.zip') # Add .zip file to front of path >>> import jwzthreading >>> jwzthreading.__file__ - '/tmp/example.zip/jwzthreading.py' + 'example.zip/jwzthreading.py' diff --git a/Doc/tutorial/inputoutput.rst b/Doc/tutorial/inputoutput.rst --- a/Doc/tutorial/inputoutput.rst +++ b/Doc/tutorial/inputoutput.rst @@ -236,9 +236,9 @@ :: - >>> f = open('/tmp/workfile', 'w') + >>> f = open('workfile', 'w') >>> print f - + The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file @@ -339,7 +339,7 @@ the reference point. *from_what* can be omitted and defaults to 0, using the beginning of the file as the reference point. :: - >>> f = open('/tmp/workfile', 'r+') + >>> f = open('workfile', 'r+') >>> f.write('0123456789abcdef') >>> f.seek(5) # Go to the 6th byte in the file >>> f.read(1) @@ -363,7 +363,7 @@ suite finishes, even if an exception is raised on the way. 
It is also much shorter than writing equivalent :keyword:`try`\ -\ :keyword:`finally` blocks:: - >>> with open('/tmp/workfile', 'r') as f: + >>> with open('workfile', 'r') as f: ... read_data = f.read() >>> f.closed True diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1081,6 +1081,7 @@ Gerald S. Williams Steven Willis Frank Willison +Geoff Wilson Greg V. Wilson J Derek Wilson Paul Winkler diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -926,6 +926,10 @@ Documentation ------------- +- Issue #8890: Stop advertising an insecure practice by replacing uses + of the /tmp directory with better alternatives in the documentation. + Patch by Geoff Wilson. + - Issue #17203: add long option names to unittest discovery docs. - Issue #13094: add "Why do lambdas defined in a loop with different values -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:39:06 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:39:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzg4OTA6?= =?utf-8?q?_Stop_advertising_an_insecure_use_of_/tmp_in_docs?= Message-ID: <3ZCyqt5tYdzSmF@mail.python.org> http://hg.python.org/cpython/rev/7556601180c8 changeset: 82353:7556601180c8 branch: 3.2 parent: 82349:46d5317a51fb user: Petri Lehtinen date: Sat Feb 23 19:26:56 2013 +0100 summary: Issue #8890: Stop advertising an insecure use of /tmp in docs files: Doc/install/index.rst | 2 +- Doc/library/atexit.rst | 4 ++-- Doc/library/cgi.rst | 2 +- Doc/library/imghdr.rst | 2 +- Doc/library/mailcap.rst | 4 ++-- Doc/library/nntplib.rst | 2 +- Doc/library/optparse.rst | 4 ++-- Doc/library/pipes.rst | 6 +++--- Doc/library/sqlite3.rst | 4 ++-- Doc/library/trace.rst | 4 ++-- Doc/library/zipimport.rst | 10 +++++----- Doc/tutorial/inputoutput.rst | 8 ++++---- Misc/ACKS | 1 + Misc/NEWS | 4 ++++ 14 files changed, 31 insertions(+), 26 deletions(-) diff --git 
a/Doc/install/index.rst b/Doc/install/index.rst --- a/Doc/install/index.rst +++ b/Doc/install/index.rst @@ -189,7 +189,7 @@ to keep the source tree pristine, you can change the build directory with the :option:`--build-base` option. For example:: - python setup.py build --build-base=/tmp/pybuild/foo-1.0 + python setup.py build --build-base=/path/to/pybuild/foo-1.0 (Or you could do this permanently with a directive in your system or personal Distutils configuration file; see section :ref:`inst-config-files`.) Normally, this diff --git a/Doc/library/atexit.rst b/Doc/library/atexit.rst --- a/Doc/library/atexit.rst +++ b/Doc/library/atexit.rst @@ -67,7 +67,7 @@ making an explicit call into this module at termination. :: try: - _count = int(open("/tmp/counter").read()) + _count = int(open("counter").read()) except IOError: _count = 0 @@ -76,7 +76,7 @@ _count = _count + n def savecounter(): - open("/tmp/counter", "w").write("%d" % _count) + open("counter", "w").write("%d" % _count) import atexit atexit.register(savecounter) diff --git a/Doc/library/cgi.rst b/Doc/library/cgi.rst --- a/Doc/library/cgi.rst +++ b/Doc/library/cgi.rst @@ -79,7 +79,7 @@ instead, with code like this:: import cgitb - cgitb.enable(display=0, logdir="/tmp") + cgitb.enable(display=0, logdir="/path/to/logdir") It's very helpful to use this feature during script development. 
The reports produced by :mod:`cgitb` provide information that can save you a lot of time in diff --git a/Doc/library/imghdr.rst b/Doc/library/imghdr.rst --- a/Doc/library/imghdr.rst +++ b/Doc/library/imghdr.rst @@ -65,6 +65,6 @@ Example:: >>> import imghdr - >>> imghdr.what('/tmp/bass.gif') + >>> imghdr.what('bass.gif') 'gif' diff --git a/Doc/library/mailcap.rst b/Doc/library/mailcap.rst --- a/Doc/library/mailcap.rst +++ b/Doc/library/mailcap.rst @@ -71,6 +71,6 @@ >>> import mailcap >>> d=mailcap.getcaps() - >>> mailcap.findmatch(d, 'video/mpeg', filename='/tmp/tmp1223') - ('xmpeg /tmp/tmp1223', {'view': 'xmpeg %s'}) + >>> mailcap.findmatch(d, 'video/mpeg', filename='tmp1223') + ('xmpeg tmp1223', {'view': 'xmpeg %s'}) diff --git a/Doc/library/nntplib.rst b/Doc/library/nntplib.rst --- a/Doc/library/nntplib.rst +++ b/Doc/library/nntplib.rst @@ -47,7 +47,7 @@ headers, and that you have right to post on the particular newsgroup):: >>> s = nntplib.NNTP('news.gmane.org') - >>> f = open('/tmp/article.txt', 'rb') + >>> f = open('article.txt', 'rb') >>> s.post(f) '240 Article posted successfully.' >>> s.quit() diff --git a/Doc/library/optparse.rst b/Doc/library/optparse.rst --- a/Doc/library/optparse.rst +++ b/Doc/library/optparse.rst @@ -171,10 +171,10 @@ For example, consider this hypothetical command-line:: - prog -v --report /tmp/report.txt foo bar + prog -v --report report.txt foo bar ``-v`` and ``--report`` are both options. Assuming that ``--report`` -takes one argument, ``/tmp/report.txt`` is an option argument. ``foo`` and +takes one argument, ``report.txt`` is an option argument. ``foo`` and ``bar`` are positional arguments. 
diff --git a/Doc/library/pipes.rst b/Doc/library/pipes.rst --- a/Doc/library/pipes.rst +++ b/Doc/library/pipes.rst @@ -26,12 +26,12 @@ Example:: >>> import pipes - >>> t=pipes.Template() + >>> t = pipes.Template() >>> t.append('tr a-z A-Z', '--') - >>> f=t.open('/tmp/1', 'w') + >>> f = t.open('pipefile', 'w') >>> f.write('hello world') >>> f.close() - >>> open('/tmp/1').read() + >>> open('pipefile').read() 'HELLO WORLD' diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -18,10 +18,10 @@ To use the module, you must first create a :class:`Connection` object that represents the database. Here the data will be stored in the -:file:`/tmp/example` file:: +:file:`example.db` file:: import sqlite3 - conn = sqlite3.connect('/tmp/example') + conn = sqlite3.connect('example.db') You can also supply the special name ``:memory:`` to create a database in RAM. diff --git a/Doc/library/trace.rst b/Doc/library/trace.rst --- a/Doc/library/trace.rst +++ b/Doc/library/trace.rst @@ -201,7 +201,7 @@ # run the new command using the given tracer tracer.run('main()') - # make a report, placing output in /tmp + # make a report, placing output in the current directory r = tracer.results() - r.write_results(show_missing=True, coverdir="/tmp") + r.write_results(show_missing=True, coverdir=".") diff --git a/Doc/library/zipimport.rst b/Doc/library/zipimport.rst --- a/Doc/library/zipimport.rst +++ b/Doc/library/zipimport.rst @@ -16,7 +16,7 @@ also allows an item of :data:`sys.path` to be a string naming a ZIP file archive. The ZIP archive can contain a subdirectory structure to support package imports, and a path within the archive can be specified to only import from a -subdirectory. For example, the path :file:`/tmp/example.zip/lib/` would only +subdirectory. For example, the path :file:`example.zip/lib/` would only import from the :file:`lib/` subdirectory within the archive. 
Any files may be present in the ZIP archive, but only files :file:`.py` and @@ -144,8 +144,8 @@ Here is an example that imports a module from a ZIP archive - note that the :mod:`zipimport` module is not explicitly used. :: - $ unzip -l /tmp/example.zip - Archive: /tmp/example.zip + $ unzip -l example.zip + Archive: example.zip Length Date Time Name -------- ---- ---- ---- 8467 11-26-02 22:30 jwzthreading.py @@ -154,8 +154,8 @@ $ ./python Python 2.3 (#1, Aug 1 2003, 19:54:32) >>> import sys - >>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path + >>> sys.path.insert(0, 'example.zip') # Add .zip file to front of path >>> import jwzthreading >>> jwzthreading.__file__ - '/tmp/example.zip/jwzthreading.py' + 'example.zip/jwzthreading.py' diff --git a/Doc/tutorial/inputoutput.rst b/Doc/tutorial/inputoutput.rst --- a/Doc/tutorial/inputoutput.rst +++ b/Doc/tutorial/inputoutput.rst @@ -234,12 +234,12 @@ :: - >>> f = open('/tmp/workfile', 'w') + >>> f = open('workfile', 'w') .. XXX str(f) is >>> print(f) - + The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file @@ -346,7 +346,7 @@ the reference point. *from_what* can be omitted and defaults to 0, using the beginning of the file as the reference point. :: - >>> f = open('/tmp/workfile', 'rb+') + >>> f = open('workfile', 'rb+') >>> f.write(b'0123456789abcdef') 16 >>> f.seek(5) # Go to the 6th byte in the file @@ -377,7 +377,7 @@ suite finishes, even if an exception is raised on the way. It is also much shorter than writing equivalent :keyword:`try`\ -\ :keyword:`finally` blocks:: - >>> with open('/tmp/workfile', 'r') as f: + >>> with open('workfile', 'r') as f: ... read_data = f.read() >>> f.closed True diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1170,6 +1170,7 @@ Gerald S. Williams Steven Willis Frank Willison +Geoff Wilson Greg V. 
Wilson J Derek Wilson Paul Winkler diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1078,6 +1078,10 @@ Documentation ------------- +- Issue #8890: Stop advertising an insecure practice by replacing uses + of the /tmp directory with better alternatives in the documentation. + Patch by Geoff Wilson. + - Issue #17203: add long option names to unittest discovery docs. - Issue #13094: add "Why do lambdas defined in a loop with different values -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:39:08 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:39:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=238890=3A_Stop_advertising_an_insecure_use_of_/tmp_in_d?= =?utf-8?q?ocs?= Message-ID: <3ZCyqw31gPzPkT@mail.python.org> http://hg.python.org/cpython/rev/18e20e146396 changeset: 82354:18e20e146396 branch: 3.3 parent: 82350:46c96693296f parent: 82353:7556601180c8 user: Petri Lehtinen date: Sat Feb 23 19:34:15 2013 +0100 summary: Issue #8890: Stop advertising an insecure use of /tmp in docs files: Doc/install/index.rst | 2 +- Doc/library/atexit.rst | 4 ++-- Doc/library/cgi.rst | 2 +- Doc/library/imghdr.rst | 2 +- Doc/library/mailcap.rst | 4 ++-- Doc/library/nntplib.rst | 2 +- Doc/library/optparse.rst | 4 ++-- Doc/library/pipes.rst | 6 +++--- Doc/library/sqlite3.rst | 4 ++-- Doc/library/trace.rst | 4 ++-- Doc/library/zipimport.rst | 10 +++++----- Doc/tutorial/inputoutput.rst | 8 ++++---- Misc/ACKS | 1 + Misc/NEWS | 4 ++++ 14 files changed, 31 insertions(+), 26 deletions(-) diff --git a/Doc/install/index.rst b/Doc/install/index.rst --- a/Doc/install/index.rst +++ b/Doc/install/index.rst @@ -189,7 +189,7 @@ to keep the source tree pristine, you can change the build directory with the :option:`--build-base` option. 
For example:: - python setup.py build --build-base=/tmp/pybuild/foo-1.0 + python setup.py build --build-base=/path/to/pybuild/foo-1.0 (Or you could do this permanently with a directive in your system or personal Distutils configuration file; see section :ref:`inst-config-files`.) Normally, this diff --git a/Doc/library/atexit.rst b/Doc/library/atexit.rst --- a/Doc/library/atexit.rst +++ b/Doc/library/atexit.rst @@ -68,7 +68,7 @@ making an explicit call into this module at termination. :: try: - with open("/tmp/counter") as infile: + with open("counterfile") as infile: _count = int(infile.read()) except FileNotFoundError: _count = 0 @@ -78,7 +78,7 @@ _count = _count + n def savecounter(): - with open("/tmp/counter", "w") as outfile: + with open("counterfile", "w") as outfile: outfile.write("%d" % _count) import atexit diff --git a/Doc/library/cgi.rst b/Doc/library/cgi.rst --- a/Doc/library/cgi.rst +++ b/Doc/library/cgi.rst @@ -79,7 +79,7 @@ instead, with code like this:: import cgitb - cgitb.enable(display=0, logdir="/tmp") + cgitb.enable(display=0, logdir="/path/to/logdir") It's very helpful to use this feature during script development. 
The reports produced by :mod:`cgitb` provide information that can save you a lot of time in diff --git a/Doc/library/imghdr.rst b/Doc/library/imghdr.rst --- a/Doc/library/imghdr.rst +++ b/Doc/library/imghdr.rst @@ -65,6 +65,6 @@ Example:: >>> import imghdr - >>> imghdr.what('/tmp/bass.gif') + >>> imghdr.what('bass.gif') 'gif' diff --git a/Doc/library/mailcap.rst b/Doc/library/mailcap.rst --- a/Doc/library/mailcap.rst +++ b/Doc/library/mailcap.rst @@ -71,6 +71,6 @@ >>> import mailcap >>> d=mailcap.getcaps() - >>> mailcap.findmatch(d, 'video/mpeg', filename='/tmp/tmp1223') - ('xmpeg /tmp/tmp1223', {'view': 'xmpeg %s'}) + >>> mailcap.findmatch(d, 'video/mpeg', filename='tmp1223') + ('xmpeg tmp1223', {'view': 'xmpeg %s'}) diff --git a/Doc/library/nntplib.rst b/Doc/library/nntplib.rst --- a/Doc/library/nntplib.rst +++ b/Doc/library/nntplib.rst @@ -47,7 +47,7 @@ headers, and that you have right to post on the particular newsgroup):: >>> s = nntplib.NNTP('news.gmane.org') - >>> f = open('/tmp/article.txt', 'rb') + >>> f = open('article.txt', 'rb') >>> s.post(f) '240 Article posted successfully.' >>> s.quit() diff --git a/Doc/library/optparse.rst b/Doc/library/optparse.rst --- a/Doc/library/optparse.rst +++ b/Doc/library/optparse.rst @@ -171,10 +171,10 @@ For example, consider this hypothetical command-line:: - prog -v --report /tmp/report.txt foo bar + prog -v --report report.txt foo bar ``-v`` and ``--report`` are both options. Assuming that ``--report`` -takes one argument, ``/tmp/report.txt`` is an option argument. ``foo`` and +takes one argument, ``report.txt`` is an option argument. ``foo`` and ``bar`` are positional arguments. 
diff --git a/Doc/library/pipes.rst b/Doc/library/pipes.rst --- a/Doc/library/pipes.rst +++ b/Doc/library/pipes.rst @@ -26,12 +26,12 @@ Example:: >>> import pipes - >>> t=pipes.Template() + >>> t = pipes.Template() >>> t.append('tr a-z A-Z', '--') - >>> f=t.open('/tmp/1', 'w') + >>> f = t.open('pipefile', 'w') >>> f.write('hello world') >>> f.close() - >>> open('/tmp/1').read() + >>> open('pipefile').read() 'HELLO WORLD' diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -18,10 +18,10 @@ To use the module, you must first create a :class:`Connection` object that represents the database. Here the data will be stored in the -:file:`/tmp/example` file:: +:file:`example.db` file:: import sqlite3 - conn = sqlite3.connect('/tmp/example') + conn = sqlite3.connect('example.db') You can also supply the special name ``:memory:`` to create a database in RAM. diff --git a/Doc/library/trace.rst b/Doc/library/trace.rst --- a/Doc/library/trace.rst +++ b/Doc/library/trace.rst @@ -201,7 +201,7 @@ # run the new command using the given tracer tracer.run('main()') - # make a report, placing output in /tmp + # make a report, placing output in the current directory r = tracer.results() - r.write_results(show_missing=True, coverdir="/tmp") + r.write_results(show_missing=True, coverdir=".") diff --git a/Doc/library/zipimport.rst b/Doc/library/zipimport.rst --- a/Doc/library/zipimport.rst +++ b/Doc/library/zipimport.rst @@ -16,7 +16,7 @@ also allows an item of :data:`sys.path` to be a string naming a ZIP file archive. The ZIP archive can contain a subdirectory structure to support package imports, and a path within the archive can be specified to only import from a -subdirectory. For example, the path :file:`/tmp/example.zip/lib/` would only +subdirectory. For example, the path :file:`example.zip/lib/` would only import from the :file:`lib/` subdirectory within the archive. 
Any files may be present in the ZIP archive, but only files :file:`.py` and @@ -147,8 +147,8 @@ Here is an example that imports a module from a ZIP archive - note that the :mod:`zipimport` module is not explicitly used. :: - $ unzip -l /tmp/example.zip - Archive: /tmp/example.zip + $ unzip -l example.zip + Archive: example.zip Length Date Time Name -------- ---- ---- ---- 8467 11-26-02 22:30 jwzthreading.py @@ -157,8 +157,8 @@ $ ./python Python 2.3 (#1, Aug 1 2003, 19:54:32) >>> import sys - >>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path + >>> sys.path.insert(0, 'example.zip') # Add .zip file to front of path >>> import jwzthreading >>> jwzthreading.__file__ - '/tmp/example.zip/jwzthreading.py' + 'example.zip/jwzthreading.py' diff --git a/Doc/tutorial/inputoutput.rst b/Doc/tutorial/inputoutput.rst --- a/Doc/tutorial/inputoutput.rst +++ b/Doc/tutorial/inputoutput.rst @@ -234,12 +234,12 @@ :: - >>> f = open('/tmp/workfile', 'w') + >>> f = open('workfile', 'w') .. XXX str(f) is >>> print(f) - + The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file @@ -346,7 +346,7 @@ the reference point. *from_what* can be omitted and defaults to 0, using the beginning of the file as the reference point. :: - >>> f = open('/tmp/workfile', 'rb+') + >>> f = open('workfile', 'rb+') >>> f.write(b'0123456789abcdef') 16 >>> f.seek(5) # Go to the 6th byte in the file @@ -377,7 +377,7 @@ suite finishes, even if an exception is raised on the way. It is also much shorter than writing equivalent :keyword:`try`\ -\ :keyword:`finally` blocks:: - >>> with open('/tmp/workfile', 'r') as f: + >>> with open('workfile', 'r') as f: ... read_data = f.read() >>> f.closed True diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1289,6 +1289,7 @@ Sue Williams Steven Willis Frank Willison +Geoff Wilson Greg V. 
Wilson J Derek Wilson Paul Winkler diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -756,6 +756,10 @@ Documentation ------------- +- Issue #8890: Stop advertising an insecure practice by replacing uses + of the /tmp directory with better alternatives in the documentation. + Patch by Geoff Wilson. + - Issue #17203: add long option names to unittest discovery docs. - Issue #13094: add "Why do lambdas defined in a loop with different values -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:39:09 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:39:09 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=238890=3A_Stop_advertising_an_insecure_use_of_/tm?= =?utf-8?q?p_in_docs?= Message-ID: <3ZCyqx74WkzSkj@mail.python.org> http://hg.python.org/cpython/rev/6b0ca4cb7e4e changeset: 82355:6b0ca4cb7e4e parent: 82351:6342055ac220 parent: 82354:18e20e146396 user: Petri Lehtinen date: Sat Feb 23 19:37:01 2013 +0100 summary: Issue #8890: Stop advertising an insecure use of /tmp in docs files: Doc/install/index.rst | 2 +- Doc/library/atexit.rst | 4 ++-- Doc/library/cgi.rst | 2 +- Doc/library/imghdr.rst | 2 +- Doc/library/mailcap.rst | 4 ++-- Doc/library/nntplib.rst | 2 +- Doc/library/optparse.rst | 4 ++-- Doc/library/pipes.rst | 6 +++--- Doc/library/sqlite3.rst | 4 ++-- Doc/library/trace.rst | 4 ++-- Doc/library/zipimport.rst | 10 +++++----- Doc/tutorial/inputoutput.rst | 8 ++++---- Misc/ACKS | 1 + Misc/NEWS | 4 ++++ 14 files changed, 31 insertions(+), 26 deletions(-) diff --git a/Doc/install/index.rst b/Doc/install/index.rst --- a/Doc/install/index.rst +++ b/Doc/install/index.rst @@ -189,7 +189,7 @@ to keep the source tree pristine, you can change the build directory with the :option:`--build-base` option. 
For example:: - python setup.py build --build-base=/tmp/pybuild/foo-1.0 + python setup.py build --build-base=/path/to/pybuild/foo-1.0 (Or you could do this permanently with a directive in your system or personal Distutils configuration file; see section :ref:`inst-config-files`.) Normally, this diff --git a/Doc/library/atexit.rst b/Doc/library/atexit.rst --- a/Doc/library/atexit.rst +++ b/Doc/library/atexit.rst @@ -68,7 +68,7 @@ making an explicit call into this module at termination. :: try: - with open("/tmp/counter") as infile: + with open("counterfile") as infile: _count = int(infile.read()) except FileNotFoundError: _count = 0 @@ -78,7 +78,7 @@ _count = _count + n def savecounter(): - with open("/tmp/counter", "w") as outfile: + with open("counterfile", "w") as outfile: outfile.write("%d" % _count) import atexit diff --git a/Doc/library/cgi.rst b/Doc/library/cgi.rst --- a/Doc/library/cgi.rst +++ b/Doc/library/cgi.rst @@ -79,7 +79,7 @@ instead, with code like this:: import cgitb - cgitb.enable(display=0, logdir="/tmp") + cgitb.enable(display=0, logdir="/path/to/logdir") It's very helpful to use this feature during script development. 
The reports produced by :mod:`cgitb` provide information that can save you a lot of time in diff --git a/Doc/library/imghdr.rst b/Doc/library/imghdr.rst --- a/Doc/library/imghdr.rst +++ b/Doc/library/imghdr.rst @@ -65,6 +65,6 @@ Example:: >>> import imghdr - >>> imghdr.what('/tmp/bass.gif') + >>> imghdr.what('bass.gif') 'gif' diff --git a/Doc/library/mailcap.rst b/Doc/library/mailcap.rst --- a/Doc/library/mailcap.rst +++ b/Doc/library/mailcap.rst @@ -71,6 +71,6 @@ >>> import mailcap >>> d=mailcap.getcaps() - >>> mailcap.findmatch(d, 'video/mpeg', filename='/tmp/tmp1223') - ('xmpeg /tmp/tmp1223', {'view': 'xmpeg %s'}) + >>> mailcap.findmatch(d, 'video/mpeg', filename='tmp1223') + ('xmpeg tmp1223', {'view': 'xmpeg %s'}) diff --git a/Doc/library/nntplib.rst b/Doc/library/nntplib.rst --- a/Doc/library/nntplib.rst +++ b/Doc/library/nntplib.rst @@ -46,7 +46,7 @@ headers, and that you have right to post on the particular newsgroup):: >>> s = nntplib.NNTP('news.gmane.org') - >>> f = open('/tmp/article.txt', 'rb') + >>> f = open('article.txt', 'rb') >>> s.post(f) '240 Article posted successfully.' >>> s.quit() diff --git a/Doc/library/optparse.rst b/Doc/library/optparse.rst --- a/Doc/library/optparse.rst +++ b/Doc/library/optparse.rst @@ -171,10 +171,10 @@ For example, consider this hypothetical command-line:: - prog -v --report /tmp/report.txt foo bar + prog -v --report report.txt foo bar ``-v`` and ``--report`` are both options. Assuming that ``--report`` -takes one argument, ``/tmp/report.txt`` is an option argument. ``foo`` and +takes one argument, ``report.txt`` is an option argument. ``foo`` and ``bar`` are positional arguments. 
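[Editor's aside: the hunks above remove hard-coded ``/tmp`` paths from the examples; when a program genuinely needs a scratch file, the stdlib ``tempfile`` module is the secure alternative the issue alludes to. A minimal sketch, not part of the patch:]

```python
import os
import tempfile

# Instead of a predictable, world-writable path such as '/tmp/workfile',
# let tempfile pick a unique name, created with safe permissions.
with tempfile.NamedTemporaryFile(mode='w+', suffix='.txt', delete=False) as f:
    f.write('hello world')
    path = f.name  # unique path chosen by the OS, not guessable in advance

with open(path) as f:
    data = f.read()

os.remove(path)  # clean up: delete=False leaves the file behind
```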
diff --git a/Doc/library/pipes.rst b/Doc/library/pipes.rst --- a/Doc/library/pipes.rst +++ b/Doc/library/pipes.rst @@ -26,12 +26,12 @@ Example:: >>> import pipes - >>> t=pipes.Template() + >>> t = pipes.Template() >>> t.append('tr a-z A-Z', '--') - >>> f=t.open('/tmp/1', 'w') + >>> f = t.open('pipefile', 'w') >>> f.write('hello world') >>> f.close() - >>> open('/tmp/1').read() + >>> open('pipefile').read() 'HELLO WORLD' diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -18,10 +18,10 @@ To use the module, you must first create a :class:`Connection` object that represents the database. Here the data will be stored in the -:file:`/tmp/example` file:: +:file:`example.db` file:: import sqlite3 - conn = sqlite3.connect('/tmp/example') + conn = sqlite3.connect('example.db') You can also supply the special name ``:memory:`` to create a database in RAM. diff --git a/Doc/library/trace.rst b/Doc/library/trace.rst --- a/Doc/library/trace.rst +++ b/Doc/library/trace.rst @@ -201,7 +201,7 @@ # run the new command using the given tracer tracer.run('main()') - # make a report, placing output in /tmp + # make a report, placing output in the current directory r = tracer.results() - r.write_results(show_missing=True, coverdir="/tmp") + r.write_results(show_missing=True, coverdir=".") diff --git a/Doc/library/zipimport.rst b/Doc/library/zipimport.rst --- a/Doc/library/zipimport.rst +++ b/Doc/library/zipimport.rst @@ -16,7 +16,7 @@ also allows an item of :data:`sys.path` to be a string naming a ZIP file archive. The ZIP archive can contain a subdirectory structure to support package imports, and a path within the archive can be specified to only import from a -subdirectory. For example, the path :file:`/tmp/example.zip/lib/` would only +subdirectory. For example, the path :file:`example.zip/lib/` would only import from the :file:`lib/` subdirectory within the archive. 
Any files may be present in the ZIP archive, but only files :file:`.py` and @@ -147,8 +147,8 @@ Here is an example that imports a module from a ZIP archive - note that the :mod:`zipimport` module is not explicitly used. :: - $ unzip -l /tmp/example.zip - Archive: /tmp/example.zip + $ unzip -l example.zip + Archive: example.zip Length Date Time Name -------- ---- ---- ---- 8467 11-26-02 22:30 jwzthreading.py @@ -157,8 +157,8 @@ $ ./python Python 2.3 (#1, Aug 1 2003, 19:54:32) >>> import sys - >>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path + >>> sys.path.insert(0, 'example.zip') # Add .zip file to front of path >>> import jwzthreading >>> jwzthreading.__file__ - '/tmp/example.zip/jwzthreading.py' + 'example.zip/jwzthreading.py' diff --git a/Doc/tutorial/inputoutput.rst b/Doc/tutorial/inputoutput.rst --- a/Doc/tutorial/inputoutput.rst +++ b/Doc/tutorial/inputoutput.rst @@ -234,12 +234,12 @@ :: - >>> f = open('/tmp/workfile', 'w') + >>> f = open('workfile', 'w') .. XXX str(f) is >>> print(f) - + The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file @@ -346,7 +346,7 @@ the reference point. *from_what* can be omitted and defaults to 0, using the beginning of the file as the reference point. :: - >>> f = open('/tmp/workfile', 'rb+') + >>> f = open('workfile', 'rb+') >>> f.write(b'0123456789abcdef') 16 >>> f.seek(5) # Go to the 6th byte in the file @@ -377,7 +377,7 @@ suite finishes, even if an exception is raised on the way. It is also much shorter than writing equivalent :keyword:`try`\ -\ :keyword:`finally` blocks:: - >>> with open('/tmp/workfile', 'r') as f: + >>> with open('workfile', 'r') as f: ... read_data = f.read() >>> f.closed True diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1302,6 +1302,7 @@ Sue Williams Steven Willis Frank Willison +Geoff Wilson Greg V. 
Wilson J Derek Wilson Paul Winkler diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1012,6 +1012,10 @@ Documentation ------------- +- Issue #8890: Stop advertising an insecure practice by replacing uses + of the /tmp directory with better alternatives in the documentation. + Patch by Geoff Wilson. + - Issue #17203: add long option names to unittest discovery docs. - Issue #13094: add "Why do lambdas defined in a loop with different values -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:58:20 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:58:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2Njk1?= =?utf-8?q?=3A_Document_how_glob_handles_filenames_starting_with_a_dot?= Message-ID: <3ZCzG46NYFzSnj@mail.python.org> http://hg.python.org/cpython/rev/2b96dcdac419 changeset: 82356:2b96dcdac419 branch: 2.7 parent: 82352:488957f9b664 user: Petri Lehtinen date: Sat Feb 23 19:53:03 2013 +0100 summary: Issue #16695: Document how glob handles filenames starting with a dot files: Doc/library/glob.rst | 15 +++++++++++++-- Lib/glob.py | 10 ++++++++-- Misc/NEWS | 3 +++ 3 files changed, 24 insertions(+), 4 deletions(-) diff --git a/Doc/library/glob.rst b/Doc/library/glob.rst --- a/Doc/library/glob.rst +++ b/Doc/library/glob.rst @@ -16,8 +16,10 @@ ``*``, ``?``, and character ranges expressed with ``[]`` will be correctly matched. This is done by using the :func:`os.listdir` and :func:`fnmatch.fnmatch` functions in concert, and not by actually invoking a -subshell. (For tilde and shell variable expansion, use -:func:`os.path.expanduser` and :func:`os.path.expandvars`.) +subshell. Note that unlike :func:`fnmatch.fnmatch`, :mod:`glob` treats +filenames beginning with a dot (``.``) as special cases. (For tilde and shell +variable expansion, use :func:`os.path.expanduser` and +:func:`os.path.expandvars`.) 
For a literal match, wrap the meta-characters in brackets. For example, ``'[?]'`` matches the character ``'?'``. @@ -52,6 +54,15 @@ >>> glob.glob('?.gif') ['1.gif'] +If the directory contains files starting with ``.`` they won't be matched by +default. For example, consider a directory containing :file:`card.gif` and +:file:`.card.gif`:: + + >>> import glob + >>> glob.glob('*.gif') + ['card.gif'] + >>> glob.glob('.c*') + ['.card.gif'] .. seealso:: diff --git a/Lib/glob.py b/Lib/glob.py --- a/Lib/glob.py +++ b/Lib/glob.py @@ -18,7 +18,10 @@ def glob(pathname): """Return a list of paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ return list(iglob(pathname)) @@ -26,7 +29,10 @@ def iglob(pathname): """Return an iterator which yields the paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ if not has_magic(pathname): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -926,6 +926,9 @@ Documentation ------------- +- Issue #16695: Document how glob handles filenames starting with a + dot. Initial patch by Jyrki Pulliainen. + - Issue #8890: Stop advertising an insecure practice by replacing uses of the /tmp directory with better alternatives in the documentation. Patch by Geoff Wilson. 
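[Editor's aside: the glob behaviour documented above can be reproduced directly; a small self-contained demo, using a throwaway directory rather than the ``card.gif`` files assumed in the docs:]

```python
import glob
import os
import tempfile

# '*' and '?' do not match filenames starting with a dot, but a pattern
# with an explicit leading dot does.
workdir = tempfile.mkdtemp()
for name in ('card.gif', '.card.gif'):
    open(os.path.join(workdir, name), 'w').close()

os.chdir(workdir)
visible = glob.glob('*.gif')  # the dot file is skipped
hidden = glob.glob('.c*')     # the explicit leading dot matches it
```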
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:58:22 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:58:22 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2Njk1?= =?utf-8?q?=3A_Document_how_glob_handles_filenames_starting_with_a_dot?= Message-ID: <3ZCzG62gBTzSqB@mail.python.org> http://hg.python.org/cpython/rev/b4434cbca953 changeset: 82357:b4434cbca953 branch: 3.2 parent: 82353:7556601180c8 user: Petri Lehtinen date: Sat Feb 23 19:53:03 2013 +0100 summary: Issue #16695: Document how glob handles filenames starting with a dot files: Doc/library/glob.rst | 15 +++++++++++++-- Lib/glob.py | 10 ++++++++-- Misc/NEWS | 3 +++ 3 files changed, 24 insertions(+), 4 deletions(-) diff --git a/Doc/library/glob.rst b/Doc/library/glob.rst --- a/Doc/library/glob.rst +++ b/Doc/library/glob.rst @@ -16,8 +16,10 @@ ``*``, ``?``, and character ranges expressed with ``[]`` will be correctly matched. This is done by using the :func:`os.listdir` and :func:`fnmatch.fnmatch` functions in concert, and not by actually invoking a -subshell. (For tilde and shell variable expansion, use -:func:`os.path.expanduser` and :func:`os.path.expandvars`.) +subshell. Note that unlike :func:`fnmatch.fnmatch`, :mod:`glob` treats +filenames beginning with a dot (``.``) as special cases. (For tilde and shell +variable expansion, use :func:`os.path.expanduser` and +:func:`os.path.expandvars`.) For a literal match, wrap the meta-characters in brackets. For example, ``'[?]'`` matches the character ``'?'``. @@ -51,6 +53,15 @@ >>> glob.glob('?.gif') ['1.gif'] +If the directory contains files starting with ``.`` they won't be matched by +default. For example, consider a directory containing :file:`card.gif` and +:file:`.card.gif`:: + + >>> import glob + >>> glob.glob('*.gif') + ['card.gif'] + >>> glob.glob('.c*') + ['.card.gif'] .. 
seealso:: diff --git a/Lib/glob.py b/Lib/glob.py --- a/Lib/glob.py +++ b/Lib/glob.py @@ -9,7 +9,10 @@ def glob(pathname): """Return a list of paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ return list(iglob(pathname)) @@ -17,7 +20,10 @@ def iglob(pathname): """Return an iterator which yields the paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ if not has_magic(pathname): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1078,6 +1078,9 @@ Documentation ------------- +- Issue #16695: Document how glob handles filenames starting with a + dot. Initial patch by Jyrki Pulliainen. + - Issue #8890: Stop advertising an insecure practice by replacing uses of the /tmp directory with better alternatives in the documentation. Patch by Geoff Wilson. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:58:23 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:58:23 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2316695=3A_Document_how_glob_handles_filenames_starting?= =?utf-8?q?_with_a_dot?= Message-ID: <3ZCzG75dxBzSqC@mail.python.org> http://hg.python.org/cpython/rev/3e8b29512b2e changeset: 82358:3e8b29512b2e branch: 3.3 parent: 82354:18e20e146396 parent: 82357:b4434cbca953 user: Petri Lehtinen date: Sat Feb 23 19:55:36 2013 +0100 summary: Issue #16695: Document how glob handles filenames starting with a dot files: Doc/library/glob.rst | 15 +++++++++++++-- Lib/glob.py | 10 ++++++++-- Misc/NEWS | 3 +++ 3 files changed, 24 insertions(+), 4 deletions(-) diff --git a/Doc/library/glob.rst b/Doc/library/glob.rst --- a/Doc/library/glob.rst +++ b/Doc/library/glob.rst @@ -16,8 +16,10 @@ ``*``, ``?``, and character ranges expressed with ``[]`` will be correctly matched. This is done by using the :func:`os.listdir` and :func:`fnmatch.fnmatch` functions in concert, and not by actually invoking a -subshell. (For tilde and shell variable expansion, use -:func:`os.path.expanduser` and :func:`os.path.expandvars`.) +subshell. Note that unlike :func:`fnmatch.fnmatch`, :mod:`glob` treats +filenames beginning with a dot (``.``) as special cases. (For tilde and shell +variable expansion, use :func:`os.path.expanduser` and +:func:`os.path.expandvars`.) For a literal match, wrap the meta-characters in brackets. For example, ``'[?]'`` matches the character ``'?'``. @@ -51,6 +53,15 @@ >>> glob.glob('?.gif') ['1.gif'] +If the directory contains files starting with ``.`` they won't be matched by +default. For example, consider a directory containing :file:`card.gif` and +:file:`.card.gif`:: + + >>> import glob + >>> glob.glob('*.gif') + ['card.gif'] + >>> glob.glob('.c*') + ['.card.gif'] .. 
seealso:: diff --git a/Lib/glob.py b/Lib/glob.py --- a/Lib/glob.py +++ b/Lib/glob.py @@ -9,7 +9,10 @@ def glob(pathname): """Return a list of paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ return list(iglob(pathname)) @@ -17,7 +20,10 @@ def iglob(pathname): """Return an iterator which yields the paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ if not has_magic(pathname): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -756,6 +756,9 @@ Documentation ------------- +- Issue #16695: Document how glob handles filenames starting with a + dot. Initial patch by Jyrki Pulliainen. + - Issue #8890: Stop advertising an insecure practice by replacing uses of the /tmp directory with better alternatives in the documentation. Patch by Geoff Wilson. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 19:58:25 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 19:58:25 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316695=3A_Document_how_glob_handles_filenames_st?= =?utf-8?q?arting_with_a_dot?= Message-ID: <3ZCzG91GPrzSp9@mail.python.org> http://hg.python.org/cpython/rev/3fd9970d9e65 changeset: 82359:3fd9970d9e65 parent: 82355:6b0ca4cb7e4e parent: 82358:3e8b29512b2e user: Petri Lehtinen date: Sat Feb 23 19:56:15 2013 +0100 summary: Issue #16695: Document how glob handles filenames starting with a dot files: Doc/library/glob.rst | 15 +++++++++++++-- Lib/glob.py | 10 ++++++++-- Misc/NEWS | 3 +++ 3 files changed, 24 insertions(+), 4 deletions(-) diff --git a/Doc/library/glob.rst b/Doc/library/glob.rst --- a/Doc/library/glob.rst +++ b/Doc/library/glob.rst @@ -16,8 +16,10 @@ ``*``, ``?``, and character ranges expressed with ``[]`` will be correctly matched. This is done by using the :func:`os.listdir` and :func:`fnmatch.fnmatch` functions in concert, and not by actually invoking a -subshell. (For tilde and shell variable expansion, use -:func:`os.path.expanduser` and :func:`os.path.expandvars`.) +subshell. Note that unlike :func:`fnmatch.fnmatch`, :mod:`glob` treats +filenames beginning with a dot (``.``) as special cases. (For tilde and shell +variable expansion, use :func:`os.path.expanduser` and +:func:`os.path.expandvars`.) For a literal match, wrap the meta-characters in brackets. For example, ``'[?]'`` matches the character ``'?'``. @@ -51,6 +53,15 @@ >>> glob.glob('?.gif') ['1.gif'] +If the directory contains files starting with ``.`` they won't be matched by +default. For example, consider a directory containing :file:`card.gif` and +:file:`.card.gif`:: + + >>> import glob + >>> glob.glob('*.gif') + ['card.gif'] + >>> glob.glob('.c*') + ['.card.gif'] .. 
seealso:: diff --git a/Lib/glob.py b/Lib/glob.py --- a/Lib/glob.py +++ b/Lib/glob.py @@ -9,7 +9,10 @@ def glob(pathname): """Return a list of paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ return list(iglob(pathname)) @@ -17,7 +20,10 @@ def iglob(pathname): """Return an iterator which yields the paths matching a pathname pattern. - The pattern may contain simple shell-style wildcards a la fnmatch. + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. """ if not has_magic(pathname): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1012,6 +1012,9 @@ Documentation ------------- +- Issue #16695: Document how glob handles filenames starting with a + dot. Initial patch by Jyrki Pulliainen. + - Issue #8890: Stop advertising an insecure practice by replacing uses of the /tmp directory with better alternatives in the documentation. Patch by Geoff Wilson. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 21:12:04 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 21:12:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2NDAz?= =?utf-8?q?=3A_Document_how_distutils_uses_the_maintainer_field_in_PKG-INF?= =?utf-8?q?O?= Message-ID: <3ZD0v81d6jzSnS@mail.python.org> http://hg.python.org/cpython/rev/14144373fdcd changeset: 82360:14144373fdcd branch: 2.7 parent: 82356:2b96dcdac419 user: Petri Lehtinen date: Sat Feb 23 21:05:27 2013 +0100 summary: Issue #16403: Document how distutils uses the maintainer field in PKG-INFO files: Doc/distutils/apiref.rst | 5 ++++- Doc/distutils/setupscript.rst | 3 ++- Misc/NEWS | 3 +++ 3 files changed, 9 insertions(+), 2 deletions(-) diff --git a/Doc/distutils/apiref.rst b/Doc/distutils/apiref.rst --- a/Doc/distutils/apiref.rst +++ b/Doc/distutils/apiref.rst @@ -48,7 +48,10 @@ +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer* | The name of the current | a string | | | maintainer, if different from | | - | | the author | | + | | the author. Note that if | | + | | the maintainer is provided, | | + | | distutils will use it as the | | + | | author in :file:`PKG-INFO` | | +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer_email* | The email address of the | a string | | | current maintainer, if | | diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -598,7 +598,8 @@ It is recommended that versions take the form *major.minor[.patch[.sub]]*. (3) - Either the author or the maintainer must be identified. + Either the author or the maintainer must be identified. 
If maintainer is + provided, distutils lists it as the author in :file:`PKG-INFO`. (4) These fields should not be used if your package is to be compatible with Python diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -926,6 +926,9 @@ Documentation ------------- +- Issue #16403: Document how distutils uses the maintainer field in + PKG-INFO. Patch by Jyrki Pulliainen. + - Issue #16695: Document how glob handles filenames starting with a dot. Initial patch by Jyrki Pulliainen. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 21:12:05 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 21:12:05 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2NDAz?= =?utf-8?q?=3A_Document_how_distutils_uses_the_maintainer_field_in_PKG-INF?= =?utf-8?q?O?= Message-ID: <3ZD0v94NdDzSnW@mail.python.org> http://hg.python.org/cpython/rev/b65b6a0ebd44 changeset: 82361:b65b6a0ebd44 branch: 3.2 parent: 82357:b4434cbca953 user: Petri Lehtinen date: Sat Feb 23 21:05:27 2013 +0100 summary: Issue #16403: Document how distutils uses the maintainer field in PKG-INFO files: Doc/distutils/apiref.rst | 5 ++++- Doc/distutils/setupscript.rst | 3 ++- Misc/NEWS | 3 +++ 3 files changed, 9 insertions(+), 2 deletions(-) diff --git a/Doc/distutils/apiref.rst b/Doc/distutils/apiref.rst --- a/Doc/distutils/apiref.rst +++ b/Doc/distutils/apiref.rst @@ -48,7 +48,10 @@ +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer* | The name of the current | a string | | | maintainer, if different from | | - | | the author | | + | | the author. 
Note that if | | + | | the maintainer is provided, | | + | | distutils will use it as the | | + | | author in :file:`PKG-INFO` | | +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer_email* | The email address of the | a string | | | current maintainer, if | | diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -601,7 +601,8 @@ It is recommended that versions take the form *major.minor[.patch[.sub]]*. (3) - Either the author or the maintainer must be identified. + Either the author or the maintainer must be identified. If maintainer is + provided, distutils lists it as the author in :file:`PKG-INFO`. (4) These fields should not be used if your package is to be compatible with Python diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1078,6 +1078,9 @@ Documentation ------------- +- Issue #16403: Document how distutils uses the maintainer field in + PKG-INFO. Patch by Jyrki Pulliainen. + - Issue #16695: Document how glob handles filenames starting with a dot. Initial patch by Jyrki Pulliainen. 
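The rule being documented — a provided maintainer is written under the ``Author:`` header of :file:`PKG-INFO`, since the metadata format distutils emits has no separate maintainer header — can be sketched as a simplified stand-in. ``format_contact_headers`` is a hypothetical helper for illustration, not a distutils API:

```python
def format_contact_headers(author, author_email,
                           maintainer=None, maintainer_email=None):
    """Render the contact-related PKG-INFO headers.

    Simplified sketch of the documented precedence: when a maintainer
    is provided, it is listed as the author.  The real distutils code
    writes many more headers than these two.
    """
    return "\n".join([
        "Author: %s" % (maintainer or author),
        "Author-email: %s" % (maintainer_email or author_email),
    ])


# Without a maintainer, the author is listed as-is.
print(format_contact_headers("Alice", "alice@example.com"))
# With a maintainer, the maintainer takes the Author slot.
print(format_contact_headers("Alice", "alice@example.com",
                             "Bob", "bob@example.com"))
```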
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 21:12:07 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 21:12:07 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2316403=3A_Document_how_distutils_uses_the_maintainer_f?= =?utf-8?q?ield_in_PKG-INFO?= Message-ID: <3ZD0vC05mmzSpP@mail.python.org> http://hg.python.org/cpython/rev/af4c08b10702 changeset: 82362:af4c08b10702 branch: 3.3 parent: 82358:3e8b29512b2e parent: 82361:b65b6a0ebd44 user: Petri Lehtinen date: Sat Feb 23 21:09:12 2013 +0100 summary: Issue #16403: Document how distutils uses the maintainer field in PKG-INFO files: Doc/distutils/apiref.rst | 5 ++++- Doc/distutils/setupscript.rst | 3 ++- Misc/NEWS | 3 +++ 3 files changed, 9 insertions(+), 2 deletions(-) diff --git a/Doc/distutils/apiref.rst b/Doc/distutils/apiref.rst --- a/Doc/distutils/apiref.rst +++ b/Doc/distutils/apiref.rst @@ -48,7 +48,10 @@ +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer* | The name of the current | a string | | | maintainer, if different from | | - | | the author | | + | | the author. Note that if | | + | | the maintainer is provided, | | + | | distutils will use it as the | | + | | author in :file:`PKG-INFO` | | +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer_email* | The email address of the | a string | | | current maintainer, if | | diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -601,7 +601,8 @@ It is recommended that versions take the form *major.minor[.patch[.sub]]*. (3) - Either the author or the maintainer must be identified. + Either the author or the maintainer must be identified. 
If maintainer is + provided, distutils lists it as the author in :file:`PKG-INFO`. (4) These fields should not be used if your package is to be compatible with Python diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -756,6 +756,9 @@ Documentation ------------- +- Issue #16403: Document how distutils uses the maintainer field in + PKG-INFO. Patch by Jyrki Pulliainen. + - Issue #16695: Document how glob handles filenames starting with a dot. Initial patch by Jyrki Pulliainen. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 21:12:08 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 21:12:08 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316403=3A_Document_how_distutils_uses_the_mainta?= =?utf-8?q?iner_field_in_PKG-INFO?= Message-ID: <3ZD0vD35rYzSqP@mail.python.org> http://hg.python.org/cpython/rev/9de4602a80b9 changeset: 82363:9de4602a80b9 parent: 82359:3fd9970d9e65 parent: 82362:af4c08b10702 user: Petri Lehtinen date: Sat Feb 23 21:10:18 2013 +0100 summary: Issue #16403: Document how distutils uses the maintainer field in PKG-INFO files: Doc/distutils/apiref.rst | 5 ++++- Doc/distutils/setupscript.rst | 3 ++- Misc/NEWS | 3 +++ 3 files changed, 9 insertions(+), 2 deletions(-) diff --git a/Doc/distutils/apiref.rst b/Doc/distutils/apiref.rst --- a/Doc/distutils/apiref.rst +++ b/Doc/distutils/apiref.rst @@ -48,7 +48,10 @@ +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer* | The name of the current | a string | | | maintainer, if different from | | - | | the author | | + | | the author. 
Note that if | | + | | the maintainer is provided, | | + | | distutils will use it as the | | + | | author in :file:`PKG-INFO` | | +--------------------+--------------------------------+-------------------------------------------------------------+ | *maintainer_email* | The email address of the | a string | | | current maintainer, if | | diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -601,7 +601,8 @@ It is recommended that versions take the form *major.minor[.patch[.sub]]*. (3) - Either the author or the maintainer must be identified. + Either the author or the maintainer must be identified. If maintainer is + provided, distutils lists it as the author in :file:`PKG-INFO`. (4) These fields should not be used if your package is to be compatible with Python diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1012,6 +1012,9 @@ Documentation ------------- +- Issue #16403: Document how distutils uses the maintainer field in + PKG-INFO. Patch by Jyrki Pulliainen. + - Issue #16695: Document how glob handles filenames starting with a dot. Initial patch by Jyrki Pulliainen. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 22:15:43 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 22:15:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2MTIx?= =?utf-8?q?=3A_Fix_line_number_accounting_in_shlex?= Message-ID: <3ZD2Jb4Vd1zNRp@mail.python.org> http://hg.python.org/cpython/rev/e54ee8d2c16b changeset: 82364:e54ee8d2c16b branch: 2.7 parent: 82360:14144373fdcd user: Petri Lehtinen date: Sat Feb 23 22:07:39 2013 +0100 summary: Issue #16121: Fix line number accounting in shlex files: Lib/shlex.py | 16 +++++++++++++++- Lib/test/test_shlex.py | 9 +++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 28 insertions(+), 1 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -48,6 +48,7 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 + self._lines_found = 0 self.debug = 0 self.token = '' self.filestack = deque() @@ -118,12 +119,23 @@ return raw def read_token(self): + if self._lines_found: + self.lineno += self._lines_found + self._lines_found = 0 + + i = 0 quoted = False escapedstate = ' ' while True: + i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - self.lineno = self.lineno + 1 + # In case newline is the first character increment lineno + if i == 1: + self.lineno += 1 + else: + self._lines_found += 1 + if self.debug >= 3: print "shlex: in state", repr(self.state), \ "I see character:", repr(nextchar) @@ -143,6 +155,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -210,6 +223,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 if self.posix: self.state = ' ' diff --git 
a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -178,6 +178,15 @@ "%s: %s != %s" % (self.data[i][0], l, self.data[i][1:])) + def testLineNumbers(self): + data = '"a \n b \n c"\n"x"\n"y"' + for is_posix in (True, False): + s = shlex.shlex(data, posix=is_posix) + for i in (1, 4, 5): + s.read_token() + self.assertEqual(s.lineno, i) + + # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): for methname in dir(ShlexTest): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -717,6 +717,7 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic +Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -208,6 +208,9 @@ Library ------- +- Issue #16121: Fix line number accounting in shlex. Patch by Birk + Nilson. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 22:15:45 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 22:15:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2MTIx?= =?utf-8?q?=3A_Fix_line_number_accounting_in_shlex?= Message-ID: <3ZD2Jd0P6CzSh9@mail.python.org> http://hg.python.org/cpython/rev/f1d19fdb254f changeset: 82365:f1d19fdb254f branch: 3.2 parent: 82361:b65b6a0ebd44 user: Petri Lehtinen date: Sat Feb 23 22:07:39 2013 +0100 summary: Issue #16121: Fix line number accounting in shlex files: Lib/shlex.py | 16 +++++++++++++++- Lib/test/test_shlex.py | 9 +++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 28 insertions(+), 1 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -44,6 +44,7 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 + self._lines_found = 0 self.debug = 0 self.token = '' self.filestack = deque() @@ -114,12 +115,23 
@@ return raw def read_token(self): + if self._lines_found: + self.lineno += self._lines_found + self._lines_found = 0 + + i = 0 quoted = False escapedstate = ' ' while True: + i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - self.lineno = self.lineno + 1 + # In case newline is the first character increment lineno + if i == 1: + self.lineno += 1 + else: + self._lines_found += 1 + if self.debug >= 3: print("shlex: in state", repr(self.state), \ "I see character:", repr(nextchar)) @@ -139,6 +151,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -206,6 +219,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 if self.posix: self.state = ' ' diff --git a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -173,6 +173,15 @@ "%s: %s != %s" % (self.data[i][0], l, self.data[i][1:])) + def testLineNumbers(self): + data = '"a \n b \n c"\n"x"\n"y"' + for is_posix in (True, False): + s = shlex.shlex(data, posix=is_posix) + for i in (1, 4, 5): + s.read_token() + self.assertEqual(s.lineno, i) + + # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): for methname in dir(ShlexTest): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -774,6 +774,7 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic +Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -227,6 +227,9 @@ Library ------- +- Issue #16121: Fix line number accounting in shlex. Patch by Birk + Nilson. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. 
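The bookkeeping idea in the patch above is that newlines consumed while finishing a token (for example inside a multi-line quoted string) are tallied in ``_lines_found`` and only folded into ``lineno`` at the start of the *next* ``read_token()`` call, so ``lineno`` still refers to the line on which the token just returned began. A standalone sketch of that technique — a hypothetical tokenizer over whitespace-separated words, not the shlex implementation itself:

```python
class LineCountingTokenizer:
    """Minimal sketch of deferred line-number accounting.

    A newline seen before a token starts is counted immediately; a
    newline that terminates a token is deferred, so lineno reports the
    line where the returned token began.
    """

    def __init__(self, text):
        self.chars = iter(text)
        self.lineno = 1
        self._lines_found = 0

    def read_token(self):
        # Fold in newlines consumed while finishing the previous token.
        self.lineno += self._lines_found
        self._lines_found = 0
        token = ""
        for ch in self.chars:
            if ch == "\n":
                if not token:
                    self.lineno += 1       # before the token: count now
                    continue
                self._lines_found += 1     # ends the token: count later
                return token
            if ch.isspace():
                if token:
                    return token
                continue
            token += ch
        return token or None


tok = LineCountingTokenizer("a b\nc\n\nd")
while True:
    t = tok.read_token()
    if t is None:
        break
    print(t, tok.lineno)   # 'b' reports line 1 even though its newline
                           # was already consumed
```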
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 22:15:46 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 22:15:46 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2316121=3A_Fix_line_number_accounting_in_shlex?= Message-ID: <3ZD2Jf38S6zQ97@mail.python.org> http://hg.python.org/cpython/rev/560e53fcf2b0 changeset: 82366:560e53fcf2b0 branch: 3.3 parent: 82362:af4c08b10702 parent: 82365:f1d19fdb254f user: Petri Lehtinen date: Sat Feb 23 22:09:51 2013 +0100 summary: Issue #16121: Fix line number accounting in shlex files: Lib/shlex.py | 16 +++++++++++++++- Lib/test/test_shlex.py | 8 ++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 27 insertions(+), 1 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -45,6 +45,7 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 + self._lines_found = 0 self.debug = 0 self.token = '' self.filestack = deque() @@ -115,12 +116,23 @@ return raw def read_token(self): + if self._lines_found: + self.lineno += self._lines_found + self._lines_found = 0 + + i = 0 quoted = False escapedstate = ' ' while True: + i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - self.lineno = self.lineno + 1 + # In case newline is the first character increment lineno + if i == 1: + self.lineno += 1 + else: + self._lines_found += 1 + if self.debug >= 3: print("shlex: in state", repr(self.state), \ "I see character:", repr(nextchar)) @@ -140,6 +152,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -207,6 +220,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 if self.posix: 
self.state = ' ' diff --git a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -189,6 +189,14 @@ self.assertEqual(shlex.quote("test%s'name'" % u), "'test%s'\"'\"'name'\"'\"''" % u) + def testLineNumbers(self): + data = '"a \n b \n c"\n"x"\n"y"' + for is_posix in (True, False): + s = shlex.shlex(data, posix=is_posix) + for i in (1, 4, 5): + s.read_token() + self.assertEqual(s.lineno, i) + # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -855,6 +855,7 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic +Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -181,6 +181,9 @@ Library ------- +- Issue #16121: Fix line number accounting in shlex. Patch by Birk + Nilson. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 22:15:47 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 22:15:47 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316121=3A_Fix_line_number_accounting_in_shlex?= Message-ID: <3ZD2Jg6GG4zSpY@mail.python.org> http://hg.python.org/cpython/rev/f48c3c7a3205 changeset: 82367:f48c3c7a3205 parent: 82363:9de4602a80b9 parent: 82366:560e53fcf2b0 user: Petri Lehtinen date: Sat Feb 23 22:11:06 2013 +0100 summary: Issue #16121: Fix line number accounting in shlex files: Lib/shlex.py | 16 +++++++++++++++- Lib/test/test_shlex.py | 8 ++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ 4 files changed, 27 insertions(+), 1 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -45,6 +45,7 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 + self._lines_found = 0 self.debug = 0 
self.token = '' self.filestack = deque() @@ -115,12 +116,23 @@ return raw def read_token(self): + if self._lines_found: + self.lineno += self._lines_found + self._lines_found = 0 + + i = 0 quoted = False escapedstate = ' ' while True: + i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - self.lineno = self.lineno + 1 + # In case newline is the first character increment lineno + if i == 1: + self.lineno += 1 + else: + self._lines_found += 1 + if self.debug >= 3: print("shlex: in state", repr(self.state), \ "I see character:", repr(nextchar)) @@ -140,6 +152,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -207,6 +220,7 @@ continue elif nextchar in self.commenters: self.instream.readline() + # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 if self.posix: self.state = ' ' diff --git a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -189,6 +189,14 @@ self.assertEqual(shlex.quote("test%s'name'" % u), "'test%s'\"'\"'name'\"'\"''" % u) + def testLineNumbers(self): + data = '"a \n b \n c"\n"x"\n"y"' + for is_posix in (True, False): + s = shlex.shlex(data, posix=is_posix) + for i in (1, 4, 5): + s.read_token() + self.assertEqual(s.lineno, i) + # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -862,6 +862,7 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic +Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -260,6 +260,9 @@ Library ------- +- Issue #16121: Fix line number accounting in shlex. Patch by Birk + Nilson. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 23:15:12 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 23:15:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogUmV2ZXJ0ICJJc3N1?= =?utf-8?q?e_=2316121=3A_Fix_line_number_accounting_in_shlex=22?= Message-ID: <3ZD3dD2rpdzNgy@mail.python.org> http://hg.python.org/cpython/rev/34f759fa5484 changeset: 82368:34f759fa5484 branch: 2.7 parent: 82364:e54ee8d2c16b user: Petri Lehtinen date: Sat Feb 23 23:02:55 2013 +0100 summary: Revert "Issue #16121: Fix line number accounting in shlex" files: Lib/shlex.py | 16 +--------------- Lib/test/test_shlex.py | 9 --------- Misc/ACKS | 1 - Misc/NEWS | 3 --- 4 files changed, 1 insertions(+), 28 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -48,7 +48,6 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 - self._lines_found = 0 self.debug = 0 self.token = '' self.filestack = deque() @@ -119,23 +118,12 @@ return raw def read_token(self): - if self._lines_found: - self.lineno += self._lines_found - self._lines_found = 0 - - i = 0 quoted = False escapedstate = ' ' while True: - i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - # In case newline is the first character increment lineno - if i == 1: - self.lineno += 1 - else: - self._lines_found += 1 - + self.lineno = self.lineno + 1 if self.debug >= 3: print "shlex: in state", repr(self.state), \ "I see character:", repr(nextchar) @@ -155,7 +143,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -223,7 +210,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 if self.posix: self.state = ' ' diff 
--git a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -178,15 +178,6 @@ "%s: %s != %s" % (self.data[i][0], l, self.data[i][1:])) - def testLineNumbers(self): - data = '"a \n b \n c"\n"x"\n"y"' - for is_posix in (True, False): - s = shlex.shlex(data, posix=is_posix) - for i in (1, 4, 5): - s.read_token() - self.assertEqual(s.lineno, i) - - # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): for methname in dir(ShlexTest): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -717,7 +717,6 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic -Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -208,9 +208,6 @@ Library ------- -- Issue #16121: Fix line number accounting in shlex. Patch by Birk - Nilson. - - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 23:15:13 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 23:15:13 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogUmV2ZXJ0ICJJc3N1?= =?utf-8?q?e_=2316121=3A_Fix_line_number_accounting_in_shlex=22?= Message-ID: <3ZD3dF5wXzzSpC@mail.python.org> http://hg.python.org/cpython/rev/cda4a9dc415a changeset: 82369:cda4a9dc415a branch: 3.2 parent: 82365:f1d19fdb254f user: Petri Lehtinen date: Sat Feb 23 23:03:15 2013 +0100 summary: Revert "Issue #16121: Fix line number accounting in shlex" files: Lib/shlex.py | 16 +--------------- Lib/test/test_shlex.py | 9 --------- Misc/ACKS | 1 - Misc/NEWS | 3 --- 4 files changed, 1 insertions(+), 28 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -44,7 +44,6 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 - self._lines_found = 0 self.debug = 0 self.token = '' self.filestack = 
deque() @@ -115,23 +114,12 @@ return raw def read_token(self): - if self._lines_found: - self.lineno += self._lines_found - self._lines_found = 0 - - i = 0 quoted = False escapedstate = ' ' while True: - i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - # In case newline is the first character increment lineno - if i == 1: - self.lineno += 1 - else: - self._lines_found += 1 - + self.lineno = self.lineno + 1 if self.debug >= 3: print("shlex: in state", repr(self.state), \ "I see character:", repr(nextchar)) @@ -151,7 +139,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -219,7 +206,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 if self.posix: self.state = ' ' diff --git a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -173,15 +173,6 @@ "%s: %s != %s" % (self.data[i][0], l, self.data[i][1:])) - def testLineNumbers(self): - data = '"a \n b \n c"\n"x"\n"y"' - for is_posix in (True, False): - s = shlex.shlex(data, posix=is_posix) - for i in (1, 4, 5): - s.read_token() - self.assertEqual(s.lineno, i) - - # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): for methname in dir(ShlexTest): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -774,7 +774,6 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic -Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -227,9 +227,6 @@ Library ------- -- Issue #16121: Fix line number accounting in shlex. Patch by Birk - Nilson. - - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 23:15:15 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 23:15:15 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Revert_=22Issue_=2316121=3A_Fix_line_number_accounting_in_shle?= =?utf-8?q?x=22?= Message-ID: <3ZD3dH1xbJzSpC@mail.python.org> http://hg.python.org/cpython/rev/15f3fd6070b7 changeset: 82370:15f3fd6070b7 branch: 3.3 parent: 82366:560e53fcf2b0 parent: 82369:cda4a9dc415a user: Petri Lehtinen date: Sat Feb 23 23:12:35 2013 +0100 summary: Revert "Issue #16121: Fix line number accounting in shlex" files: Lib/shlex.py | 16 +--------------- Lib/test/test_shlex.py | 9 --------- Misc/ACKS | 1 - Misc/NEWS | 3 --- 4 files changed, 1 insertions(+), 28 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -45,7 +45,6 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 - self._lines_found = 0 self.debug = 0 self.token = '' self.filestack = deque() @@ -116,23 +115,12 @@ return raw def read_token(self): - if self._lines_found: - self.lineno += self._lines_found - self._lines_found = 0 - - i = 0 quoted = False escapedstate = ' ' while True: - i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - # In case newline is the first character increment lineno - if i == 1: - self.lineno += 1 - else: - self._lines_found += 1 - + self.lineno = self.lineno + 1 if self.debug >= 3: print("shlex: in state", repr(self.state), \ "I see character:", repr(nextchar)) @@ -152,7 +140,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -220,7 +207,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly 
self.lineno = self.lineno + 1 if self.posix: self.state = ' ' diff --git a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -189,15 +189,6 @@ self.assertEqual(shlex.quote("test%s'name'" % u), "'test%s'\"'\"'name'\"'\"''" % u) - def testLineNumbers(self): - data = '"a \n b \n c"\n"x"\n"y"' - for is_posix in (True, False): - s = shlex.shlex(data, posix=is_posix) - for i in (1, 4, 5): - s.read_token() - self.assertEqual(s.lineno, i) - - # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): for methname in dir(ShlexTest): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -855,7 +855,6 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic -Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -181,9 +181,6 @@ Library ------- -- Issue #16121: Fix line number accounting in shlex. Patch by Birk - Nilson. - - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sat Feb 23 23:15:16 2013 From: python-checkins at python.org (petri.lehtinen) Date: Sat, 23 Feb 2013 23:15:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Revert_=22Issue_=2316121=3A_Fix_line_number_accounting_i?= =?utf-8?q?n_shlex=22?= Message-ID: <3ZD3dJ4dKCzSq8@mail.python.org> http://hg.python.org/cpython/rev/1b0a6c1f8a08 changeset: 82371:1b0a6c1f8a08 parent: 82367:f48c3c7a3205 parent: 82370:15f3fd6070b7 user: Petri Lehtinen date: Sat Feb 23 23:13:03 2013 +0100 summary: Revert "Issue #16121: Fix line number accounting in shlex" files: Lib/shlex.py | 16 +--------------- Lib/test/test_shlex.py | 9 --------- Misc/ACKS | 1 - Misc/NEWS | 3 --- 4 files changed, 1 insertions(+), 28 deletions(-) diff --git a/Lib/shlex.py b/Lib/shlex.py --- a/Lib/shlex.py +++ b/Lib/shlex.py @@ -45,7 +45,6 @@ self.state = ' ' self.pushback = deque() self.lineno = 1 - self._lines_found = 0 self.debug = 0 self.token = '' self.filestack = deque() @@ -116,23 +115,12 @@ return raw def read_token(self): - if self._lines_found: - self.lineno += self._lines_found - self._lines_found = 0 - - i = 0 quoted = False escapedstate = ' ' while True: - i += 1 nextchar = self.instream.read(1) if nextchar == '\n': - # In case newline is the first character increment lineno - if i == 1: - self.lineno += 1 - else: - self._lines_found += 1 - + self.lineno = self.lineno + 1 if self.debug >= 3: print("shlex: in state", repr(self.state), \ "I see character:", repr(nextchar)) @@ -152,7 +140,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly self.lineno = self.lineno + 1 elif self.posix and nextchar in self.escape: escapedstate = 'a' @@ -220,7 +207,6 @@ continue elif nextchar in self.commenters: self.instream.readline() - # Not considered a token so incrementing lineno directly self.lineno = 
self.lineno + 1 if self.posix: self.state = ' ' diff --git a/Lib/test/test_shlex.py b/Lib/test/test_shlex.py --- a/Lib/test/test_shlex.py +++ b/Lib/test/test_shlex.py @@ -189,15 +189,6 @@ self.assertEqual(shlex.quote("test%s'name'" % u), "'test%s'\"'\"'name'\"'\"''" % u) - def testLineNumbers(self): - data = '"a \n b \n c"\n"x"\n"y"' - for is_posix in (True, False): - s = shlex.shlex(data, posix=is_posix) - for i in (1, 4, 5): - s.read_token() - self.assertEqual(s.lineno, i) - - # Allow this test to be used with old shlex.py if not getattr(shlex, "split", None): for methname in dir(ShlexTest): diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -862,7 +862,6 @@ Gustavo Niemeyer Oscar Nierstrasz Hrvoje Niksic -Birk Nilson Gregory Nofi Jesse Noller Bill Noon diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -260,9 +260,6 @@ Library ------- -- Issue #16121: Fix line number accounting in shlex. Patch by Birk - Nilson. - - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 24 00:47:29 2013 From: python-checkins at python.org (chris.jerdonek) Date: Sun, 24 Feb 2013 00:47:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Add_a_=22Changed_in_versio?= =?utf-8?q?n=22_to_the_docs_for_issue_=2315132=2E?= Message-ID: <3ZD5gj1LkfzRqp@mail.python.org> http://hg.python.org/cpython/rev/4285d13fd3dc changeset: 82372:4285d13fd3dc user: Chris Jerdonek date: Sat Feb 23 15:44:46 2013 -0800 summary: Add a "Changed in version" to the docs for issue #15132. files: Doc/library/unittest.rst | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -1830,6 +1830,10 @@ The *verbosity*, *failfast*, *catchbreak*, *buffer* and *warnings* parameters were added. + .. 
versionchanged:: 3.4 + The *defaultTest* parameter was changed to also accept an iterable of + test names. + load_tests Protocol ################### -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 24 04:22:16 2013 From: python-checkins at python.org (r.david.murray) Date: Sun, 24 Feb 2013 04:22:16 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3Mjc1OiBGaXgg?= =?utf-8?q?class_name_in_init_errors_in_C_bufferedio_classes=2E?= Message-ID: <3ZDBRX03wrzSt8@mail.python.org> http://hg.python.org/cpython/rev/d6a26cd93825 changeset: 82373:d6a26cd93825 branch: 3.2 parent: 82369:cda4a9dc415a user: R David Murray date: Sat Feb 23 21:51:05 2013 -0500 summary: #17275: Fix class name in init errors in C bufferedio classes. This fixes an apparent copy-and-paste error. Patch by Manuel Jacob. files: Lib/test/test_io.py | 18 ++++++++++++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ Modules/_io/bufferedio.c | 4 ++-- 4 files changed, 24 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_io.py b/Lib/test/test_io.py --- a/Lib/test/test_io.py +++ b/Lib/test/test_io.py @@ -1039,6 +1039,12 @@ support.gc_collect() self.assertTrue(wr() is None, wr) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedReader"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedReaderTest(BufferedReaderTest): tp = pyio.BufferedReader @@ -1321,6 +1327,11 @@ with self.open(support.TESTFN, "rb") as f: self.assertEqual(f.read(), b"123xxx") + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedWriter"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + class PyBufferedWriterTest(BufferedWriterTest): tp = pyio.BufferedWriter @@ -1674,6 +1685,7 @@ # You can't construct a BufferedRandom over a non-seekable stream. 
test_unseekable = None + class CBufferedRandomTest(BufferedRandomTest, SizeofTest): tp = io.BufferedRandom @@ -1691,6 +1703,12 @@ CBufferedReaderTest.test_garbage_collection(self) CBufferedWriterTest.test_garbage_collection(self) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedRandom"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedRandomTest(BufferedRandomTest): tp = pyio.BufferedRandom diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -505,6 +505,7 @@ Adam Jackson Ben Jackson Paul Jackson +Manuel Jacob David Jacobs Kevin Jacobs Kjetil Jacobsen diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17275: Corrected class name in init error messages of the C version of + BufferedWriter and BufferedRandom. + - Issue #7963: Fixed misleading error message that issued when object is called without arguments. diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c --- a/Modules/_io/bufferedio.c +++ b/Modules/_io/bufferedio.c @@ -1702,7 +1702,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedWriter", kwlist, &raw, &buffer_size, &max_buffer_size)) { return -1; } @@ -2339,7 +2339,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedRandom", kwlist, &raw, &buffer_size, &max_buffer_size)) { return -1; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 24 04:22:17 2013 From: python-checkins at python.org (r.david.murray) Date: Sun, 24 Feb 2013 04:22:17 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge_=2317275=3A_Fix_class_name_in_init_errors_in_C_bufferedi?= 
=?utf-8?q?o_classes=2E?= Message-ID: <3ZDBRY36kczStG@mail.python.org> http://hg.python.org/cpython/rev/aae2bb2e3195 changeset: 82374:aae2bb2e3195 branch: 3.3 parent: 82370:15f3fd6070b7 parent: 82373:d6a26cd93825 user: R David Murray date: Sat Feb 23 22:07:55 2013 -0500 summary: Merge #17275: Fix class name in init errors in C bufferedio classes. This fixes an apparent copy-and-paste error. Patch by Manuel Jacob. files: Lib/test/test_io.py | 18 ++++++++++++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ Modules/_io/bufferedio.c | 4 ++-- 4 files changed, 24 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_io.py b/Lib/test/test_io.py --- a/Lib/test/test_io.py +++ b/Lib/test/test_io.py @@ -1072,6 +1072,12 @@ support.gc_collect() self.assertTrue(wr() is None, wr) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedReader"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedReaderTest(BufferedReaderTest): tp = pyio.BufferedReader @@ -1363,6 +1369,11 @@ with self.open(support.TESTFN, "rb") as f: self.assertEqual(f.read(), b"123xxx") + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedWriter"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + class PyBufferedWriterTest(BufferedWriterTest): tp = pyio.BufferedWriter @@ -1715,6 +1726,7 @@ # You can't construct a BufferedRandom over a non-seekable stream. 
test_unseekable = None + class CBufferedRandomTest(BufferedRandomTest, SizeofTest): tp = io.BufferedRandom @@ -1732,6 +1744,12 @@ CBufferedReaderTest.test_garbage_collection(self) CBufferedWriterTest.test_garbage_collection(self) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedRandom"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedRandomTest(BufferedRandomTest): tp = pyio.BufferedRandom diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -555,6 +555,7 @@ Adam Jackson Ben Jackson Paul Jackson +Manuel Jacob David Jacobs Kevin Jacobs Kjetil Jacobsen diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #17275: Corrected class name in init error messages of the C version of + BufferedWriter and BufferedRandom. + - Issue #7963: Fixed misleading error message that issued when object is called without arguments. diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c --- a/Modules/_io/bufferedio.c +++ b/Modules/_io/bufferedio.c @@ -1817,7 +1817,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedWriter", kwlist, &raw, &buffer_size)) { return -1; } @@ -2446,7 +2446,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedRandom", kwlist, &raw, &buffer_size)) { return -1; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 24 04:22:18 2013 From: python-checkins at python.org (r.david.murray) Date: Sun, 24 Feb 2013 04:22:18 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogIzE3Mjc1OiBGaXgg?= =?utf-8?q?class_name_in_init_errors_in_C_bufferedio_classes=2E?= Message-ID: <3ZDBRZ64nvzNNt@mail.python.org> 
http://hg.python.org/cpython/rev/df57314b93d1 changeset: 82375:df57314b93d1 branch: 2.7 parent: 82368:34f759fa5484 user: R David Murray date: Sat Feb 23 22:11:21 2013 -0500 summary: #17275: Fix class name in init errors in C bufferedio classes. This fixes an apparent copy-and-paste error. Original patch by Manuel Jacob. files: Lib/test/test_io.py | 18 ++++++++++++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ Modules/_io/bufferedio.c | 4 ++-- 4 files changed, 24 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_io.py b/Lib/test/test_io.py --- a/Lib/test/test_io.py +++ b/Lib/test/test_io.py @@ -1004,6 +1004,12 @@ support.gc_collect() self.assertTrue(wr() is None, wr) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegexp(TypeError, "BufferedReader"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedReaderTest(BufferedReaderTest): tp = pyio.BufferedReader @@ -1296,6 +1302,11 @@ with self.open(support.TESTFN, "rb") as f: self.assertEqual(f.read(), b"123xxx") + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegexp(TypeError, "BufferedWriter"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + class PyBufferedWriterTest(BufferedWriterTest): tp = pyio.BufferedWriter @@ -1646,6 +1657,7 @@ f.flush() self.assertEqual(raw.getvalue(), b'1b\n2def\n3\n') + class CBufferedRandomTest(CBufferedReaderTest, CBufferedWriterTest, BufferedRandomTest, SizeofTest): tp = io.BufferedRandom @@ -1664,6 +1676,12 @@ CBufferedReaderTest.test_garbage_collection(self) CBufferedWriterTest.test_garbage_collection(self) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegexp(TypeError, "BufferedRandom"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedRandomTest(BufferedRandomTest): tp = pyio.BufferedRandom diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -466,6 +466,7 @@ Atsuo Ishimoto Paul Jackson Ben Jackson +Manuel Jacob David Jacobs Kevin Jacobs Kjetil Jacobsen diff --git a/Misc/NEWS 
b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -9,6 +9,9 @@ Core and Builtins ----------------- +- Issue #17275: Corrected class name in init error messages of the C version of + BufferedWriter and BufferedRandom. + - Issue #7963: Fixed misleading error message that issued when object is called without arguments. diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c --- a/Modules/_io/bufferedio.c +++ b/Modules/_io/bufferedio.c @@ -1683,7 +1683,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedWriter", kwlist, &raw, &buffer_size, &max_buffer_size)) { return -1; } @@ -2316,7 +2316,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|nn:BufferedRandom", kwlist, &raw, &buffer_size, &max_buffer_size)) { return -1; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Sun Feb 24 04:22:20 2013 From: python-checkins at python.org (r.david.murray) Date: Sun, 24 Feb 2013 04:22:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_=2317275=3A_Fix_class_name_in_init_errors_in_C_buf?= =?utf-8?q?feredio_classes=2E?= Message-ID: <3ZDBRc1q7VzSyP@mail.python.org> http://hg.python.org/cpython/rev/96f08a22f562 changeset: 82376:96f08a22f562 parent: 82372:4285d13fd3dc parent: 82374:aae2bb2e3195 user: R David Murray date: Sat Feb 23 22:21:48 2013 -0500 summary: Merge #17275: Fix class name in init errors in C bufferedio classes. This fixes an apparent copy-and-paste error. Patch by Manuel Jacob. 
files: Lib/test/test_io.py | 18 ++++++++++++++++++ Misc/ACKS | 1 + Misc/NEWS | 3 +++ Modules/_io/bufferedio.c | 4 ++-- 4 files changed, 24 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_io.py b/Lib/test/test_io.py --- a/Lib/test/test_io.py +++ b/Lib/test/test_io.py @@ -1080,6 +1080,12 @@ support.gc_collect() self.assertTrue(wr() is None, wr) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedReader"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedReaderTest(BufferedReaderTest): tp = pyio.BufferedReader @@ -1371,6 +1377,11 @@ with self.open(support.TESTFN, "rb") as f: self.assertEqual(f.read(), b"123xxx") + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedWriter"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + class PyBufferedWriterTest(BufferedWriterTest): tp = pyio.BufferedWriter @@ -1723,6 +1734,7 @@ # You can't construct a BufferedRandom over a non-seekable stream. test_unseekable = None + class CBufferedRandomTest(BufferedRandomTest, SizeofTest): tp = io.BufferedRandom @@ -1740,6 +1752,12 @@ CBufferedReaderTest.test_garbage_collection(self) CBufferedWriterTest.test_garbage_collection(self) + def test_args_error(self): + # Issue #17275 + with self.assertRaisesRegex(TypeError, "BufferedRandom"): + self.tp(io.BytesIO(), 1024, 1024, 1024) + + class PyBufferedRandomTest(BufferedRandomTest): tp = pyio.BufferedRandom diff --git a/Misc/ACKS b/Misc/ACKS --- a/Misc/ACKS +++ b/Misc/ACKS @@ -558,6 +558,7 @@ Adam Jackson Ben Jackson Paul Jackson +Manuel Jacob David Jacobs Kevin Jacobs Kjetil Jacobsen diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17275: Corrected class name in init error messages of the C version of + BufferedWriter and BufferedRandom. + - Issue #7963: Fixed misleading error message that issued when object is called without arguments. 
diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c --- a/Modules/_io/bufferedio.c +++ b/Modules/_io/bufferedio.c @@ -1822,7 +1822,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedWriter", kwlist, &raw, &buffer_size)) { return -1; } @@ -2451,7 +2451,7 @@ self->ok = 0; self->detached = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedReader", kwlist, + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|n:BufferedRandom", kwlist, &raw, &buffer_size)) { return -1; } -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Sun Feb 24 05:59:30 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Sun, 24 Feb 2013 05:59:30 +0100 Subject: [Python-checkins] Daily reference leaks (4285d13fd3dc): sum=-2 Message-ID: results for 4285d13fd3dc on branch "default" -------------------------------------------- test_concurrent_futures leaked [0, 0, -2] memory blocks, sum=-2 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogh760n3', '-x'] From python-checkins at python.org Sun Feb 24 15:33:00 2013 From: python-checkins at python.org (eli.bendersky) Date: Sun, 24 Feb 2013 15:33:00 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Added_use_cases_and_some_of_t?= =?utf-8?q?he_discussed_variations=2E?= Message-ID: <3ZDTKS59phzStX@mail.python.org> http://hg.python.org/peps/rev/3754faa001f4 changeset: 4771:3754faa001f4 user: Eli Bendersky date: Sun Feb 24 06:32:40 2013 -0800 summary: Added use cases and some of the discussed variations. 
files: pepdraft-0435.txt | 81 +++++++++++++++++++++++++++++++++++ 1 files changed, 81 insertions(+), 0 deletions(-) diff --git a/pepdraft-0435.txt b/pepdraft-0435.txt --- a/pepdraft-0435.txt +++ b/pepdraft-0435.txt @@ -381,6 +381,87 @@ >>> enum.make('Flags', zip(list('abcdefg'), enumiter())) +Proposed variations +=================== + +Some variations were proposed during the discussions in the mailing list. +Here's some of the more popular: + +Not having to specify values for enums +-------------------------------------- + +Michael Foord proposed (and Tim Delaney provided a proof-of-concept +implementation) to use metaclass magic that makes this possible:: + + class Color(Enum): + red, green, blue + +The values get actually assigned only when first looked up. + +Pros: cleaner syntax that requires less typing for a very common task (just +listing enumeration names without caring about the values). + +Cons: involves much magic in the implementation, which makes even the +definition of such enums baffling when first seen. Besides, explicit is better +than implicit. + +Using special names or forms to auto-assign enum values +------------------------------------------------------- + +A different approach to avoid specifying enum values is to use a special name +or form to auto assign them. For example:: + + class Color(Enum): + red = None # auto-assigned to 0 + green = None # auto-assigned to 1 + blue = None # auto-assigned to 2 + +More flexibly:: + + class Color(Enum): + red = 7 + green = None # auto-assigned to 8 + blue = 19 + purple = None # auto-assigned to 20 + +Some variations on this theme: + +#. A special name ``auto`` imported from the enum package. +#. Georg Brandl proposed ellipsis (``...``) instead of ``None`` to achieve the + same effect. + +Pros: no need to manually enter values. Makes it easier to change the enum and +extend it, especially for large enumerations. + +Cons: actually longer to type in many simple cases. The argument of +explicit vs.
implicit applies here as well. + +Use-cases in the standard library +================================= + +The Python standard library has many places where the usage of enums would be +beneficial to replace other idioms currently used to represent them. Such +usages can be divided into two categories: user-code facing constants, and +internal constants. + +User-code facing constants like ``os.SEEK_*``, ``socket`` module constants, +decimal rounding modes, HTML error codes could benefit from being enums had +they been implemented this way from the beginning. At this point, however, +at the risk of breaking user code (that relies on the constants' actual values +rather than their meaning) such a change cannot be made. This does not mean +that future uses in the stdlib can't use an enum for defining new user-code +facing constants. + +Internal constants are not seen by user code but are employed internally by +stdlib modules. It appears that nothing should stand in the way of +implementing such constants with enums. Some examples uncovered by a very +partial skim through the stdlib: ``binhex``, ``imaplib``, ``http/client``, +``urllib/robotparser``, ``idlelib``, ``concurrent.futures``, ``turtledemo``. + +In addition, looking at the code of the Twisted library, there are many use +cases for replacing internal state constants with enums. The same can be +said about a lot of networking code (especially implementation of protocols) +and can be seen in test protocols written with the Tulip library as well.
Differences from PEP 354 ======================== -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Mon Feb 25 03:41:56 2013 From: python-checkins at python.org (daniel.holth) Date: Mon, 25 Feb 2013 03:41:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP-0427=3A_clarify_some_impl?= =?utf-8?q?ementation_details=2E?= Message-ID: <3ZDnVX6xNtz7LmK@mail.python.org> http://hg.python.org/peps/rev/7d2494f4cd0a changeset: 4772:7d2494f4cd0a user: Daniel Holth date: Sun Feb 24 21:41:40 2013 -0500 summary: PEP-0427: clarify some implementation details. Hope it's OK to clarify two details that came up during Vinay's distlib wheel implementation: zip directory filenames are encoded as utf-8, and it's nicer to put the .dist-info directory at the end of the archive. files: pep-0427.txt | 20 +++++++++++++++++--- 1 files changed, 17 insertions(+), 3 deletions(-) diff --git a/pep-0427.txt b/pep-0427.txt --- a/pep-0427.txt +++ b/pep-0427.txt @@ -101,6 +101,15 @@ accompanying .exe wrappers. Windows installers may want to add them during install. +Recommended archiver features +''''''''''''''''''''''''''''' + +Place ``.dist-info`` at the end of the archive. + Archivers are encouraged to place the ``.dist-info`` files physically + at the end of the archive. This enables some potentially interesting + ZIP tricks including the ability to amend the metadata without + rewriting the entire archive. + File Format ----------- @@ -149,9 +158,14 @@ re.sub("[^\w\d.]+", "_", distribution, re.UNICODE) -The filename is Unicode. It will be some time before the tools are -updated to support non-ASCII filenames, but they are supported in this -specification. +The archive filename is Unicode. It will be some time before the tools +are updated to support non-ASCII filenames, but they are supported in +this specification. + +The filenames *inside* the archive are encoded as UTF-8. 
Although some +ZIP clients in common use do not properly display UTF-8 filenames, +the encoding is supported by both the ZIP specification and Python's +``zipfile``. File contents ''''''''''''' -- Repository URL: http://hg.python.org/peps From solipsis at pitrou.net Mon Feb 25 06:01:55 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Mon, 25 Feb 2013 06:01:55 +0100 Subject: [Python-checkins] Daily reference leaks (96f08a22f562): sum=0 Message-ID: results for 96f08a22f562 on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogeuI7eb', '-x'] From ncoghlan at gmail.com Mon Feb 25 09:16:54 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 25 Feb 2013 18:16:54 +1000 Subject: [Python-checkins] peps: PEP-0427: clarify some implementation details. In-Reply-To: <3ZDnVX6xNtz7LmK@mail.python.org> References: <3ZDnVX6xNtz7LmK@mail.python.org> Message-ID: On Mon, Feb 25, 2013 at 12:41 PM, daniel.holth wrote: > http://hg.python.org/peps/rev/7d2494f4cd0a > changeset: 4772:7d2494f4cd0a > user: Daniel Holth > date: Sun Feb 24 21:41:40 2013 -0500 > summary: > PEP-0427: clarify some implementation details. > > Hope it's OK to clarify two details that came up during Vinay's distlib wheel > implementation: zip directory filenames are encoded as utf-8, and it's nicer > to put the .dist-info directory at the end of the archive. It's better to announce the intended update/clarification on python-dev first, but I agree both these adjustments are reasonable (I don't remember if I actually said that in the distutils-sig thread). Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From python-checkins at python.org Mon Feb 25 11:38:39 2013 From: python-checkins at python.org (giampaolo.rodola) Date: Mon, 25 Feb 2013 11:38:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Fix_=2317197=3A_profile/cP?= =?utf-8?q?rofile_modules_refactored_so_that_code_of_run=28=29_and?= Message-ID: <3ZF04b4GdjzSxM@mail.python.org> http://hg.python.org/cpython/rev/422169310b7c changeset: 82377:422169310b7c user: Giampaolo Rodola' date: Mon Feb 25 11:36:40 2013 +0100 summary: Fix #17197: profile/cProfile modules refactored so that code of run() and runctx() utility functions is not duplicated in both modules. files: Lib/cProfile.py | 46 ++++--------------------------- Lib/profile.py | 54 ++++++++++++++++++++++++------------ Misc/NEWS | 3 ++ 3 files changed, 45 insertions(+), 58 deletions(-) diff --git a/Lib/cProfile.py b/Lib/cProfile.py --- a/Lib/cProfile.py +++ b/Lib/cProfile.py @@ -7,54 +7,20 @@ __all__ = ["run", "runctx", "Profile"] import _lsprof +import profile as _pyprofile # ____________________________________________________________ # Simple interface def run(statement, filename=None, sort=-1): - """Run statement under profiler optionally saving results in filename - - This function takes a single argument that can be passed to the - "exec" statement, and an optional file name. In all cases this - routine attempts to "exec" its first argument and gather profiling - statistics from the execution. If no file name is present, then this - function automatically prints a simple profiling report, sorted by the - standard name string (file/line/function-name) that is presented in - each line. 
- """ - prof = Profile() - result = None - try: - try: - prof = prof.run(statement) - except SystemExit: - pass - finally: - if filename is not None: - prof.dump_stats(filename) - else: - result = prof.print_stats(sort) - return result + return _pyprofile._Utils(Profile).run(statement, filename, sort) def runctx(statement, globals, locals, filename=None, sort=-1): - """Run statement under profiler, supplying your own globals and locals, - optionally saving results in filename. + return _pyprofile._Utils(Profile).runctx(statement, globals, locals, + filename, sort) - statement and filename have the same semantics as profile.run - """ - prof = Profile() - result = None - try: - try: - prof = prof.runctx(statement, globals, locals) - except SystemExit: - pass - finally: - if filename is not None: - prof.dump_stats(filename) - else: - result = prof.print_stats(sort) - return result +run.__doc__ = _pyprofile.run.__doc__ +runctx.__doc__ = _pyprofile.runctx.__doc__ # ____________________________________________________________ diff --git a/Lib/profile.py b/Lib/profile.py --- a/Lib/profile.py +++ b/Lib/profile.py @@ -40,6 +40,40 @@ # return i_count #itimes = integer_timer # replace with C coded timer returning integers +class _Utils: + """Support class for utility functions which are shared by + profile.py and cProfile.py modules. + Not supposed to be used directly. 
+ """ + + def __init__(self, profiler): + self.profiler = profiler + + def run(self, statement, filename, sort): + prof = self.profiler() + try: + prof.run(statement) + except SystemExit: + pass + finally: + self._show(prof, filename, sort) + + def runctx(self, statement, globals, locals, filename, sort): + prof = self.profiler() + try: + prof.runctx(statement, globals, locals) + except SystemExit: + pass + finally: + self._show(prof, filename, sort) + + def _show(self, prof, filename, sort): + if filename is not None: + prof.dump_stats(filename) + else: + prof.print_stats(sort) + + #************************************************************************** # The following are the static member functions for the profiler class # Note that an instance of Profile() is *not* needed to call them. @@ -56,15 +90,7 @@ standard name string (file/line/function-name) that is presented in each line. """ - prof = Profile() - try: - prof = prof.run(statement) - except SystemExit: - pass - if filename is not None: - prof.dump_stats(filename) - else: - return prof.print_stats(sort) + return _Utils(Profile).run(statement, filename, sort) def runctx(statement, globals, locals, filename=None, sort=-1): """Run statement under profiler, supplying your own globals and locals, @@ -72,16 +98,8 @@ statement and filename have the same semantics as profile.run """ - prof = Profile() - try: - prof = prof.runctx(statement, globals, locals) - except SystemExit: - pass + return _Utils(Profile).runctx(statement, globals, locals, filename, sort) - if filename is not None: - prof.dump_stats(filename) - else: - return prof.print_stats(sort) class Profile: """Profiler class. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -263,6 +263,9 @@ Library ------- +- Issue #17197: profile/cProfile modules refactored so that code of run() and + runctx() utility functions is not duplicated in both modules. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. 
Patch by Lowe Thiderman. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 25 12:32:14 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 25 Feb 2013 12:32:14 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE0NzA1?= =?utf-8?q?48=3A_Do_not_buffer_XMLGenerator_output=2E?= Message-ID: <3ZF1GQ6Xk4zSwk@mail.python.org> http://hg.python.org/cpython/rev/d707e3345a74 changeset: 82378:d707e3345a74 branch: 2.7 parent: 82375:df57314b93d1 user: Serhiy Storchaka date: Mon Feb 25 13:31:29 2013 +0200 summary: Issue #1470548: Do not buffer XMLGenerator output. Add test for fragment producing with XMLGenerator. files: Lib/test/test_sax.py | 15 +++++++++++++++ Lib/xml/sax/saxutils.py | 10 +++++++--- 2 files changed, 22 insertions(+), 3 deletions(-) diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py --- a/Lib/test/test_sax.py +++ b/Lib/test/test_sax.py @@ -403,6 +403,21 @@ func(result) self.assertFalse(result.closed) + def test_xmlgen_fragment(self): + result = self.ioclass() + gen = XMLGenerator(result) + + # Don't call gen.startDocument() + gen.startElement("foo", {"a": "1.0"}) + gen.characters("Hello") + gen.endElement("foo") + gen.startElement("bar", {"b": "2.0"}) + gen.endElement("bar") + # Don't call gen.endDocument() + + self.assertEqual(result.getvalue(), + 'Hello') + class StringXmlgenTest(XmlgenTest, unittest.TestCase): ioclass = StringIO diff --git a/Lib/xml/sax/saxutils.py b/Lib/xml/sax/saxutils.py --- a/Lib/xml/sax/saxutils.py +++ b/Lib/xml/sax/saxutils.py @@ -98,9 +98,13 @@ except AttributeError: pass # wrap a binary writer with TextIOWrapper - return io.TextIOWrapper(buffer, encoding=encoding, - errors='xmlcharrefreplace', - newline='\n') + class UnbufferedTextIOWrapper(io.TextIOWrapper): + def write(self, s): + super(UnbufferedTextIOWrapper, self).write(s) + self.flush() + return UnbufferedTextIOWrapper(buffer, encoding=encoding, + errors='xmlcharrefreplace', + 
newline='\n') class XMLGenerator(handler.ContentHandler): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 25 12:49:20 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 25 Feb 2013 12:49:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE0NzA1?= =?utf-8?q?48=3A_Add_test_for_fragment_producing_with_XMLGenerator=2E?= Message-ID: <3ZF1f81SsdzSvy@mail.python.org> http://hg.python.org/cpython/rev/1c03e499cdc2 changeset: 82379:1c03e499cdc2 branch: 3.2 parent: 82373:d6a26cd93825 user: Serhiy Storchaka date: Mon Feb 25 13:46:10 2013 +0200 summary: Issue #1470548: Add test for fragment producing with XMLGenerator. files: Lib/test/test_sax.py | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py --- a/Lib/test/test_sax.py +++ b/Lib/test/test_sax.py @@ -493,6 +493,21 @@ func(result) self.assertFalse(result.closed) + def test_xmlgen_fragment(self): + result = self.ioclass() + gen = XMLGenerator(result) + + # Don't call gen.startDocument() + gen.startElement("foo", {"a": "1.0"}) + gen.characters("Hello") + gen.endElement("foo") + gen.startElement("bar", {"b": "2.0"}) + gen.endElement("bar") + # Don't call gen.endDocument() + + self.assertEqual(result.getvalue(), + self.xml('Hello')[len(self.xml('')):]) + class StringXmlgenTest(XmlgenTest, unittest.TestCase): ioclass = StringIO -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 25 12:49:21 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 25 Feb 2013 12:49:21 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=231470548=3A_Add_test_for_fragment_producing_with_XMLGe?= =?utf-8?q?nerator=2E?= Message-ID: <3ZF1f942HlzSxM@mail.python.org> http://hg.python.org/cpython/rev/5a4b3094903f changeset: 82380:5a4b3094903f branch: 3.3 parent: 82374:aae2bb2e3195 
parent: 82379:1c03e499cdc2 user: Serhiy Storchaka date: Mon Feb 25 13:46:32 2013 +0200 summary: Issue #1470548: Add test for fragment producing with XMLGenerator. files: Lib/test/test_sax.py | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py --- a/Lib/test/test_sax.py +++ b/Lib/test/test_sax.py @@ -493,6 +493,21 @@ func(result) self.assertFalse(result.closed) + def test_xmlgen_fragment(self): + result = self.ioclass() + gen = XMLGenerator(result) + + # Don't call gen.startDocument() + gen.startElement("foo", {"a": "1.0"}) + gen.characters("Hello") + gen.endElement("foo") + gen.startElement("bar", {"b": "2.0"}) + gen.endElement("bar") + # Don't call gen.endDocument() + + self.assertEqual(result.getvalue(), + self.xml('Hello')[len(self.xml('')):]) + class StringXmlgenTest(XmlgenTest, unittest.TestCase): ioclass = StringIO -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 25 12:49:22 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 25 Feb 2013 12:49:22 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=231470548=3A_Add_test_for_fragment_producing_with?= =?utf-8?q?_XMLGenerator=2E?= Message-ID: <3ZF1fB6gnHzSxf@mail.python.org> http://hg.python.org/cpython/rev/810d70fb17a2 changeset: 82381:810d70fb17a2 parent: 82377:422169310b7c parent: 82380:5a4b3094903f user: Serhiy Storchaka date: Mon Feb 25 13:47:20 2013 +0200 summary: Issue #1470548: Add test for fragment producing with XMLGenerator. 
files: Lib/test/test_sax.py | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_sax.py b/Lib/test/test_sax.py --- a/Lib/test/test_sax.py +++ b/Lib/test/test_sax.py @@ -493,6 +493,21 @@ func(result) self.assertFalse(result.closed) + def test_xmlgen_fragment(self): + result = self.ioclass() + gen = XMLGenerator(result) + + # Don't call gen.startDocument() + gen.startElement("foo", {"a": "1.0"}) + gen.characters("Hello") + gen.endElement("foo") + gen.startElement("bar", {"b": "2.0"}) + gen.endElement("bar") + # Don't call gen.endDocument() + + self.assertEqual(result.getvalue(), + self.xml('<foo a="1.0">Hello</foo><bar b="2.0"></bar>')[len(self.xml('')):]) + class StringXmlgenTest(XmlgenTest, unittest.TestCase): ioclass = StringIO -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 25 14:44:45 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 25 Feb 2013 14:44:45 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2317220=3A_Little_c?= =?utf-8?q?leanup_of_=5Fbootstrap=2Epy=2E?= Message-ID: <3ZF4CK26tzzN3m@mail.python.org> http://hg.python.org/cpython/rev/2528e4aea338 changeset: 82382:2528e4aea338 user: Serhiy Storchaka date: Mon Feb 25 15:40:33 2013 +0200 summary: Issue #17220: Little cleanup of _bootstrap.py. files: Lib/importlib/_bootstrap.py | 44 +- Python/importlib.h | 8664 +++++++++++----------- 2 files changed, 4337 insertions(+), 4371 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -48,13 +48,7 @@ XXX Temporary until marshal's long functions are exposed.
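[Editorial note: the new test's behavior can be reproduced standalone against the xml.sax API. A minimal sketch -- skipping startDocument()/endDocument() means no XML declaration is emitted, so the generator writes a bare fragment:]

```python
import io
from xml.sax.saxutils import XMLGenerator

out = io.StringIO()
gen = XMLGenerator(out)

# Don't call gen.startDocument() -- no <?xml ...?> declaration is written.
gen.startElement("foo", {"a": "1.0"})
gen.characters("Hello")
gen.endElement("foo")
gen.startElement("bar", {"b": "2.0"})
gen.endElement("bar")
# Don't call gen.endDocument()

fragment = out.getvalue()
print(fragment)  # -> <foo a="1.0">Hello</foo><bar b="2.0"></bar>
```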
""" - x = int(x) - int_bytes = [] - int_bytes.append(x & 0xFF) - int_bytes.append((x >> 8) & 0xFF) - int_bytes.append((x >> 16) & 0xFF) - int_bytes.append((x >> 24) & 0xFF) - return bytearray(int_bytes) + return int(x).to_bytes(4, 'little') # TODO: Expose from marshal @@ -64,35 +58,25 @@ XXX Temporary until marshal's long function are exposed. """ - x = int_bytes[0] - x |= int_bytes[1] << 8 - x |= int_bytes[2] << 16 - x |= int_bytes[3] << 24 - return x + return int.from_bytes(int_bytes, 'little') def _path_join(*path_parts): """Replacement for os.path.join().""" - new_parts = [] - for part in path_parts: - if not part: - continue - new_parts.append(part) - if part[-1] not in path_separators: - new_parts.append(path_sep) - return ''.join(new_parts[:-1]) # Drop superfluous path separator. + return path_sep.join([part.rstrip(path_separators) + for part in path_parts if part]) def _path_split(path): """Replacement for os.path.split().""" + if len(path_separators) == 1: + front, _, tail = path.rpartition(path_sep) + return front, tail for x in reversed(path): if x in path_separators: - sep = x - break - else: - sep = path_sep - front, _, tail = path.rpartition(sep) - return front, tail + front, tail = path.rsplit(x) + return front, tail + return '', path def _path_is_mode_type(path, mode): @@ -404,8 +388,8 @@ due to the addition of new opcodes). 
""" -_RAW_MAGIC_NUMBER = 3250 | ord('\r') << 16 | ord('\n') << 24 -_MAGIC_BYTES = bytes(_RAW_MAGIC_NUMBER >> n & 0xff for n in range(0, 25, 8)) +_MAGIC_BYTES = (3250).to_bytes(2, 'little') + b'\r\n' +_RAW_MAGIC_NUMBER = int.from_bytes(_MAGIC_BYTES, 'little') _PYCACHE = '__pycache__' @@ -1441,7 +1425,7 @@ lower_suffix_contents.add(new_name) self._path_cache = lower_suffix_contents if sys.platform.startswith(_CASE_INSENSITIVE_PLATFORMS): - self._relaxed_path_cache = set(fn.lower() for fn in contents) + self._relaxed_path_cache = {fn.lower() for fn in contents} @classmethod def path_hook(cls, *loader_details): @@ -1774,7 +1758,7 @@ setattr(self_module, '_thread', thread_module) setattr(self_module, '_weakref', weakref_module) setattr(self_module, 'path_sep', path_sep) - setattr(self_module, 'path_separators', set(path_separators)) + setattr(self_module, 'path_separators', ''.join(path_separators)) # Constants setattr(self_module, '_relax_case', _make_relax_case()) EXTENSION_SUFFIXES.extend(_imp.extension_suffixes()) diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Mon Feb 25 15:17:13 2013 From: python-checkins at python.org (eli.bendersky) Date: Mon, 25 Feb 2013 15:17:13 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Clarify_that_int=28Enum=29_fa?= =?utf-8?q?cilitates_interoperability_with_C_extensions_that?= Message-ID: <3ZF4wn6GlDzQ80@mail.python.org> http://hg.python.org/peps/rev/2a48d9b76487 changeset: 4773:2a48d9b76487 user: Eli Bendersky date: Mon Feb 25 06:16:51 2013 -0800 summary: Clarify that int(Enum) facilitates interoperability with C extensions that expect integer values. 
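[Editorial note: the magic-number hunk in the _bootstrap.py cleanup above inverts the derivation -- the bytes are now built first and the raw integer derived from them. Both formulations yield identical values, which can be checked directly (3250 is the magic value from the diff):]

```python
# Old formulation: build the integer first, then derive the bytes.
raw_old = 3250 | ord('\r') << 16 | ord('\n') << 24
bytes_old = bytes(raw_old >> n & 0xff for n in range(0, 25, 8))

# New formulation: build the bytes first, then derive the integer.
bytes_new = (3250).to_bytes(2, 'little') + b'\r\n'
raw_new = int.from_bytes(bytes_new, 'little')

assert bytes_old == bytes_new == b'\xb2\x0c\r\n'
assert raw_old == raw_new
```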
files: pepdraft-0435.txt | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/pepdraft-0435.txt b/pepdraft-0435.txt --- a/pepdraft-0435.txt +++ b/pepdraft-0435.txt @@ -237,7 +237,8 @@ If you really want the integer equivalent values, you can convert enumeration values explicitly using the ``int()`` built-in. This is quite convenient for -storing enums in a database for example:: +storing enums in a database, as well as for interoperability with C extensions +that expect integers:: >>> int(Colors.red) 1 -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Mon Feb 25 15:50:43 2013 From: python-checkins at python.org (barry.warsaw) Date: Mon, 25 Feb 2013 15:50:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_436=2C_The_Argument_Clini?= =?utf-8?q?c_DSL=2C_Larry_Hastings?= Message-ID: <3ZF5gR3GZQzNgy@mail.python.org> http://hg.python.org/peps/rev/85758f0f93bc changeset: 4774:85758f0f93bc user: Barry Warsaw date: Mon Feb 25 09:50:32 2013 -0500 summary: PEP 436, The Argument Clinic DSL, Larry Hastings files: pep-0436.txt | 480 +++++++++++++++++++++++++++++++++++++++ 1 files changed, 480 insertions(+), 0 deletions(-) diff --git a/pep-0436.txt b/pep-0436.txt new file mode 100644 --- /dev/null +++ b/pep-0436.txt @@ -0,0 +1,480 @@ +PEP: 436 +Title: The Argument Clinic DSL +Version: $Revision$ +Last-Modified: $Date$ +Author: Larry Hastings +Discussions-To: Python-Dev +Status: Draft +Type: Standards Track +Content-Type: text/x-rst +Created: 22-Feb-2013 + + +Abstract +======== + +This document proposes "Argument Clinic", a DSL designed to facilitate +argument processing for built-in functions in the implementation of +CPython. + + +Rationale and Goals +=================== + +The primary implementation of Python, "CPython", is written in a +mixture of Python and C. One of the implementation details of CPython +is what are called "built-in" functions -- functions available to +Python programs but written in C. 
When a Python program calls a +built-in function and passes in arguments, those arguments must be +translated from Python values into C values. This process is called +"parsing arguments". + +As of CPython 3.3, arguments to functions are primarily parsed with +one of two functions: the original ``PyArg_ParseTuple()``, [1]_ and +the more modern ``PyArg_ParseTupleAndKeywords()``. [2]_ The former +function only handles positional parameters; the latter also +accommodates keyword and keyword-only parameters, and is preferred for +new code. + +``PyArg_ParseTuple()`` was a reasonable approach when it was first +conceived. The programmer specified the translation for the arguments +in a "format string": [3]_ each parameter matched to a "format unit", +a one-or-two character sequence telling ``PyArg_ParseTuple()`` what +Python types to accept and how to translate them into the appropriate +C value for that parameter. There were only a dozen or so of these +"format units", and each one was distinct and easy to understand. + +Over the years the ``PyArg_Parse`` interface has been extended in +numerous ways. The modern API is quite complex, to the point that it +is somewhat painful to use. Consider: + + * There are now forty different "format units"; a few are even three + characters long. This makes it difficult to understand what the + format string says without constantly cross-indexing it with the + documentation. + * There are also six meta-format units that may be buried in the + format string. (They are: ``"()|$:;"``.) + * The more format units are added, the less likely it is the + implementer can pick an easy-to-use mnemonic for the format unit, + because the character of choice is probably already in use. In + other words, the more format units we have, the more obtuse the + format units become. + * Several format units are nearly identical to others, having only + subtle differences. This makes understanding the exact semantics + of the format string even harder. 
+ * The docstring is specified as a static C string, which is mildly + bothersome to read and edit. + * When adding a new parameter to a function using + ``PyArg_ParseTupleAndKeywords()``, it's necessary to touch six + different places in the code: [4]_ + + * Declaring the variable to store the argument. + * Passing in a pointer to that variable in the correct spot in + ``PyArg_ParseTupleAndKeywords()``, also passing in any + "length" or "converter" arguments in the correct order. + * Adding the name of the argument in the correct spot of the + "keywords" array passed in to + ``PyArg_ParseTupleAndKeywords()``. + * Adding the format unit to the correct spot in the format + string. + * Adding the parameter to the prototype in the docstring. + * Documenting the parameter in the docstring. + + * There is currently no mechanism for builtin functions to provide + their "signature" information (see ``inspect.getfullargspec`` and + ``inspect.Signature``). Adding this information using a mechanism + similar to the existing ``PyArg_Parse`` functions would require + repeating ourselves yet again. + +The goal of Argument Clinic is to replace this API with a mechanism +inheriting none of these downsides: + + * You need specify each parameter only once. + * All information about a parameter is kept together in one place. + * For each parameter, you specify its type in C; Argument Clinic + handles the translation from Python value into C value for you. + * Argument Clinic also allows for fine-tuning of argument processing + behavior with highly-readable "flags", both per-parameter and + applying across the whole function. + * Docstrings are written in plain text. + * From this, Argument Clinic generates for you all the mundane, + repetitious code and data structures CPython needs internally. + Once you've specified the interface, the next step is simply to + write your implementation using native C types. Every detail of + argument parsing is handled for you. 
+ +Future goals of Argument Clinic include: + + * providing signature information for builtins, and + * speed improvements to the generated code. + + +DSL Syntax Summary +================== + +The Argument Clinic DSL is specified as a comment embedded in a C +file, as follows. The "Example" column on the right shows you sample +input to the Argument Clinic DSL, and the "Section" column on the left +specifies what each line represents in turn. + +:: + + +-----------------------+-----------------------------------------------------+ + | Section | Example | + +-----------------------+-----------------------------------------------------+ + | Clinic DSL start | /*[clinic] | + | Function declaration | module.function_name -> return_annotation | + | Function flags | flag flag2 flag3=value | + | Parameter declaration | type name = default | + | Parameter flags | flag flag2 flag3=value | + | Parameter docstring | Lorem ipsum dolor sit amet, consectetur | + | | adipisicing elit, sed do eiusmod tempor | + | Function docstring | Lorem ipsum dolor sit amet, consectetur adipisicing | + | | elit, sed do eiusmod tempor incididunt ut labore et | + | Clinic DSL end | [clinic]*/ | + | Clinic output | ... | + | Clinic output end | /*[clinic end output:]*/ | + +-----------------------+-----------------------------------------------------+ + + +General Behavior Of the Argument Clinic DSL +------------------------------------------- + +All lines support ``#`` as a line comment delimiter *except* +docstrings. Blank lines are always ignored. + +Like Python itself, leading whitespace is significant in the Argument +Clinic DSL. The first line of the "function" section is the +declaration; all subsequent lines at the same indent are function +flags. Once you indent, the first line is a parameter declaration; +subsequent lines at that indent are parameter flags. Indent one more +time for the lines of the parameter docstring. 
Finally, dedent back +to the same level as the function declaration for the function +docstring. + + +Function Declaration +-------------------- + +The return annotation is optional. If skipped, the arrow ("``->``") +must also be omitted. + + +Parameter Declaration +--------------------- + +The "type" is a C type. If it's a pointer type, you must specify a +single space between the type and the "``*``", and zero spaces between +the "``*``" and the name. (e.g. "``PyObject *foo``", not "``PyObject* +foo``") + +The "name" must be a legal C identifier. + +The "default" is a Python value. Default values are optional; if not +specified you must omit the equals sign too. Parameters which don't +have a default are implicitly required. The default value is +dynamically assigned, "live" in the generated C code, and although +it's specified as a Python value, it's translated into a native C +value in the generated C code. + +It's explicitly permitted to end the parameter declaration line with a +semicolon, though the semicolon is optional. This is intended to +allow directly cutting and pasting in declarations from C code. +However, the preferred style is without the semicolon. + + +Flags +----- + +"Flags" are like "``make -D``" arguments. They're unordered. Flags +lines are parsed much like the shell (specifically, using +``shlex.split()`` [5]_ ). You can have as many flag lines as you +like. Specifying a flag twice is currently an error. + +Supported flags for functions: + +``basename`` + The basename to use for the generated C functions. By default this + is the name of the function from the DSL, only with periods replaced + by underscores. + +``positional-only`` + This function only supports positional parameters, not keyword + parameters. See `Functions With Positional-Only Parameters`_ below. + +Supported flags for parameters: + +``bitwise`` + If the Python integer passed in is signed, copy the bits directly + even if it is negative. 
Only valid for unsigned integer types. + +``converter`` + Backwards-compatibility support for parameter "converter" + functions. [6]_ The value should be the name of the converter + function in C. Only valid when the type of the parameter is + ``void *``. + +``default`` + The Python value to use in place of the parameter's actual default + in Python contexts. Specifically, when specified, this value will + be used for the parameter's default in the docstring, and in the + ``Signature``. (TBD: If the string is a valid Python expression + which can be rendered into a Python value using ``eval()``, then the + result of ``eval()`` on it will be used as the default in the + ``Signature``.) Ignored if there is no default. + +``encoding`` + Encoding to use when encoding a Unicode string to a ``char *``. + Only valid when the type of the parameter is ``char *``. + +``group=`` + This parameter is part of a group of options that must either all be + specified or none specified. Parameters in the same "group" must be + contiguous. The value of the group flag is the name used for the + group variable, and therefore must be legal as a C identifier. Only + valid for functions marked "``positional-only``"; see `Functions + With Positional-Only Parameters`_ below. + +``immutable`` + Only accept immutable values. + +``keyword-only`` + This parameter (and all subsequent parameters) is keyword-only. + Keyword-only parameters must also be optional parameters. Not valid + for positional-only functions. + +``length`` + This is an iterable type, and we also want its length. The DSL will + generate a second ``Py_ssize_t`` variable; its name will be this + parameter's name appended with "``_length``". + +``nullable`` + ``None`` is a legal argument for this parameter. If ``None`` is + supplied on the Python side, the equivalent C argument will be + ``NULL``. Only valid for pointer types. + +``required`` + Normally any parameter that has a default value is automatically + optional. 
A parameter that has "required" set will be considered + required (non-optional) even if it has a default value. The + generated documentation will also not show any default value. + +``types`` + Space-separated list of acceptable Python types for this object. + There are also four special-case types which represent Python + protocols: + + * buffer + * mapping + * number + * sequence + +``zeroes`` + This parameter is a string type, and its value should be allowed to + have embedded zeroes. Not valid for all varieties of string + parameters. + + +Python Code +----------- + +Argument Clinic also permits embedding Python code inside C files, +which is executed in-place when Argument Clinic processes the file. +Embedded code looks like this: + +:: + + /*[python] + + # this is python code! + print("/" + "* Hello world! *" + "/") + + [python]*/ + +Any Python code is valid. Python code sections in Argument Clinic can +also be used to modify Clinic's behavior at runtime; for example, see +`Extending Argument Clinic`_. + + +Output +====== + +Argument Clinic writes its output in-line in the C file, immediately +after the section of Clinic code. For "python" sections, the output +is everything printed using ``builtins.print``. For "clinic" +sections, the output is valid C code, including: + + * a ``#define`` providing the correct ``methoddef`` structure for the + function + * a prototype for the "impl" function -- this is what you'll write + to implement this function + * a function that handles all argument processing, which calls your + "impl" function + * the definition line of the "impl" function + * and a comment indicating the end of output. + +The intention is that you will write the body of your impl function +immediately after the output -- as in, you write a left-curly-brace +immediately after the end-of-output comment and write the +implementation of the builtin in the body there. (It's a bit strange +at first, but oddly convenient.) 
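[Editorial note: the embedded-Python example above splits the comment delimiters deliberately. Since the ``/*[python] ... [python]*/`` section lives inside a C comment, the Python source must never contain a literal ``*/``; the string concatenation keeps it out while still printing a real C comment:]

```python
# "/" + "* ... *" + "/" avoids a literal "*/" in the embedded Python source,
# which would otherwise terminate the surrounding C comment early.
line = "/" + "* Hello world! *" + "/"
print(line)  # -> /* Hello world! */
```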
+ +Argument Clinic will define the parameters of the impl function for +you. The function will take the "self" parameter passed in +originally, all the parameters you define, and possibly some extra +generated parameters ("length" parameters; also "group" parameters, +see next section). + +Argument Clinic also writes a checksum for the output section. This +is a valuable safety feature: if you modify the output by hand, Clinic +will notice that the checksum doesn't match, and will refuse to +overwrite the file. (You can force Clinic to overwrite with the +"``-f``" command-line argument; Clinic will also ignore the checksums +when using the "``-o``" command-line argument.) + + +Functions With Positional-Only Parameters +========================================= + +A significant fraction of Python builtins implemented in C use the +older positional-only API for processing arguments +(``PyArg_ParseTuple()``). In some instances, these builtins parse +their arguments differently based on how many arguments were passed +in. This can provide some bewildering flexibility: there may be +groups of optional parameters, which must either all be specified or +none specified. And occasionally these groups are on the *left!* (For +example: ``curses.window.addch()``.) + +Argument Clinic supports these legacy use-cases with a special set of +flags. First, set the flag "``positional-only``" on the entire +function. Then, for every group of parameters that is collectively +optional, add a "``group=``" flag with a unique string to all the +parameters in that group. Note that these groups are permitted on the +right *or left* of any required parameters! However, all groups +(including the group of required parameters) must be contiguous. + +The impl function generated by Clinic will add an extra parameter for +every group, "``int _group``". This argument will be nonzero +if the group was specified on this call, and zero if it was not. 
+ +Note that when operating in this mode, you cannot specify default +arguments. You can simulate defaults by putting parameters in +individual groups and detecting whether or not they were specified; +generally speaking it's better to simply not use "positional-only" +where it isn't absolutely necessary. (TBD: It might be possible to +relax this restriction. But adding default arguments into the mix of +groups would seemingly make calculating which groups are active a good +deal harder.) + +Also, note that it's possible to specify a set of groups to a function +such that there are several valid mappings from the number of +arguments to a valid set of groups. If this happens, Clinic will exit +with an error message. This should not be a problem, as +positional-only operation is only intended for legacy use cases, and +all the legacy functions using this quirky behavior should have +unambiguous mappings. + + +Current Status +============== + +As of this writing, there is a working prototype implementation of +Argument Clinic available online. [7]_ The prototype implements the +syntax above, and generates code using the existing ``PyArg_Parse`` +APIs. It supports translating to all current format units except +``"w*"``. Sample functions using Argument Clinic exercise all major +features, including positional-only argument parsing. + + +Extending Argument Clinic +------------------------- + +The prototype also currently provides an experimental extension +mechanism, allowing adding support for new types on-the-fly. See +``Modules/posixmodule.c`` in the prototype for an example of its use. + + +Notes / TBD +=========== + +* Guido proposed having the "function docstring" be hand-written inline, + in the middle of the output, something like this: + + :: + + /*[clinic] + ... prototype and parameters (including parameter docstrings) go here + [clinic]*/ + ... some output ... + /*[clinic docstring start]*/ + ... 
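[Editorial note: the ambiguity check described above -- "several valid mappings from the number of arguments to a valid set of groups" -- can be illustrated with a sketch. This is not Clinic's actual algorithm, just one way to detect when two different group selections consume the same number of arguments; the function name and the subset-enumeration strategy are assumptions:]

```python
from itertools import combinations

def ambiguous_counts(required, group_sizes):
    """Return {arg_count: selections} for counts reachable by more than
    one all-or-nothing group selection (a sketch of the ambiguity check)."""
    seen = {}
    n = len(group_sizes)
    for k in range(n + 1):
        for chosen in combinations(range(n), k):
            total = required + sum(group_sizes[i] for i in chosen)
            seen.setdefault(total, []).append(chosen)
    return {t: sels for t, sels in seen.items() if len(sels) > 1}

# curses.window.addch-style: ch required, an optional (y, x) group on the
# left and an optional (attr,) group on the right -- every argument count
# picks out exactly one selection, so the mapping is unambiguous:
assert ambiguous_counts(1, [2, 1]) == {}

# Two optional one-parameter groups, by contrast, collide: two arguments
# could mean "first group given" or "second group given".
assert 2 in ambiguous_counts(1, [1, 1])
```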
hand-edited function docstring goes here <-- you edit this by hand! + /*[clinic docstring end]*/ + ... more output + /*[clinic output end]*/ + + I tried it this way and don't like it -- I think it's clumsy. I + prefer that everything you write goes in one place, rather than + having an island of hand-edited stuff in the middle of the DSL + output. + +* Do we need to support tuple unpacking? (The "``(OOO)``" style + format string.) Boy I sure hope not. + +* What about Python functions that take no arguments? This syntax + doesn't provide for that. Perhaps a lone indented "None" should + mean "no arguments"? + +* This approach removes some dynamism / flexibility. With the + existing syntax one could theoretically pass in different encodings + at runtime for the "``es``"/"``et``" format units. AFAICT CPython + doesn't do this itself, however it's possible external users might + do this. (Trivia: there are no uses of "``es``" exercised by + regrtest, and all the uses of "``et``" exercised are in + socketmodule.c, except for one in _ssl.c. They're all static, + specifying the encoding ``"idna"``.) + +* Right now the "basename" flag on a function changes the ``#define + methoddef`` name too. Should it, or should the #define'd methoddef + name always be ``{module_name}_{function_name}`` ? + + +References +========== + +.. [1] ``PyArg_ParseTuple()``: + http://docs.python.org/3/c-api/arg.html#PyArg_ParseTuple + +.. [2] ``PyArg_ParseTupleAndKeywords()``: + http://docs.python.org/3/c-api/arg.html#PyArg_ParseTupleAndKeywords + +.. [3] ``PyArg_`` format units: + http://docs.python.org/3/c-api/arg.html#strings-and-buffers + +.. [4] Keyword parameters for extension functions: + http://docs.python.org/3/extending/extending.html#keyword-parameters-for-extension-functions + +.. [5] ``shlex.split()``: + http://docs.python.org/3/library/shlex.html#shlex.split + +.. 
[6] ``PyArg_`` "converter" functions, see ``"O&"`` in this section: + http://docs.python.org/3/c-api/arg.html#other-objects + +.. [7] Argument Clinic prototype: + https://bitbucket.org/larry/python-clinic/ + + +Copyright +========= + +This document has been placed in the public domain. + + + +.. + Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Mon Feb 25 16:15:07 2013 From: python-checkins at python.org (barry.warsaw) Date: Mon, 25 Feb 2013 16:15:07 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_PEP_435=2C_Adding_an_Enum_typ?= =?utf-8?q?e_to_the_Python_standard_library=2C_Bendersky/Warsaw?= Message-ID: <3ZF6Cb3mhWzM4Q@mail.python.org> http://hg.python.org/peps/rev/102cbc9b1b29 changeset: 4775:102cbc9b1b29 user: Barry Warsaw date: Mon Feb 25 10:15:03 2013 -0500 summary: PEP 435, Adding an Enum type to the Python standard library, Bendersky/Warsaw files: pepdraft-0435.txt | 118 ++++++++++++++++++--------------- 1 files changed, 63 insertions(+), 55 deletions(-) diff --git a/pepdraft-0435.txt b/pep-0435.txt rename from pepdraft-0435.txt rename to pep-0435.txt --- a/pepdraft-0435.txt +++ b/pep-0435.txt @@ -16,52 +16,51 @@ ======== This PEP proposes adding an enumeration type to the Python standard library. -Specifically, it proposes moving the existing ``flufl.enum`` package by -Barry Warsaw into the standard library. Much of this PEP is based on the -"using" document from the documentation of ``flufl.enum``. +Specifically, it proposes moving the existing ``flufl.enum`` package by Barry +Warsaw into the standard library. Much of this PEP is based on the "using" +[1]_ document from the documentation of ``flufl.enum``. An enumeration is a set of symbolic names bound to unique, constant integer -values. 
Within an enumeration, the values can be compared by identity, and -the enumeration itself can be iterated over. Enumeration items can be -converted to and from their integer equivalents, supporting use cases such as -storing enumeration values in a database. +values. Within an enumeration, the values can be compared by identity, and the +enumeration itself can be iterated over. Enumeration items can be converted to +and from their integer equivalents, supporting use cases such as storing +enumeration values in a database. Status of discussions ===================== -The idea of adding an enum type to Python is not new - PEP 354 is a previous -attempt that was rejected in 2005. Recently a new set of discussions was -initiated [#]_ on the ``python-ideas`` mailing list. Many new ideas were -proposed in several threads; after a lengthy discussion Guido proposed -adding ``flufl.enum`` to the standard library [#]_. This PEP is an attempt to -formalize this decision as well as discuss a number of variations that can -be considered for inclusion. +The idea of adding an enum type to Python is not new - PEP 354 [2]_ is a +previous attempt that was rejected in 2005. Recently a new set of discussions +was initiated [3]_ on the ``python-ideas`` mailing list. Many new ideas were +proposed in several threads; after a lengthy discussion Guido proposed adding +``flufl.enum`` to the standard library [4]_. This PEP is an attempt to +formalize this decision as well as discuss a number of variations that can be +considered for inclusion. + Motivation ========== *[Based partly on the Motivation stated in PEP 354]* -The properties of an enumeration are useful for defining an immutable, -related set of constant values that have a defined sequence but no -inherent semantic meaning. Classic examples are days of the week -(Sunday through Saturday) and school assessment grades ('A' through -'D', and 'F'). Other examples include error status values and states -within a defined process. 
+The properties of an enumeration are useful for defining an immutable, related +set of constant values that have a defined sequence but no inherent semantic +meaning. Classic examples are days of the week (Sunday through Saturday) and +school assessment grades ('A' through 'D', and 'F'). Other examples include +error status values and states within a defined process. -It is possible to simply define a sequence of values of some other -basic type, such as ``int`` or ``str``, to represent discrete -arbitrary values. However, an enumeration ensures that such values -are distinct from any others including, importantly, values within other -enumerations, and that operations without meaning ("Wednesday times two") -are not defined for these values. It also provides a convenient printable -representation of enum values without requiring tedious repetition while -defining them (i.e. no ``GREEN = 'green'``). +It is possible to simply define a sequence of values of some other basic type, +such as ``int`` or ``str``, to represent discrete arbitrary values. However, +an enumeration ensures that such values are distinct from any others including, +importantly, values within other enumerations, and that operations without +meaning ("Wednesday times two") are not defined for these values. It also +provides a convenient printable representation of enum values without requiring +tedious repetition while defining them (i.e. no ``GREEN = 'green'``). -Module & type name -================== +Module and type name +==================== We propose to add a module named ``enum`` to the standard library. The main type exposed by this module is ``Enum``. Hence, to import the ``Enum`` type @@ -69,6 +68,7 @@ >>> from enum import Enum + Proposed semantics for the new enumeration type =============================================== @@ -78,8 +78,8 @@ Enumerations are created using the class syntax, which makes them easy to read and write. 
Every enumeration value must have a unique integer value and the only restriction on their names is that they must be valid Python identifiers. -To define an enumeration, derive from the ``Enum`` class and add attributes with -assignment to their integer values:: +To define an enumeration, derive from the ``Enum`` class and add attributes +with assignment to their integer values:: >>> from enum import Enum >>> class Colors(Enum): @@ -247,8 +247,8 @@ >>> int(Colors.blue) 3 -You can also convert back to the enumeration value by calling the Enum subclass, -passing in the integer value for the item you want:: +You can also convert back to the enumeration value by calling the Enum +subclass, passing in the integer value for the item you want:: >>> Colors(1) @@ -358,7 +358,7 @@ You can also create enumerations using the convenience function ``make()``, which takes an iterable object or dictionary to provide the item names and -values. ``make()`` is a static method. +values. ``make()`` is a module-level function. The first argument to ``make()`` is the name of the enumeration, and it returns the so-named `Enum` subclass. The second argument is a *source* which can be @@ -382,11 +382,13 @@ >>> enum.make('Flags', zip(list('abcdefg'), enumiter())) + Proposed variations =================== Some variations were proposed during the discussions in the mailing list. -Here's some of the more popular: +Here's some of the more popular ones. + Not having to specify values for enums -------------------------------------- @@ -400,11 +402,12 @@ The values get actually assigned only when first looked up. Pros: cleaner syntax that requires less typing for a very common task (just -listing enumertion names without caring about the values). +listing enumeration names without caring about the values). Cons: involves much magic in the implementation, which makes even the -definition of such enums baffling when first seen. Besides, explicit is better -than implicit. 
+definition of such enums baffling when first seen. Besides, explicit is +better than implicit. + Using special names or forms to auto-assign enum values ------------------------------------------------------- @@ -434,8 +437,9 @@ Pros: no need to manually enter values. Makes it easier to change the enum and extend it, especially for large enumerations. -Cons: actually longer to type in many simple cases. The argument of -explicit vs. implicit applies here as well. +Cons: actually longer to type in many simple cases. The argument of explicit +vs. implicit applies here as well. + Use-cases in the standard library ================================= @@ -447,8 +451,8 @@ User-code facing constants like ``os.SEEK_*``, ``socket`` module constants, decimal rounding modes, HTML error codes could benefit from being enums had -they been implemented this way from the beginning. At this point, however, -at the risk of breaking user code (that relies on the constants' actual values +they been implemented this way from the beginning. At this point, however, at +the risk of breaking user code (that relies on the constants' actual values rather than their meaning) such a change cannot be made. This does not mean that future uses in the stdlib can't use an enum for defining new user-code facing constants. @@ -457,12 +461,13 @@ stdlib modules. It appears that nothing should stand in the way of implementing such constants with enums. Some examples uncovered by a very partial skim through the stdlib: ``binhex``, ``imaplib``, ``http/client``, -``urllib/robotparser``, ``idlelib``, ``concurrent.futures``, ``turledemo``. +``urllib/robotparser``, ``idlelib``, ``concurrent.futures``, ``turtledemo``. In addition, looking at the code of the Twisted library, there are many use -cases for replacing internal state constants with enums. 
The same can be -said about a lot of networking code (especially implementation of protocols) -and can be seen in test protocols written with the Tulip library as well. +cases for replacing internal state constants with enums. The same can be said +about a lot of networking code (especially implementation of protocols) and +can be seen in test protocols written with the Tulip library as well. + Differences from PEP 354 ======================== @@ -486,8 +491,8 @@ complexity, though minimal, is hidden from users of the enumeration. Unlike PEP 354, enumeration values can only be tested by identity comparison. -This is to emphasise the fact that enumeration values are singletons, much like -``None``. +This is to emphasize the fact that enumeration values are singletons, much +like ``None``. Acknowledgments @@ -495,28 +500,31 @@ This PEP describes the ``flufl.enum`` package by Barry Warsaw. ``flufl.enum`` is based on an example by Jeremy Hylton. It has been modified and extended -by Barry Warsaw for use in the GNU Mailman [#]_ project. Ben Finney is the +by Barry Warsaw for use in the GNU Mailman [5]_ project. Ben Finney is the author of the earlier enumeration PEP 354. + References ========== -.. [#] http://mail.python.org/pipermail/python-ideas/2013-January/019003.html -.. [#] http://mail.python.org/pipermail/python-ideas/2013-February/019373.html -.. [#] http://www.list.org +.. [1] http://pythonhosted.org/flufl.enum/docs/using.html +.. [2] http://www.python.org/dev/peps/pep-0354/ +.. [3] http://mail.python.org/pipermail/python-ideas/2013-January/019003.html +.. [4] http://mail.python.org/pipermail/python-ideas/2013-February/019373.html +.. [5] http://www.list.org + Copyright ========= This document has been placed in the public domain. + Todo ==== * Mark PEP 354 "superseded by" this one, if accepted * New package name within stdlib - enum? (top-level) - * "Convenience API" says "make() is a static method" - what does this mean? 
- make seems to be a simple module-level function in the implementation. * For make, can we add an API like namedtuple's? make('Animals, 'ant bee cat dog') I.e. when make sees a string argument it splits it, making it similar to a -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Mon Feb 25 16:23:12 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Mon, 25 Feb 2013 16:23:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE1MDgz?= =?utf-8?q?=3A_Convert_ElementTree_doctests_to_unittests=2E?= Message-ID: <3ZF6Nw6lsvzN2r@mail.python.org> http://hg.python.org/cpython/rev/af570205b978 changeset: 82383:af570205b978 branch: 3.3 parent: 82380:5a4b3094903f user: Serhiy Storchaka date: Mon Feb 25 17:20:59 2013 +0200 summary: Issue #15083: Convert ElementTree doctests to unittests. files: Lib/test/test_xml_etree.py | 2255 ++++++++++------------- 1 files changed, 1007 insertions(+), 1248 deletions(-) diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -87,21 +87,26 @@ """ -def sanity(): - """ - Import sanity. +ENTITY_XML = """\ + +%user-entities; +]> +&entity; +""" - >>> from xml.etree import ElementTree - >>> from xml.etree import ElementInclude - >>> from xml.etree import ElementPath - """ -def check_method(method): - if not hasattr(method, '__call__'): - print(method, "not callable") +class ModuleTest(unittest.TestCase): + + def test_sanity(self): + # Import sanity. 
+ + from xml.etree import ElementTree + from xml.etree import ElementInclude + from xml.etree import ElementPath + def serialize(elem, to_string=True, encoding='unicode', **options): - import io if encoding != 'unicode': file = io.BytesIO() else: @@ -114,68 +119,9 @@ file.seek(0) return file -def summarize(elem): - if elem.tag == ET.Comment: - return "" - return elem.tag +def summarize_list(seq): + return [elem.tag for elem in seq] -def summarize_list(seq): - return [summarize(elem) for elem in seq] - -def normalize_crlf(tree): - for elem in tree.iter(): - if elem.text: - elem.text = elem.text.replace("\r\n", "\n") - if elem.tail: - elem.tail = elem.tail.replace("\r\n", "\n") - -def normalize_exception(func, *args, **kwargs): - # Ignore the exception __module__ - try: - func(*args, **kwargs) - except Exception as err: - print("Traceback (most recent call last):") - print("{}: {}".format(err.__class__.__name__, err)) - -def check_string(string): - len(string) - for char in string: - if len(char) != 1: - print("expected one-character string, got %r" % char) - new_string = string + "" - new_string = string + " " - string[:0] - -def check_mapping(mapping): - len(mapping) - keys = mapping.keys() - items = mapping.items() - for key in keys: - item = mapping[key] - mapping["key"] = "value" - if mapping["key"] != "value": - print("expected value string, got %r" % mapping["key"]) - -def check_element(element): - if not ET.iselement(element): - print("not an element") - if not hasattr(element, "tag"): - print("no tag member") - if not hasattr(element, "attrib"): - print("no attrib member") - if not hasattr(element, "text"): - print("no text member") - if not hasattr(element, "tail"): - print("no tail member") - - check_string(element.tag) - check_mapping(element.attrib) - if element.text is not None: - check_string(element.text) - if element.tail is not None: - check_string(element.tail) - for elem in element: - check_element(elem) class ElementTestCase: @classmethod @@ 
-212,837 +158,757 @@ # -------------------------------------------------------------------- # element tree tests -def interface(): - """ - Test element tree interface. +class ElementTreeTest(unittest.TestCase): - >>> element = ET.Element("tag") - >>> check_element(element) - >>> tree = ET.ElementTree(element) - >>> check_element(tree.getroot()) + def serialize_check(self, elem, expected): + self.assertEqual(serialize(elem), expected) - >>> element = ET.Element("t\\xe4g", key="value") - >>> tree = ET.ElementTree(element) - >>> repr(element) # doctest: +ELLIPSIS - "" - >>> element = ET.Element("tag", key="value") + def test_interface(self): + # Test element tree interface. - Make sure all standard element methods exist. + def check_string(string): + len(string) + for char in string: + self.assertEqual(len(char), 1, + msg="expected one-character string, got %r" % char) + new_string = string + "" + new_string = string + " " + string[:0] - >>> check_method(element.append) - >>> check_method(element.extend) - >>> check_method(element.insert) - >>> check_method(element.remove) - >>> check_method(element.getchildren) - >>> check_method(element.find) - >>> check_method(element.iterfind) - >>> check_method(element.findall) - >>> check_method(element.findtext) - >>> check_method(element.clear) - >>> check_method(element.get) - >>> check_method(element.set) - >>> check_method(element.keys) - >>> check_method(element.items) - >>> check_method(element.iter) - >>> check_method(element.itertext) - >>> check_method(element.getiterator) + def check_mapping(mapping): + len(mapping) + keys = mapping.keys() + items = mapping.items() + for key in keys: + item = mapping[key] + mapping["key"] = "value" + self.assertEqual(mapping["key"], "value", + msg="expected value string, got %r" % mapping["key"]) - These methods return an iterable. See bug 6472. 
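[Editor's aside: the shape of this conversion is the same throughout the patch: a doctest example and its expected output become a test method with explicit assertions, and a doctest that showed a traceback becomes ``assertRaises``. A self-contained miniature of the pattern; the class and method names are illustrative, not from the patch.]

```python
import unittest
import xml.etree.ElementTree as ET

class ConversionSketch(unittest.TestCase):
    # Doctest form:
    #     >>> ET.XML("<body><tag/></body>").find("tag").tag
    #     'tag'
    # becomes an explicit assertion:
    def test_find(self):
        elem = ET.XML("<body><tag/></body>")
        self.assertEqual(elem.find("tag").tag, "tag")

    # Doctest form (expected traceback) becomes assertRaises:
    def test_remove_missing(self):
        elem = ET.Element("tag")
        with self.assertRaises(ValueError):
            elem.remove(ET.Element("other"))

# Run the two converted checks programmatically.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ConversionSketch).run(result)
assert result.wasSuccessful()
```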
+ def check_element(element): + self.assertTrue(ET.iselement(element), msg="not an element") + self.assertTrue(hasattr(element, "tag"), msg="no tag member") + self.assertTrue(hasattr(element, "attrib"), msg="no attrib member") + self.assertTrue(hasattr(element, "text"), msg="no text member") + self.assertTrue(hasattr(element, "tail"), msg="no tail member") - >>> check_method(element.iterfind("tag").__next__) - >>> check_method(element.iterfind("*").__next__) - >>> check_method(tree.iterfind("tag").__next__) - >>> check_method(tree.iterfind("*").__next__) + check_string(element.tag) + check_mapping(element.attrib) + if element.text is not None: + check_string(element.text) + if element.tail is not None: + check_string(element.tail) + for elem in element: + check_element(elem) - These aliases are provided: + element = ET.Element("tag") + check_element(element) + tree = ET.ElementTree(element) + check_element(tree.getroot()) + element = ET.Element("t\xe4g", key="value") + tree = ET.ElementTree(element) + self.assertRegex(repr(element), r"^$") + element = ET.Element("tag", key="value") - >>> assert ET.XML == ET.fromstring - >>> assert ET.PI == ET.ProcessingInstruction - >>> assert ET.XMLParser == ET.XMLTreeBuilder - """ + # Make sure all standard element methods exist. -def simpleops(): - """ - Basic method sanity checks. 
+ def check_method(method): + self.assertTrue(hasattr(method, '__call__'), + msg="%s not callable" % method) - >>> elem = ET.XML("") - >>> serialize(elem) - '' - >>> e = ET.Element("tag2") - >>> elem.append(e) - >>> serialize(elem) - '' - >>> elem.remove(e) - >>> serialize(elem) - '' - >>> elem.insert(0, e) - >>> serialize(elem) - '' - >>> elem.remove(e) - >>> elem.extend([e]) - >>> serialize(elem) - '' - >>> elem.remove(e) + check_method(element.append) + check_method(element.extend) + check_method(element.insert) + check_method(element.remove) + check_method(element.getchildren) + check_method(element.find) + check_method(element.iterfind) + check_method(element.findall) + check_method(element.findtext) + check_method(element.clear) + check_method(element.get) + check_method(element.set) + check_method(element.keys) + check_method(element.items) + check_method(element.iter) + check_method(element.itertext) + check_method(element.getiterator) - >>> element = ET.Element("tag", key="value") - >>> serialize(element) # 1 - '' - >>> subelement = ET.Element("subtag") - >>> element.append(subelement) - >>> serialize(element) # 2 - '' - >>> element.insert(0, subelement) - >>> serialize(element) # 3 - '' - >>> element.remove(subelement) - >>> serialize(element) # 4 - '' - >>> element.remove(subelement) - >>> serialize(element) # 5 - '' - >>> element.remove(subelement) - Traceback (most recent call last): - ValueError: list.remove(x): x not in list - >>> serialize(element) # 6 - '' - >>> element[0:0] = [subelement, subelement, subelement] - >>> serialize(element[1]) - '' - >>> element[1:9] == [element[1], element[2]] - True - >>> element[:9:2] == [element[0], element[2]] - True - >>> del element[1:2] - >>> serialize(element) - '' - """ + # These methods return an iterable. See bug 6472. -def cdata(): - """ - Test CDATA handling (etc). 
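[Editor's aside: the behavior the cdata/entity checks exercise boils down to: character and entity references are resolved while parsing, and reserved characters are re-escaped on serialization. A small sketch of that round trip, with made-up sample XML rather than the test file's data.]

```python
import xml.etree.ElementTree as ET

# A character reference is resolved while parsing...
elem = ET.XML("<tag>hello&#38;world</tag>")
assert elem.text == "hello&world"

# ...and the reserved character is escaped again on serialization.
assert ET.tostring(elem, encoding="unicode") == "<tag>hello&amp;world</tag>"
```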
+ def check_iter(it): + check_method(it.__next__) - >>> serialize(ET.XML("hello")) - 'hello' - >>> serialize(ET.XML("hello")) - 'hello' - >>> serialize(ET.XML("")) - 'hello' - """ + check_iter(element.iterfind("tag")) + check_iter(element.iterfind("*")) + check_iter(tree.iterfind("tag")) + check_iter(tree.iterfind("*")) -def file_init(): - """ - >>> import io + # These aliases are provided: - >>> stringfile = io.BytesIO(SAMPLE_XML.encode("utf-8")) - >>> tree = ET.ElementTree(file=stringfile) - >>> tree.find("tag").tag - 'tag' - >>> tree.find("section/tag").tag - 'tag' + self.assertEqual(ET.XML, ET.fromstring) + self.assertEqual(ET.PI, ET.ProcessingInstruction) + self.assertEqual(ET.XMLParser, ET.XMLTreeBuilder) - >>> tree = ET.ElementTree(file=SIMPLE_XMLFILE) - >>> tree.find("element").tag - 'element' - >>> tree.find("element/../empty-element").tag - 'empty-element' - """ + def test_simpleops(self): + # Basic method sanity checks. -def path_cache(): - """ - Check that the path cache behaves sanely. 
+ elem = ET.XML("") + self.serialize_check(elem, '') + e = ET.Element("tag2") + elem.append(e) + self.serialize_check(elem, '') + elem.remove(e) + self.serialize_check(elem, '') + elem.insert(0, e) + self.serialize_check(elem, '') + elem.remove(e) + elem.extend([e]) + self.serialize_check(elem, '') + elem.remove(e) - >>> from xml.etree import ElementPath + element = ET.Element("tag", key="value") + self.serialize_check(element, '') # 1 + subelement = ET.Element("subtag") + element.append(subelement) + self.serialize_check(element, '') # 2 + element.insert(0, subelement) + self.serialize_check(element, + '') # 3 + element.remove(subelement) + self.serialize_check(element, '') # 4 + element.remove(subelement) + self.serialize_check(element, '') # 5 + with self.assertRaises(ValueError) as cm: + element.remove(subelement) + self.assertEqual(str(cm.exception), 'list.remove(x): x not in list') + self.serialize_check(element, '') # 6 + element[0:0] = [subelement, subelement, subelement] + self.serialize_check(element[1], '') + self.assertEqual(element[1:9], [element[1], element[2]]) + self.assertEqual(element[:9:2], [element[0], element[2]]) + del element[1:2] + self.serialize_check(element, + '') - >>> elem = ET.XML(SAMPLE_XML) - >>> for i in range(10): ET.ElementTree(elem).find('./'+str(i)) - >>> cache_len_10 = len(ElementPath._cache) - >>> for i in range(10): ET.ElementTree(elem).find('./'+str(i)) - >>> len(ElementPath._cache) == cache_len_10 - True - >>> for i in range(20): ET.ElementTree(elem).find('./'+str(i)) - >>> len(ElementPath._cache) > cache_len_10 - True - >>> for i in range(600): ET.ElementTree(elem).find('./'+str(i)) - >>> len(ElementPath._cache) < 500 - True - """ + def test_cdata(self): + # Test CDATA handling (etc). -def copy(): - """ - Test copy handling (etc). 
+ self.serialize_check(ET.XML("hello"), + 'hello') + self.serialize_check(ET.XML("hello"), + 'hello') + self.serialize_check(ET.XML(""), + 'hello') - >>> import copy - >>> e1 = ET.XML("hello") - >>> e2 = copy.copy(e1) - >>> e3 = copy.deepcopy(e1) - >>> e1.find("foo").tag = "bar" - >>> serialize(e1) - 'hello' - >>> serialize(e2) - 'hello' - >>> serialize(e3) - 'hello' + def test_file_init(self): + stringfile = io.BytesIO(SAMPLE_XML.encode("utf-8")) + tree = ET.ElementTree(file=stringfile) + self.assertEqual(tree.find("tag").tag, 'tag') + self.assertEqual(tree.find("section/tag").tag, 'tag') - """ + tree = ET.ElementTree(file=SIMPLE_XMLFILE) + self.assertEqual(tree.find("element").tag, 'element') + self.assertEqual(tree.find("element/../empty-element").tag, + 'empty-element') -def attrib(): - """ - Test attribute handling. + def test_path_cache(self): + # Check that the path cache behaves sanely. - >>> elem = ET.Element("tag") - >>> elem.get("key") # 1.1 - >>> elem.get("key", "default") # 1.2 - 'default' - >>> elem.set("key", "value") - >>> elem.get("key") # 1.3 - 'value' + from xml.etree import ElementPath - >>> elem = ET.Element("tag", key="value") - >>> elem.get("key") # 2.1 - 'value' - >>> elem.attrib # 2.2 - {'key': 'value'} + elem = ET.XML(SAMPLE_XML) + for i in range(10): ET.ElementTree(elem).find('./'+str(i)) + cache_len_10 = len(ElementPath._cache) + for i in range(10): ET.ElementTree(elem).find('./'+str(i)) + self.assertEqual(len(ElementPath._cache), cache_len_10) + for i in range(20): ET.ElementTree(elem).find('./'+str(i)) + self.assertGreater(len(ElementPath._cache), cache_len_10) + for i in range(600): ET.ElementTree(elem).find('./'+str(i)) + self.assertLess(len(ElementPath._cache), 500) - >>> attrib = {"key": "value"} - >>> elem = ET.Element("tag", attrib) - >>> attrib.clear() # check for aliasing issues - >>> elem.get("key") # 3.1 - 'value' - >>> elem.attrib # 3.2 - {'key': 'value'} + def test_copy(self): + # Test copy handling (etc). 
- >>> attrib = {"key": "value"} - >>> elem = ET.Element("tag", **attrib) - >>> attrib.clear() # check for aliasing issues - >>> elem.get("key") # 4.1 - 'value' - >>> elem.attrib # 4.2 - {'key': 'value'} + import copy + e1 = ET.XML("hello") + e2 = copy.copy(e1) + e3 = copy.deepcopy(e1) + e1.find("foo").tag = "bar" + self.serialize_check(e1, 'hello') + self.serialize_check(e2, 'hello') + self.serialize_check(e3, 'hello') - >>> elem = ET.Element("tag", {"key": "other"}, key="value") - >>> elem.get("key") # 5.1 - 'value' - >>> elem.attrib # 5.2 - {'key': 'value'} + def test_attrib(self): + # Test attribute handling. - >>> elem = ET.Element('test') - >>> elem.text = "aa" - >>> elem.set('testa', 'testval') - >>> elem.set('testb', 'test2') - >>> ET.tostring(elem) - b'aa' - >>> sorted(elem.keys()) - ['testa', 'testb'] - >>> sorted(elem.items()) - [('testa', 'testval'), ('testb', 'test2')] - >>> elem.attrib['testb'] - 'test2' - >>> elem.attrib['testb'] = 'test1' - >>> elem.attrib['testc'] = 'test2' - >>> ET.tostring(elem) - b'aa' - """ + elem = ET.Element("tag") + elem.get("key") # 1.1 + self.assertEqual(elem.get("key", "default"), 'default') # 1.2 -def makeelement(): - """ - Test makeelement handling. + elem.set("key", "value") + self.assertEqual(elem.get("key"), 'value') # 1.3 - >>> elem = ET.Element("tag") - >>> attrib = {"key": "value"} - >>> subelem = elem.makeelement("subtag", attrib) - >>> if subelem.attrib is attrib: - ... 
print("attrib aliasing") - >>> elem.append(subelem) - >>> serialize(elem) - '' + elem = ET.Element("tag", key="value") + self.assertEqual(elem.get("key"), 'value') # 2.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 2.2 - >>> elem.clear() - >>> serialize(elem) - '' - >>> elem.append(subelem) - >>> serialize(elem) - '' - >>> elem.extend([subelem, subelem]) - >>> serialize(elem) - '' - >>> elem[:] = [subelem] - >>> serialize(elem) - '' - >>> elem[:] = tuple([subelem]) - >>> serialize(elem) - '' + attrib = {"key": "value"} + elem = ET.Element("tag", attrib) + attrib.clear() # check for aliasing issues + self.assertEqual(elem.get("key"), 'value') # 3.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 3.2 - """ + attrib = {"key": "value"} + elem = ET.Element("tag", **attrib) + attrib.clear() # check for aliasing issues + self.assertEqual(elem.get("key"), 'value') # 4.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 4.2 -def parsefile(): - """ - Test parsing from file. + elem = ET.Element("tag", {"key": "other"}, key="value") + self.assertEqual(elem.get("key"), 'value') # 5.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 5.2 - >>> tree = ET.parse(SIMPLE_XMLFILE) - >>> normalize_crlf(tree) - >>> tree.write(sys.stdout, encoding='unicode') - - text - texttail - - - >>> tree = ET.parse(SIMPLE_NS_XMLFILE) - >>> normalize_crlf(tree) - >>> tree.write(sys.stdout, encoding='unicode') - - text - texttail - - + elem = ET.Element('test') + elem.text = "aa" + elem.set('testa', 'testval') + elem.set('testb', 'test2') + self.assertEqual(ET.tostring(elem), + b'aa') + self.assertEqual(sorted(elem.keys()), ['testa', 'testb']) + self.assertEqual(sorted(elem.items()), + [('testa', 'testval'), ('testb', 'test2')]) + self.assertEqual(elem.attrib['testb'], 'test2') + elem.attrib['testb'] = 'test1' + elem.attrib['testc'] = 'test2' + self.assertEqual(ET.tostring(elem), + b'aa') - >>> with open(SIMPLE_XMLFILE) as f: - ... 
data = f.read() + def test_makeelement(self): + # Test makeelement handling. - >>> parser = ET.XMLParser() - >>> parser.version # doctest: +ELLIPSIS - 'Expat ...' - >>> parser.feed(data) - >>> print(serialize(parser.close())) - - text - texttail - - + elem = ET.Element("tag") + attrib = {"key": "value"} + subelem = elem.makeelement("subtag", attrib) + self.assertIsNot(subelem.attrib, attrib, msg="attrib aliasing") + elem.append(subelem) + self.serialize_check(elem, '') - >>> parser = ET.XMLTreeBuilder() # 1.2 compatibility - >>> parser.feed(data) - >>> print(serialize(parser.close())) - - text - texttail - - + elem.clear() + self.serialize_check(elem, '') + elem.append(subelem) + self.serialize_check(elem, '') + elem.extend([subelem, subelem]) + self.serialize_check(elem, + '') + elem[:] = [subelem] + self.serialize_check(elem, '') + elem[:] = tuple([subelem]) + self.serialize_check(elem, '') - >>> target = ET.TreeBuilder() - >>> parser = ET.XMLParser(target=target) - >>> parser.feed(data) - >>> print(serialize(parser.close())) - - text - texttail - - - """ + def test_parsefile(self): + # Test parsing from file. 
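[Editor's aside: parsing from a file and from an in-memory stream both go through ``ElementTree(file=...)``, which the converted test drives with ``io.BytesIO``/``io.StringIO`` instead of printing to ``sys.stdout``. A reduced sketch; the sample XML is made up here, not the test's ``SIMPLE_XMLFILE``.]

```python
import io
import xml.etree.ElementTree as ET

# Parse from an in-memory binary stream instead of a file on disk.
src = io.BytesIO(b"<root><element>text</element></root>")
tree = ET.ElementTree(file=src)
assert tree.find("element").tag == "element"

# Write back to a text stream and compare the serialization, as the
# converted test does instead of writing to sys.stdout.
out = io.StringIO()
tree.write(out, encoding="unicode")
assert out.getvalue() == "<root><element>text</element></root>"
```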
-def parseliteral(): - """ - >>> element = ET.XML("text") - >>> ET.ElementTree(element).write(sys.stdout, encoding='unicode') - text - >>> element = ET.fromstring("text") - >>> ET.ElementTree(element).write(sys.stdout, encoding='unicode') - text - >>> sequence = ["", "text"] - >>> element = ET.fromstringlist(sequence) - >>> ET.tostring(element) - b'text' - >>> b"".join(ET.tostringlist(element)) - b'text' - >>> ET.tostring(element, "ascii") - b"\\ntext" - >>> _, ids = ET.XMLID("text") - >>> len(ids) - 0 - >>> _, ids = ET.XMLID("text") - >>> len(ids) - 1 - >>> ids["body"].tag - 'body' - """ + tree = ET.parse(SIMPLE_XMLFILE) + stream = io.StringIO() + tree.write(stream, encoding='unicode') + self.assertEqual(stream.getvalue(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') + tree = ET.parse(SIMPLE_NS_XMLFILE) + stream = io.StringIO() + tree.write(stream, encoding='unicode') + self.assertEqual(stream.getvalue(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') -def iterparse(): - """ - Test iterparse interface. + with open(SIMPLE_XMLFILE) as f: + data = f.read() - >>> iterparse = ET.iterparse + parser = ET.XMLParser() + self.assertRegex(parser.version, r'^Expat ') + parser.feed(data) + self.serialize_check(parser.close(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') - >>> context = iterparse(SIMPLE_XMLFILE) - >>> action, elem = next(context) - >>> print(action, elem.tag) - end element - >>> for action, elem in context: - ... print(action, elem.tag) - end element - end empty-element - end root - >>> context.root.tag - 'root' + parser = ET.XMLTreeBuilder() # 1.2 compatibility + parser.feed(data) + self.serialize_check(parser.close(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') - >>> context = iterparse(SIMPLE_NS_XMLFILE) - >>> for action, elem in context: - ... 
print(action, elem.tag) - end {namespace}element - end {namespace}element - end {namespace}empty-element - end {namespace}root + target = ET.TreeBuilder() + parser = ET.XMLParser(target=target) + parser.feed(data) + self.serialize_check(parser.close(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') - >>> events = () - >>> context = iterparse(SIMPLE_XMLFILE, events) - >>> for action, elem in context: - ... print(action, elem.tag) + def test_parseliteral(self): + element = ET.XML("text") + self.assertEqual(ET.tostring(element, encoding='unicode'), + 'text') + element = ET.fromstring("text") + self.assertEqual(ET.tostring(element, encoding='unicode'), + 'text') + sequence = ["", "text"] + element = ET.fromstringlist(sequence) + self.assertEqual(ET.tostring(element), + b'text') + self.assertEqual(b"".join(ET.tostringlist(element)), + b'text') + self.assertEqual(ET.tostring(element, "ascii"), + b"\n" + b"text") + _, ids = ET.XMLID("text") + self.assertEqual(len(ids), 0) + _, ids = ET.XMLID("text") + self.assertEqual(len(ids), 1) + self.assertEqual(ids["body"].tag, 'body') - >>> events = () - >>> context = iterparse(SIMPLE_XMLFILE, events=events) - >>> for action, elem in context: - ... print(action, elem.tag) + def test_iterparse(self): + # Test iterparse interface. - >>> events = ("start", "end") - >>> context = iterparse(SIMPLE_XMLFILE, events) - >>> for action, elem in context: - ... print(action, elem.tag) - start root - start element - end element - start element - end element - start empty-element - end empty-element - end root + iterparse = ET.iterparse - >>> events = ("start", "end", "start-ns", "end-ns") - >>> context = iterparse(SIMPLE_NS_XMLFILE, events) - >>> for action, elem in context: - ... if action in ("start", "end"): - ... print(action, elem.tag) - ... else: - ... 
print(action, elem) - start-ns ('', 'namespace') - start {namespace}root - start {namespace}element - end {namespace}element - start {namespace}element - end {namespace}element - start {namespace}empty-element - end {namespace}empty-element - end {namespace}root - end-ns None + context = iterparse(SIMPLE_XMLFILE) + action, elem = next(context) + self.assertEqual((action, elem.tag), ('end', 'element')) + self.assertEqual([(action, elem.tag) for action, elem in context], [ + ('end', 'element'), + ('end', 'empty-element'), + ('end', 'root'), + ]) + self.assertEqual(context.root.tag, 'root') - >>> events = ("start", "end", "bogus") - >>> with open(SIMPLE_XMLFILE, "rb") as f: - ... iterparse(f, events) - Traceback (most recent call last): - ValueError: unknown event 'bogus' + context = iterparse(SIMPLE_NS_XMLFILE) + self.assertEqual([(action, elem.tag) for action, elem in context], [ + ('end', '{namespace}element'), + ('end', '{namespace}element'), + ('end', '{namespace}empty-element'), + ('end', '{namespace}root'), + ]) - >>> import io + events = () + context = iterparse(SIMPLE_XMLFILE, events) + self.assertEqual([(action, elem.tag) for action, elem in context], []) - >>> source = io.BytesIO( - ... b"\\n" - ... b"text\\n") - >>> events = ("start-ns",) - >>> context = iterparse(source, events) - >>> for action, elem in context: - ... print(action, elem) - start-ns ('', 'http://\\xe9ffbot.org/ns') - start-ns ('cl\\xe9', 'http://effbot.org/ns') + events = () + context = iterparse(SIMPLE_XMLFILE, events=events) + self.assertEqual([(action, elem.tag) for action, elem in context], []) - >>> source = io.StringIO("junk") - >>> try: - ... for action, elem in iterparse(source): - ... print(action, elem.tag) - ... except ET.ParseError as v: - ... 
print(v) - end document - junk after document element: line 1, column 12 - """ + events = ("start", "end") + context = iterparse(SIMPLE_XMLFILE, events) + self.assertEqual([(action, elem.tag) for action, elem in context], [ + ('start', 'root'), + ('start', 'element'), + ('end', 'element'), + ('start', 'element'), + ('end', 'element'), + ('start', 'empty-element'), + ('end', 'empty-element'), + ('end', 'root'), + ]) -def writefile(): - """ - >>> elem = ET.Element("tag") - >>> elem.text = "text" - >>> serialize(elem) - 'text' - >>> ET.SubElement(elem, "subtag").text = "subtext" - >>> serialize(elem) - 'textsubtext' + events = ("start", "end", "start-ns", "end-ns") + context = iterparse(SIMPLE_NS_XMLFILE, events) + self.assertEqual([(action, elem.tag) if action in ("start", "end") else (action, elem) + for action, elem in context], [ + ('start-ns', ('', 'namespace')), + ('start', '{namespace}root'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}empty-element'), + ('end', '{namespace}empty-element'), + ('end', '{namespace}root'), + ('end-ns', None), + ]) - Test tag suppression - >>> elem.tag = None - >>> serialize(elem) - 'textsubtext' - >>> elem.insert(0, ET.Comment("comment")) - >>> serialize(elem) # assumes 1.3 - 'textsubtext' - >>> elem[0] = ET.PI("key", "value") - >>> serialize(elem) - 'textsubtext' - """ + events = ("start", "end", "bogus") + with self.assertRaises(ValueError) as cm: + with open(SIMPLE_XMLFILE, "rb") as f: + iterparse(f, events) + self.assertEqual(str(cm.exception), "unknown event 'bogus'") -def custom_builder(): - """ - Test parser w. custom builder. 
+ source = io.BytesIO( + b"\n" + b"text\n") + events = ("start-ns",) + context = iterparse(source, events) + self.assertEqual([(action, elem) for action, elem in context], [ + ('start-ns', ('', 'http://\xe9ffbot.org/ns')), + ('start-ns', ('cl\xe9', 'http://effbot.org/ns')), + ]) - >>> with open(SIMPLE_XMLFILE) as f: - ... data = f.read() - >>> class Builder: - ... def start(self, tag, attrib): - ... print("start", tag) - ... def end(self, tag): - ... print("end", tag) - ... def data(self, text): - ... pass - >>> builder = Builder() - >>> parser = ET.XMLParser(target=builder) - >>> parser.feed(data) - start root - start element - end element - start element - end element - start empty-element - end empty-element - end root + source = io.StringIO("junk") + it = iterparse(source) + action, elem = next(it) + self.assertEqual((action, elem.tag), ('end', 'document')) + with self.assertRaises(ET.ParseError) as cm: + next(it) + self.assertEqual(str(cm.exception), + 'junk after document element: line 1, column 12') - >>> with open(SIMPLE_NS_XMLFILE) as f: - ... data = f.read() - >>> class Builder: - ... def start(self, tag, attrib): - ... print("start", tag) - ... def end(self, tag): - ... print("end", tag) - ... def data(self, text): - ... pass - ... def pi(self, target, data): - ... print("pi", target, repr(data)) - ... def comment(self, data): - ... 
print("comment", repr(data)) - >>> builder = Builder() - >>> parser = ET.XMLParser(target=builder) - >>> parser.feed(data) - pi pi 'data' - comment ' comment ' - start {namespace}root - start {namespace}element - end {namespace}element - start {namespace}element - end {namespace}element - start {namespace}empty-element - end {namespace}empty-element - end {namespace}root + def test_writefile(self): + elem = ET.Element("tag") + elem.text = "text" + self.serialize_check(elem, 'text') + ET.SubElement(elem, "subtag").text = "subtext" + self.serialize_check(elem, 'textsubtext') - """ + # Test tag suppression + elem.tag = None + self.serialize_check(elem, 'textsubtext') + elem.insert(0, ET.Comment("comment")) + self.serialize_check(elem, + 'textsubtext') # assumes 1.3 -def getchildren(): - """ - Test Element.getchildren() + elem[0] = ET.PI("key", "value") + self.serialize_check(elem, 'textsubtext') - >>> with open(SIMPLE_XMLFILE, "rb") as f: - ... tree = ET.parse(f) - >>> for elem in tree.getroot().iter(): - ... summarize_list(elem.getchildren()) - ['element', 'element', 'empty-element'] - [] - [] - [] - >>> for elem in tree.getiterator(): - ... summarize_list(elem.getchildren()) - ['element', 'element', 'empty-element'] - [] - [] - [] + def test_custom_builder(self): + # Test parser w. custom builder. 
- >>> elem = ET.XML(SAMPLE_XML) - >>> len(elem.getchildren()) - 3 - >>> len(elem[2].getchildren()) - 1 - >>> elem[:] == elem.getchildren() - True - >>> child1 = elem[0] - >>> child2 = elem[2] - >>> del elem[1:2] - >>> len(elem.getchildren()) - 2 - >>> child1 == elem[0] - True - >>> child2 == elem[1] - True - >>> elem[0:2] = [child2, child1] - >>> child2 == elem[0] - True - >>> child1 == elem[1] - True - >>> child1 == elem[0] - False - >>> elem.clear() - >>> elem.getchildren() - [] - """ + with open(SIMPLE_XMLFILE) as f: + data = f.read() + class Builder(list): + def start(self, tag, attrib): + self.append(("start", tag)) + def end(self, tag): + self.append(("end", tag)) + def data(self, text): + pass + builder = Builder() + parser = ET.XMLParser(target=builder) + parser.feed(data) + self.assertEqual(builder, [ + ('start', 'root'), + ('start', 'element'), + ('end', 'element'), + ('start', 'element'), + ('end', 'element'), + ('start', 'empty-element'), + ('end', 'empty-element'), + ('end', 'root'), + ]) -def writestring(): - """ - >>> elem = ET.XML("text") - >>> ET.tostring(elem) - b'text' - >>> elem = ET.fromstring("text") - >>> ET.tostring(elem) - b'text' - """ + with open(SIMPLE_NS_XMLFILE) as f: + data = f.read() + class Builder(list): + def start(self, tag, attrib): + self.append(("start", tag)) + def end(self, tag): + self.append(("end", tag)) + def data(self, text): + pass + def pi(self, target, data): + self.append(("pi", target, data)) + def comment(self, data): + self.append(("comment", data)) + builder = Builder() + parser = ET.XMLParser(target=builder) + parser.feed(data) + self.assertEqual(builder, [ + ('pi', 'pi', 'data'), + ('comment', ' comment '), + ('start', '{namespace}root'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}empty-element'), + ('end', '{namespace}empty-element'), + ('end', '{namespace}root'), + ]) -def 
check_encoding(encoding): - """ - >>> check_encoding("ascii") - >>> check_encoding("us-ascii") - >>> check_encoding("iso-8859-1") - >>> check_encoding("iso-8859-15") - >>> check_encoding("cp437") - >>> check_encoding("mac-roman") - """ - ET.XML("" % encoding) -def methods(): - r""" - Test serialization methods. + def test_getchildren(self): + # Test Element.getchildren() - >>> e = ET.XML("") - >>> e.tail = "\n" - >>> serialize(e) - '\n' - >>> serialize(e, method=None) - '\n' - >>> serialize(e, method="xml") - '\n' - >>> serialize(e, method="html") - '\n' - >>> serialize(e, method="text") - '1 < 2\n' - """ + with open(SIMPLE_XMLFILE, "rb") as f: + tree = ET.parse(f) + self.assertEqual([summarize_list(elem.getchildren()) + for elem in tree.getroot().iter()], [ + ['element', 'element', 'empty-element'], + [], + [], + [], + ]) + self.assertEqual([summarize_list(elem.getchildren()) + for elem in tree.getiterator()], [ + ['element', 'element', 'empty-element'], + [], + [], + [], + ]) -ENTITY_XML = """\ - -%user-entities; -]> -&entity; -""" + elem = ET.XML(SAMPLE_XML) + self.assertEqual(len(elem.getchildren()), 3) + self.assertEqual(len(elem[2].getchildren()), 1) + self.assertEqual(elem[:], elem.getchildren()) + child1 = elem[0] + child2 = elem[2] + del elem[1:2] + self.assertEqual(len(elem.getchildren()), 2) + self.assertEqual(child1, elem[0]) + self.assertEqual(child2, elem[1]) + elem[0:2] = [child2, child1] + self.assertEqual(child2, elem[0]) + self.assertEqual(child1, elem[1]) + self.assertNotEqual(child1, elem[0]) + elem.clear() + self.assertEqual(elem.getchildren(), []) -def entity(): - """ - Test entity handling. 
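[Editorial note: the ``getchildren()`` conversion above leans on Element's list protocol, which this short illustrative sketch (not part of the patch) demonstrates.]

```python
import xml.etree.ElementTree as ET

# An Element acts as a sequence of its children, so iteration, indexing
# and slice deletion replace the deprecated getchildren() call.
elem = ET.XML("<root><a/><b/><c/></root>")
assert [child.tag for child in elem] == ["a", "b", "c"]
del elem[1]                 # drop <b/> by index
assert [child.tag for child in elem] == ["a", "c"]
elem.clear()                # removes children, text and attributes; keeps the tag
assert list(elem) == []
```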
+ def test_writestring(self): + elem = ET.XML("<html><body>text</body></html>") + self.assertEqual(ET.tostring(elem), b'<html><body>text</body></html>') + elem = ET.fromstring("<html><body>text</body></html>") + self.assertEqual(ET.tostring(elem), b'<html><body>text</body></html>') - 1) good entities + def test_encoding(self): + def check(encoding): + ET.XML("<?xml version='1.0' encoding='%s'?><xml />" % encoding) + check("ascii") + check("us-ascii") + check("iso-8859-1") + check("iso-8859-15") + check("cp437") + check("mac-roman") - >>> e = ET.XML("<document title='&#x8230;'>test</document>") - >>> serialize(e, encoding="us-ascii") - b'<document title="&#33328;">test</document>' - >>> serialize(e) - '<document title="\u8230">test</document>' + def test_methods(self): + # Test serialization methods. - 2) bad entities + e = ET.XML("<html><link/><script>1 &lt; 2</script></html>") + e.tail = "\n" + self.assertEqual(serialize(e), + '<html><link /><script>1 &lt; 2</script></html>\n') + self.assertEqual(serialize(e, method=None), + '<html><link /><script>1 &lt; 2</script></html>\n') + self.assertEqual(serialize(e, method="xml"), + '<html><link /><script>1 &lt; 2</script></html>\n') + self.assertEqual(serialize(e, method="html"), + '<html><link><script>1 < 2</script></html>\n') + self.assertEqual(serialize(e, method="text"), '1 < 2\n') - >>> normalize_exception(ET.XML, "<document>&entity;</document>") - Traceback (most recent call last): - ParseError: undefined entity: line 1, column 10 + def test_entity(self): + # Test entity handling. - >>> normalize_exception(ET.XML, ENTITY_XML) - Traceback (most recent call last): - ParseError: undefined entity &entity;: line 5, column 10 + # 1) good entities - 3) custom entity + e = ET.XML("<document title='&#x8230;'>test</document>") + self.assertEqual(serialize(e, encoding="us-ascii"), + b'<document title="&#33328;">test</document>') + self.serialize_check(e, '<document title="\u8230">test</document>') - >>> parser = ET.XMLParser() - >>> parser.entity["entity"] = "text" - >>> parser.feed(ENTITY_XML) - >>> root = parser.close() - >>> serialize(root) - '<document>text</document>' - """ + # 2) bad entities -def namespace(): - """ - Test namespace issues.
+ with self.assertRaises(ET.ParseError) as cm: + ET.XML("&entity;") + self.assertEqual(str(cm.exception), + 'undefined entity: line 1, column 10') - 1) xml namespace + with self.assertRaises(ET.ParseError) as cm: + ET.XML(ENTITY_XML) + self.assertEqual(str(cm.exception), + 'undefined entity &entity;: line 5, column 10') - >>> elem = ET.XML("") - >>> serialize(elem) # 1.1 - '' + # 3) custom entity - 2) other "well-known" namespaces + parser = ET.XMLParser() + parser.entity["entity"] = "text" + parser.feed(ENTITY_XML) + root = parser.close() + self.serialize_check(root, 'text') - >>> elem = ET.XML("") - >>> serialize(elem) # 2.1 - '' + def test_namespace(self): + # Test namespace issues. - >>> elem = ET.XML("") - >>> serialize(elem) # 2.2 - '' + # 1) xml namespace - >>> elem = ET.XML("") - >>> serialize(elem) # 2.3 - '' + elem = ET.XML("") + self.serialize_check(elem, '') # 1.1 - 3) unknown namespaces - >>> elem = ET.XML(SAMPLE_XML_NS) - >>> print(serialize(elem)) - - text - - - subtext - - - """ + # 2) other "well-known" namespaces -def qname(): - """ - Test QName handling. 
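[Editorial note: ``QName`` normalizes both spellings to Clark notation and compares equal to the equivalent string, which is what the converted qname test asserts. A short illustrative sketch:]

```python
import xml.etree.ElementTree as ET

q = ET.QName("uri", "tag")          # same universal name as ET.QName("{uri}tag")
assert str(q) == "{uri}tag"
assert q == "{uri}tag" and q != "ns:tag"

# Using a QName as a tag makes the serializer emit a namespace declaration.
elem = ET.Element(q)
print(ET.tostring(elem, encoding="unicode"))
```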
+ elem = ET.XML("") + self.serialize_check(elem, + '') # 2.1 - 1) decorated tags + elem = ET.XML("") + self.serialize_check(elem, + '') # 2.2 - >>> elem = ET.Element("{uri}tag") - >>> serialize(elem) # 1.1 - '' - >>> elem = ET.Element(ET.QName("{uri}tag")) - >>> serialize(elem) # 1.2 - '' - >>> elem = ET.Element(ET.QName("uri", "tag")) - >>> serialize(elem) # 1.3 - '' - >>> elem = ET.Element(ET.QName("uri", "tag")) - >>> subelem = ET.SubElement(elem, ET.QName("uri", "tag1")) - >>> subelem = ET.SubElement(elem, ET.QName("uri", "tag2")) - >>> serialize(elem) # 1.4 - '' + elem = ET.XML("") + self.serialize_check(elem, + '') # 2.3 - 2) decorated attributes + # 3) unknown namespaces + elem = ET.XML(SAMPLE_XML_NS) + self.serialize_check(elem, + '\n' + ' text\n' + ' \n' + ' \n' + ' subtext\n' + ' \n' + '') - >>> elem.clear() - >>> elem.attrib["{uri}key"] = "value" - >>> serialize(elem) # 2.1 - '' + def test_qname(self): + # Test QName handling. - >>> elem.clear() - >>> elem.attrib[ET.QName("{uri}key")] = "value" - >>> serialize(elem) # 2.2 - '' + # 1) decorated tags - 3) decorated values are not converted by default, but the - QName wrapper can be used for values + elem = ET.Element("{uri}tag") + self.serialize_check(elem, '') # 1.1 + elem = ET.Element(ET.QName("{uri}tag")) + self.serialize_check(elem, '') # 1.2 + elem = ET.Element(ET.QName("uri", "tag")) + self.serialize_check(elem, '') # 1.3 + elem = ET.Element(ET.QName("uri", "tag")) + subelem = ET.SubElement(elem, ET.QName("uri", "tag1")) + subelem = ET.SubElement(elem, ET.QName("uri", "tag2")) + self.serialize_check(elem, + '') # 1.4 - >>> elem.clear() - >>> elem.attrib["{uri}key"] = "{uri}value" - >>> serialize(elem) # 3.1 - '' + # 2) decorated attributes - >>> elem.clear() - >>> elem.attrib["{uri}key"] = ET.QName("{uri}value") - >>> serialize(elem) # 3.2 - '' + elem.clear() + elem.attrib["{uri}key"] = "value" + self.serialize_check(elem, + '') # 2.1 - >>> elem.clear() - >>> subelem = ET.Element("tag") - >>> 
subelem.attrib["{uri1}key"] = ET.QName("{uri2}value") - >>> elem.append(subelem) - >>> elem.append(subelem) - >>> serialize(elem) # 3.3 - '' + elem.clear() + elem.attrib[ET.QName("{uri}key")] = "value" + self.serialize_check(elem, + '') # 2.2 - 4) Direct QName tests + # 3) decorated values are not converted by default, but the + # QName wrapper can be used for values - >>> str(ET.QName('ns', 'tag')) - '{ns}tag' - >>> str(ET.QName('{ns}tag')) - '{ns}tag' - >>> q1 = ET.QName('ns', 'tag') - >>> q2 = ET.QName('ns', 'tag') - >>> q1 == q2 - True - >>> q2 = ET.QName('ns', 'other-tag') - >>> q1 == q2 - False - >>> q1 == 'ns:tag' - False - >>> q1 == '{ns}tag' - True - """ + elem.clear() + elem.attrib["{uri}key"] = "{uri}value" + self.serialize_check(elem, + '') # 3.1 -def doctype_public(): - """ - Test PUBLIC doctype. + elem.clear() + elem.attrib["{uri}key"] = ET.QName("{uri}value") + self.serialize_check(elem, + '') # 3.2 - >>> elem = ET.XML('' - ... 'text') + elem.clear() + subelem = ET.Element("tag") + subelem.attrib["{uri1}key"] = ET.QName("{uri2}value") + elem.append(subelem) + elem.append(subelem) + self.serialize_check(elem, + '' + '' + '' + '') # 3.3 - """ + # 4) Direct QName tests -def xpath_tokenizer(p): - """ - Test the XPath tokenizer. 
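[Editorial note: the tokenizer under test feeds ElementPath's limited XPath dialect, which is reachable publicly through ``find()``, ``findall()`` and ``iterfind()``. A small sketch using an invented document:]

```python
import xml.etree.ElementTree as ET

doc = ET.XML(
    "<doc>"
    "<chapter><para>one</para></chapter>"
    "<chapter><para>two</para><para>three</para></chapter>"
    "</doc>"
)
print([p.text for p in doc.findall(".//para")])          # descendant axis
print([p.text for p in doc.findall("chapter[2]/para")])  # positional predicate
```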
+ self.assertEqual(str(ET.QName('ns', 'tag')), '{ns}tag') + self.assertEqual(str(ET.QName('{ns}tag')), '{ns}tag') + q1 = ET.QName('ns', 'tag') + q2 = ET.QName('ns', 'tag') + self.assertEqual(q1, q2) + q2 = ET.QName('ns', 'other-tag') + self.assertNotEqual(q1, q2) + self.assertNotEqual(q1, 'ns:tag') + self.assertEqual(q1, '{ns}tag') - >>> # tests from the xml specification - >>> xpath_tokenizer("*") - ['*'] - >>> xpath_tokenizer("text()") - ['text', '()'] - >>> xpath_tokenizer("@name") - ['@', 'name'] - >>> xpath_tokenizer("@*") - ['@', '*'] - >>> xpath_tokenizer("para[1]") - ['para', '[', '1', ']'] - >>> xpath_tokenizer("para[last()]") - ['para', '[', 'last', '()', ']'] - >>> xpath_tokenizer("*/para") - ['*', '/', 'para'] - >>> xpath_tokenizer("/doc/chapter[5]/section[2]") - ['/', 'doc', '/', 'chapter', '[', '5', ']', '/', 'section', '[', '2', ']'] - >>> xpath_tokenizer("chapter//para") - ['chapter', '//', 'para'] - >>> xpath_tokenizer("//para") - ['//', 'para'] - >>> xpath_tokenizer("//olist/item") - ['//', 'olist', '/', 'item'] - >>> xpath_tokenizer(".") - ['.'] - >>> xpath_tokenizer(".//para") - ['.', '//', 'para'] - >>> xpath_tokenizer("..") - ['..'] - >>> xpath_tokenizer("../@lang") - ['..', '/', '@', 'lang'] - >>> xpath_tokenizer("chapter[title]") - ['chapter', '[', 'title', ']'] - >>> xpath_tokenizer("employee[@secretary and @assistant]") - ['employee', '[', '@', 'secretary', '', 'and', '', '@', 'assistant', ']'] + def test_doctype_public(self): + # Test PUBLIC doctype. 
- >>> # additional tests - >>> xpath_tokenizer("{http://spam}egg") - ['{http://spam}egg'] - >>> xpath_tokenizer("./spam.egg") - ['.', '/', 'spam.egg'] - >>> xpath_tokenizer(".//{http://spam}egg") - ['.', '//', '{http://spam}egg'] - """ - from xml.etree import ElementPath - out = [] - for op, tag in ElementPath.xpath_tokenizer(p): - out.append(op or tag) - return out + elem = ET.XML('' + 'text') -def processinginstruction(): - """ - Test ProcessingInstruction directly + def test_xpath_tokenizer(self): + # Test the XPath tokenizer. + from xml.etree import ElementPath + def check(p, expected): + self.assertEqual([op or tag + for op, tag in ElementPath.xpath_tokenizer(p)], + expected) - >>> ET.tostring(ET.ProcessingInstruction('test', 'instruction')) - b'' - >>> ET.tostring(ET.PI('test', 'instruction')) - b'' + # tests from the xml specification + check("*", ['*']) + check("text()", ['text', '()']) + check("@name", ['@', 'name']) + check("@*", ['@', '*']) + check("para[1]", ['para', '[', '1', ']']) + check("para[last()]", ['para', '[', 'last', '()', ']']) + check("*/para", ['*', '/', 'para']) + check("/doc/chapter[5]/section[2]", + ['/', 'doc', '/', 'chapter', '[', '5', ']', + '/', 'section', '[', '2', ']']) + check("chapter//para", ['chapter', '//', 'para']) + check("//para", ['//', 'para']) + check("//olist/item", ['//', 'olist', '/', 'item']) + check(".", ['.']) + check(".//para", ['.', '//', 'para']) + check("..", ['..']) + check("../@lang", ['..', '/', '@', 'lang']) + check("chapter[title]", ['chapter', '[', 'title', ']']) + check("employee[@secretary and @assistant]", ['employee', + '[', '@', 'secretary', '', 'and', '', '@', 'assistant', ']']) - Issue #2746 + # additional tests + check("{http://spam}egg", ['{http://spam}egg']) + check("./spam.egg", ['.', '/', 'spam.egg']) + check(".//{http://spam}egg", ['.', '//', '{http://spam}egg']) - >>> ET.tostring(ET.PI('test', '')) - b'?>' - >>> ET.tostring(ET.PI('test', '\xe3'), 'latin-1') - b"\\n\\xe3?>" - """ + def 
test_processinginstruction(self): + # Test ProcessingInstruction directly + + self.assertEqual(ET.tostring(ET.ProcessingInstruction('test', 'instruction')), + b'') + self.assertEqual(ET.tostring(ET.PI('test', 'instruction')), + b'') + + # Issue #2746 + + self.assertEqual(ET.tostring(ET.PI('test', '')), + b'?>') + self.assertEqual(ET.tostring(ET.PI('test', '\xe3'), 'latin-1'), + b"\n" + b"\xe3?>") + + def test_html_empty_elems_serialization(self): + # issue 15970 + # from http://www.w3.org/TR/html401/index/elements.html + for element in ['AREA', 'BASE', 'BASEFONT', 'BR', 'COL', 'FRAME', 'HR', + 'IMG', 'INPUT', 'ISINDEX', 'LINK', 'META', 'PARAM']: + for elem in [element, element.lower()]: + expected = '<%s>' % elem + serialized = serialize(ET.XML('<%s />' % elem), method='html') + self.assertEqual(serialized, expected) + serialized = serialize(ET.XML('<%s>' % (elem,elem)), + method='html') + self.assertEqual(serialized, expected) + # # xinclude tests (samples from appendix C of the xinclude specification) @@ -1120,79 +986,6 @@ """.format(html.escape(SIMPLE_XMLFILE, True)) - -def xinclude_loader(href, parse="xml", encoding=None): - try: - data = XINCLUDE[href] - except KeyError: - raise OSError("resource not found") - if parse == "xml": - data = ET.XML(data) - return data - -def xinclude(): - r""" - Basic inclusion example (XInclude C.1) - - >>> from xml.etree import ElementInclude - - >>> document = xinclude_loader("C1.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C1 - -

120 Mz is adequate for an average home user.

- -

The opinions represented herein represent those of the individual - and should not be interpreted as official policy endorsed by this - organization.

-
-
- - Textual inclusion example (XInclude C.2) - - >>> document = xinclude_loader("C2.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C2 - -

This document has been accessed - 324387 times.

-
- - Textual inclusion after sibling element (based on modified XInclude C.2) - - >>> document = xinclude_loader("C2b.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C2b - -

This document has been accessed - 324387 times.

-
- - Textual inclusion of XML example (XInclude C.3) - - >>> document = xinclude_loader("C3.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C3 - -

The following is the source of the "data.xml" resource:

- <?xml version='1.0'?> - <data> - <item><![CDATA[Brooks & Shields]]></item> - </data> - -
- - Fallback example (XInclude C.5) - Note! Fallback support is not yet implemented - - >>> document = xinclude_loader("C5.xml") - >>> ElementInclude.include(document, xinclude_loader) - Traceback (most recent call last): - OSError: resource not found - >>> # print(serialize(document)) # C5 - """ - - # # badly formatted xi:include tags @@ -1213,410 +1006,412 @@ """ -def xinclude_failures(): - r""" - Test failure to locate included XML file. +class XIncludeTest(unittest.TestCase): - >>> from xml.etree import ElementInclude + def xinclude_loader(self, href, parse="xml", encoding=None): + try: + data = XINCLUDE[href] + except KeyError: + raise OSError("resource not found") + if parse == "xml": + data = ET.XML(data) + return data - >>> def none_loader(href, parser, encoding=None): - ... return None + def none_loader(self, href, parser, encoding=None): + return None - >>> document = ET.XML(XINCLUDE["C1.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: cannot load 'disclaimer.xml' as 'xml' + def _my_loader(self, href, parse): + # Used to avoid a test-dependency problem where the default loader + # of ElementInclude uses the pyET parser for cET tests. + if parse == 'xml': + with open(href, 'rb') as f: + return ET.parse(f).getroot() + else: + return None - Test failure to locate included text file. + def test_xinclude_default(self): + from xml.etree import ElementInclude + doc = self.xinclude_loader('default.xml') + ElementInclude.include(doc, self._my_loader) + self.assertEqual(serialize(doc), + '\n' + '

Example.

\n' + ' \n' + ' text\n' + ' texttail\n' + ' \n' + '\n' + '
') - >>> document = ET.XML(XINCLUDE["C2.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: cannot load 'count.txt' as 'text' + def test_xinclude(self): + from xml.etree import ElementInclude - Test bad parse type. + # Basic inclusion example (XInclude C.1) + document = self.xinclude_loader("C1.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

120 Mz is adequate for an average home user.

\n' + ' \n' + '

The opinions represented herein represent those of the individual\n' + ' and should not be interpreted as official policy endorsed by this\n' + ' organization.

\n' + '
\n' + '
') # C1 - >>> document = ET.XML(XINCLUDE_BAD["B1.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: unknown parse type in xi:include tag ('BAD_TYPE') + # Textual inclusion example (XInclude C.2) + document = self.xinclude_loader("C2.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

This document has been accessed\n' + ' 324387 times.

\n' + '
') # C2 - Test xi:fallback outside xi:include. + # Textual inclusion after sibling element (based on modified XInclude C.2) + document = self.xinclude_loader("C2b.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

This document has been accessed\n' + ' 324387 times.

\n' + '
') # C2b - >>> document = ET.XML(XINCLUDE_BAD["B2.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: xi:fallback tag must be child of xi:include ('{http://www.w3.org/2001/XInclude}fallback') - """ + # Textual inclusion of XML example (XInclude C.3) + document = self.xinclude_loader("C3.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

The following is the source of the "data.xml" resource:

\n' + " <?xml version='1.0'?>\n" + '<data>\n' + ' <item><![CDATA[Brooks & Shields]]></item>\n' + '</data>\n' + '\n' + '
') # C3 + + # Fallback example (XInclude C.5) + # Note! Fallback support is not yet implemented + document = self.xinclude_loader("C5.xml") + with self.assertRaises(OSError) as cm: + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(str(cm.exception), 'resource not found') + self.assertEqual(serialize(document), + '
\n' + ' \n' + ' \n' + ' \n' + ' Report error\n' + ' \n' + ' \n' + ' \n' + '
') # C5 + + def test_xinclude_failures(self): + from xml.etree import ElementInclude + + # Test failure to locate included XML file. + document = ET.XML(XINCLUDE["C1.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "cannot load 'disclaimer.xml' as 'xml'") + + # Test failure to locate included text file. + document = ET.XML(XINCLUDE["C2.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "cannot load 'count.txt' as 'text'") + + # Test bad parse type. + document = ET.XML(XINCLUDE_BAD["B1.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "unknown parse type in xi:include tag ('BAD_TYPE')") + + # Test xi:fallback outside xi:include. 
+ document = ET.XML(XINCLUDE_BAD["B2.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "xi:fallback tag must be child of xi:include " + "('{http://www.w3.org/2001/XInclude}fallback')") # -------------------------------------------------------------------- # reported bugs -def bug_xmltoolkit21(): - """ +class BugsTest(unittest.TestCase): - marshaller gives obscure errors for non-string values + def test_bug_xmltoolkit21(self): + # marshaller gives obscure errors for non-string values - >>> elem = ET.Element(123) - >>> serialize(elem) # tag - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.text = 123 - >>> serialize(elem) # text - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.tail = 123 - >>> serialize(elem) # tail - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.set(123, "123") - >>> serialize(elem) # attribute key - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.set("123", 123) - >>> serialize(elem) # attribute value - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) + def check(elem): + with self.assertRaises(TypeError) as cm: + serialize(elem) + self.assertEqual(str(cm.exception), + 'cannot serialize 123 (type int)') - """ + elem = ET.Element(123) + check(elem) # tag -def bug_xmltoolkit25(): - """ + elem = ET.Element("elem") + elem.text = 123 + check(elem) # text - typo in ElementTree.findtext + elem = ET.Element("elem") + elem.tail = 123 + check(elem) # tail - >>> elem = ET.XML(SAMPLE_XML) - >>> tree = ET.ElementTree(elem) - >>> tree.findtext("tag") - 'text' - >>> tree.findtext("section/tag") - 'subtext' + elem = 
ET.Element("elem") + elem.set(123, "123") + check(elem) # attribute key - """ + elem = ET.Element("elem") + elem.set("123", 123) + check(elem) # attribute value -def bug_xmltoolkit28(): - """ + def test_bug_xmltoolkit25(self): + # typo in ElementTree.findtext - .//tag causes exceptions + elem = ET.XML(SAMPLE_XML) + tree = ET.ElementTree(elem) + self.assertEqual(tree.findtext("tag"), 'text') + self.assertEqual(tree.findtext("section/tag"), 'subtext') - >>> tree = ET.XML("
") - >>> summarize_list(tree.findall(".//thead")) - [] - >>> summarize_list(tree.findall(".//tbody")) - ['tbody'] + def test_bug_xmltoolkit28(self): + # .//tag causes exceptions - """ + tree = ET.XML("
") + self.assertEqual(summarize_list(tree.findall(".//thead")), []) + self.assertEqual(summarize_list(tree.findall(".//tbody")), ['tbody']) -def bug_xmltoolkitX1(): - """ + def test_bug_xmltoolkitX1(self): + # dump() doesn't flush the output buffer - dump() doesn't flush the output buffer + tree = ET.XML("
") + with support.captured_stdout() as stdout: + ET.dump(tree) + self.assertEqual(stdout.getvalue(), '
\n') - >>> tree = ET.XML("
") - >>> ET.dump(tree); print("tail") -
- tail + def test_bug_xmltoolkit39(self): + # non-ascii element and attribute names doesn't work - """ + tree = ET.XML(b"") + self.assertEqual(ET.tostring(tree, "utf-8"), b'') -def bug_xmltoolkit39(): - """ + tree = ET.XML(b"" + b"") + self.assertEqual(tree.attrib, {'\xe4ttr': 'v\xe4lue'}) + self.assertEqual(ET.tostring(tree, "utf-8"), + b'') - non-ascii element and attribute names doesn't work + tree = ET.XML(b"" + b'text') + self.assertEqual(ET.tostring(tree, "utf-8"), + b'text') - >>> tree = ET.XML(b"") - >>> ET.tostring(tree, "utf-8") - b'' + tree = ET.Element("t\u00e4g") + self.assertEqual(ET.tostring(tree, "utf-8"), b'') - >>> tree = ET.XML(b"") - >>> tree.attrib - {'\\xe4ttr': 'v\\xe4lue'} - >>> ET.tostring(tree, "utf-8") - b'' + tree = ET.Element("tag") + tree.set("\u00e4ttr", "v\u00e4lue") + self.assertEqual(ET.tostring(tree, "utf-8"), + b'') - >>> tree = ET.XML(b"text") - >>> ET.tostring(tree, "utf-8") - b'text' + def test_bug_xmltoolkit54(self): + # problems handling internally defined entities - >>> tree = ET.Element("t\u00e4g") - >>> ET.tostring(tree, "utf-8") - b'' + e = ET.XML("]>" + '&ldots;') + self.assertEqual(serialize(e, encoding="us-ascii"), + b'') + self.assertEqual(serialize(e), '\u8230') - >>> tree = ET.Element("tag") - >>> tree.set("\u00e4ttr", "v\u00e4lue") - >>> ET.tostring(tree, "utf-8") - b'' + def test_bug_xmltoolkit55(self): + # make sure we're reporting the first error, not the last - """ + with self.assertRaises(ET.ParseError) as cm: + ET.XML(b"" + b'&ldots;&ndots;&rdots;') + self.assertEqual(str(cm.exception), + 'undefined entity &ldots;: line 1, column 36') -def bug_xmltoolkit54(): - """ + def test_bug_xmltoolkit60(self): + # Handle crash in stream source. 
- problems handling internally defined entities + class ExceptionFile: + def read(self, x): + raise OSError - >>> e = ET.XML("]>&ldots;") - >>> serialize(e, encoding="us-ascii") - b'' - >>> serialize(e) - '\u8230' + self.assertRaises(OSError, ET.parse, ExceptionFile()) - """ + def test_bug_xmltoolkit62(self): + # Don't crash when using custom entities. -def bug_xmltoolkit55(): - """ - - make sure we're reporting the first error, not the last - - >>> normalize_exception(ET.XML, b"&ldots;&ndots;&rdots;") - Traceback (most recent call last): - ParseError: undefined entity &ldots;: line 1, column 36 - - """ - -class ExceptionFile: - def read(self, x): - raise OSError - -def xmltoolkit60(): - """ - - Handle crash in stream source. - >>> tree = ET.parse(ExceptionFile()) - Traceback (most recent call last): - OSError - - """ - -XMLTOOLKIT62_DOC = """ + ENTITIES = {'rsquo': '\u2019', 'lsquo': '\u2018'} + parser = ET.XMLTreeBuilder() + parser.entity.update(ENTITIES) + parser.feed(""" A new cultivar of Begonia plant named ‘BCT9801BEG’. -""" +""") + t = parser.close() + self.assertEqual(t.find('.//paragraph').text, + 'A new cultivar of Begonia plant named \u2018BCT9801BEG\u2019.') + def test_bug_xmltoolkit63(self): + # Check reference leak. + def xmltoolkit63(): + tree = ET.TreeBuilder() + tree.start("tag", {}) + tree.data("text") + tree.end("tag") -def xmltoolkit62(): - """ + xmltoolkit63() + count = sys.getrefcount(None) + for i in range(1000): + xmltoolkit63() + self.assertEqual(sys.getrefcount(None), count) - Don't crash when using custom entities. + def test_bug_200708_newline(self): + # Preserve newlines in attributes. - >>> xmltoolkit62() - 'A new cultivar of Begonia plant named \u2018BCT9801BEG\u2019.' 
+ e = ET.Element('SomeTag', text="def _f():\n return 3\n") + self.assertEqual(ET.tostring(e), + b'') + self.assertEqual(ET.XML(ET.tostring(e)).get("text"), + 'def _f():\n return 3\n') + self.assertEqual(ET.tostring(ET.XML(ET.tostring(e))), + b'') - """ - ENTITIES = {'rsquo': '\u2019', 'lsquo': '\u2018'} - parser = ET.XMLTreeBuilder() - parser.entity.update(ENTITIES) - parser.feed(XMLTOOLKIT62_DOC) - t = parser.close() - return t.find('.//paragraph').text + def test_bug_200708_close(self): + # Test default builder. + parser = ET.XMLParser() # default + parser.feed("some text") + self.assertEqual(parser.close().tag, 'element') -def xmltoolkit63(): - """ + # Test custom builder. + class EchoTarget: + def close(self): + return ET.Element("element") # simulate root + parser = ET.XMLParser(EchoTarget()) + parser.feed("some text") + self.assertEqual(parser.close().tag, 'element') - Check reference leak. - >>> xmltoolkit63() - >>> count = sys.getrefcount(None) - >>> for i in range(1000): - ... xmltoolkit63() - >>> sys.getrefcount(None) - count - 0 + def test_bug_200709_default_namespace(self): + e = ET.Element("{default}elem") + s = ET.SubElement(e, "{default}elem") + self.assertEqual(serialize(e, default_namespace="default"), # 1 + '') - """ - tree = ET.TreeBuilder() - tree.start("tag", {}) - tree.data("text") - tree.end("tag") + e = ET.Element("{default}elem") + s = ET.SubElement(e, "{default}elem") + s = ET.SubElement(e, "{not-default}elem") + self.assertEqual(serialize(e, default_namespace="default"), # 2 + '' + '' + '' + '') -# -------------------------------------------------------------------- + e = ET.Element("{default}elem") + s = ET.SubElement(e, "{default}elem") + s = ET.SubElement(e, "elem") # unprefixed name + with self.assertRaises(ValueError) as cm: + serialize(e, default_namespace="default") # 3 + self.assertEqual(str(cm.exception), + 'cannot use non-qualified names with default_namespace option') + def test_bug_200709_register_namespace(self): + e = 
ET.Element("{http://namespace.invalid/does/not/exist/}title") + self.assertEqual(ET.tostring(e), + b'') + ET.register_namespace("foo", "http://namespace.invalid/does/not/exist/") + e = ET.Element("{http://namespace.invalid/does/not/exist/}title") + self.assertEqual(ET.tostring(e), + b'') -def bug_200708_newline(): - r""" + # And the Dublin Core namespace is in the default list: - Preserve newlines in attributes. + e = ET.Element("{http://purl.org/dc/elements/1.1/}title") + self.assertEqual(ET.tostring(e), + b'') - >>> e = ET.Element('SomeTag', text="def _f():\n return 3\n") - >>> ET.tostring(e) - b'' - >>> ET.XML(ET.tostring(e)).get("text") - 'def _f():\n return 3\n' - >>> ET.tostring(ET.XML(ET.tostring(e))) - b'' + def test_bug_200709_element_comment(self): + # Not sure if this can be fixed, really (since the serializer needs + # ET.Comment, not cET.comment). - """ + a = ET.Element('a') + a.append(ET.Comment('foo')) + self.assertEqual(a[0].tag, ET.Comment) -def bug_200708_close(): - """ + a = ET.Element('a') + a.append(ET.PI('foo')) + self.assertEqual(a[0].tag, ET.PI) - Test default builder. - >>> parser = ET.XMLParser() # default - >>> parser.feed("some text") - >>> summarize(parser.close()) - 'element' + def test_bug_200709_element_insert(self): + a = ET.Element('a') + b = ET.SubElement(a, 'b') + c = ET.SubElement(a, 'c') + d = ET.Element('d') + a.insert(0, d) + self.assertEqual(summarize_list(a), ['d', 'b', 'c']) + a.insert(-1, d) + self.assertEqual(summarize_list(a), ['d', 'b', 'd', 'c']) - Test custom builder. - >>> class EchoTarget: - ... def close(self): - ... 
return ET.Element("element") # simulate root - >>> parser = ET.XMLParser(EchoTarget()) - >>> parser.feed("some text") - >>> summarize(parser.close()) - 'element' + def test_bug_200709_iter_comment(self): + a = ET.Element('a') + b = ET.SubElement(a, 'b') + comment_b = ET.Comment("TEST-b") + b.append(comment_b) + self.assertEqual(summarize_list(a.iter(ET.Comment)), [ET.Comment]) - """ + # -------------------------------------------------------------------- + # reported on bugs.python.org -def bug_200709_default_namespace(): - """ + def test_bug_1534630(self): + bob = ET.TreeBuilder() + e = bob.data("data") + e = bob.start("tag", {}) + e = bob.end("tag") + e = bob.close() + self.assertEqual(serialize(e), '') - >>> e = ET.Element("{default}elem") - >>> s = ET.SubElement(e, "{default}elem") - >>> serialize(e, default_namespace="default") # 1 - '' + def test_issue6233(self): + e = ET.XML(b"" + b't\xc3\xa3g') + self.assertEqual(ET.tostring(e, 'ascii'), + b"\n" + b'tãg') + e = ET.XML(b"" + b't\xe3g') + self.assertEqual(ET.tostring(e, 'ascii'), + b"\n" + b'tãg') - >>> e = ET.Element("{default}elem") - >>> s = ET.SubElement(e, "{default}elem") - >>> s = ET.SubElement(e, "{not-default}elem") - >>> serialize(e, default_namespace="default") # 2 - '' + def test_issue3151(self): + e = ET.XML('') + self.assertEqual(e.tag, '{${stuff}}localname') + t = ET.ElementTree(e) + self.assertEqual(ET.tostring(e), b'') - >>> e = ET.Element("{default}elem") - >>> s = ET.SubElement(e, "{default}elem") - >>> s = ET.SubElement(e, "elem") # unprefixed name - >>> serialize(e, default_namespace="default") # 3 - Traceback (most recent call last): - ValueError: cannot use non-qualified names with default_namespace option + def test_issue6565(self): + elem = ET.XML("") + self.assertEqual(summarize_list(elem), ['tag']) + newelem = ET.XML(SAMPLE_XML) + elem[:] = newelem[:] + self.assertEqual(summarize_list(elem), ['tag', 'tag', 'section']) - """ + def test_issue10777(self): + # Registering a namespace 
twice caused a "dictionary changed size during + # iteration" bug. -def bug_200709_register_namespace(): - """ - - >>> ET.tostring(ET.Element("{http://namespace.invalid/does/not/exist/}title")) - b'' - >>> ET.register_namespace("foo", "http://namespace.invalid/does/not/exist/") - >>> ET.tostring(ET.Element("{http://namespace.invalid/does/not/exist/}title")) - b'' - - And the Dublin Core namespace is in the default list: - - >>> ET.tostring(ET.Element("{http://purl.org/dc/elements/1.1/}title")) - b'' - - """ - -def bug_200709_element_comment(): - """ - - Not sure if this can be fixed, really (since the serializer needs - ET.Comment, not cET.comment). - - >>> a = ET.Element('a') - >>> a.append(ET.Comment('foo')) - >>> a[0].tag == ET.Comment - True - - >>> a = ET.Element('a') - >>> a.append(ET.PI('foo')) - >>> a[0].tag == ET.PI - True - - """ - -def bug_200709_element_insert(): - """ - - >>> a = ET.Element('a') - >>> b = ET.SubElement(a, 'b') - >>> c = ET.SubElement(a, 'c') - >>> d = ET.Element('d') - >>> a.insert(0, d) - >>> summarize_list(a) - ['d', 'b', 'c'] - >>> a.insert(-1, d) - >>> summarize_list(a) - ['d', 'b', 'd', 'c'] - - """ - -def bug_200709_iter_comment(): - """ - - >>> a = ET.Element('a') - >>> b = ET.SubElement(a, 'b') - >>> comment_b = ET.Comment("TEST-b") - >>> b.append(comment_b) - >>> summarize_list(a.iter(ET.Comment)) - [''] - - """ - -# -------------------------------------------------------------------- -# reported on bugs.python.org - -def bug_1534630(): - """ - - >>> bob = ET.TreeBuilder() - >>> e = bob.data("data") - >>> e = bob.start("tag", {}) - >>> e = bob.end("tag") - >>> e = bob.close() - >>> serialize(e) - '' - - """ - -def check_issue6233(): - """ - - >>> e = ET.XML(b"t\\xc3\\xa3g") - >>> ET.tostring(e, 'ascii') - b"\\ntãg" - >>> e = ET.XML(b"t\\xe3g") - >>> ET.tostring(e, 'ascii') - b"\\ntãg" - - """ - -def check_issue3151(): - """ - - >>> e = ET.XML('') - >>> e.tag - '{${stuff}}localname' - >>> t = ET.ElementTree(e) - >>> 
ET.tostring(e) - b'' - - """ - -def check_issue6565(): - """ - - >>> elem = ET.XML("") - >>> summarize_list(elem) - ['tag'] - >>> newelem = ET.XML(SAMPLE_XML) - >>> elem[:] = newelem[:] - >>> summarize_list(elem) - ['tag', 'tag', 'section'] - - """ - -def check_issue10777(): - """ - Registering a namespace twice caused a "dictionary changed size during - iteration" bug. - - >>> ET.register_namespace('test10777', 'http://myuri/') - >>> ET.register_namespace('test10777', 'http://myuri/') - """ + ET.register_namespace('test10777', 'http://myuri/') + ET.register_namespace('test10777', 'http://myuri/') # -------------------------------------------------------------------- @@ -1698,7 +1493,7 @@ self.assertEqual(len(e2), 2) self.assertEqualElements(e, e2) -class ElementTreeTest(unittest.TestCase): +class ElementTreeTypeTest(unittest.TestCase): def test_istype(self): self.assertIsInstance(ET.ParseError, type) self.assertIsInstance(ET.QName, type) @@ -1738,19 +1533,6 @@ mye = MyElement('joe') self.assertEqual(mye.newmethod(), 'joe') - def test_html_empty_elems_serialization(self): - # issue 15970 - # from http://www.w3.org/TR/html401/index/elements.html - for element in ['AREA', 'BASE', 'BASEFONT', 'BR', 'COL', 'FRAME', 'HR', - 'IMG', 'INPUT', 'ISINDEX', 'LINK', 'META', 'PARAM']: - for elem in [element, element.lower()]: - expected = '<%s>' % elem - serialized = serialize(ET.XML('<%s />' % elem), method='html') - self.assertEqual(serialized, expected) - serialized = serialize(ET.XML('<%s>' % (elem,elem)), - method='html') - self.assertEqual(serialized, expected) - class ElementFindTest(unittest.TestCase): def test_find_simple(self): @@ -2059,31 +1841,6 @@ 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd')) -class XincludeTest(unittest.TestCase): - def _my_loader(self, href, parse): - # Used to avoid a test-dependency problem where the default loader - # of ElementInclude uses the pyET parser for cET tests. 
- if parse == 'xml': - with open(href, 'rb') as f: - return ET.parse(f).getroot() - else: - return None - - def test_xinclude_default(self): - from xml.etree import ElementInclude - doc = xinclude_loader('default.xml') - ElementInclude.include(doc, self._my_loader) - s = serialize(doc) - self.assertEqual(s.strip(), ''' -
<document>
  <p>Example.</p>
  <root>
   <element key='value'>text</element>
   <element>texttail</element>
   <empty-element />
</root>
</document>
''') - - class XMLParserTest(unittest.TestCase): sample1 = '22' sample2 = (' http://hg.python.org/cpython/rev/5eefc85b8be8 changeset: 82384:5eefc85b8be8 parent: 82382:2528e4aea338 parent: 82383:af570205b978 user: Serhiy Storchaka date: Mon Feb 25 17:21:42 2013 +0200 summary: Issue #15083: Convert ElementTree doctests to unittests. files: Lib/test/test_xml_etree.py | 2255 ++++++++++------------- 1 files changed, 1007 insertions(+), 1248 deletions(-) diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -87,21 +87,26 @@ """ -def sanity(): - """ - Import sanity. +ENTITY_XML = """\ + +%user-entities; +]> +&entity; +""" - >>> from xml.etree import ElementTree - >>> from xml.etree import ElementInclude - >>> from xml.etree import ElementPath - """ -def check_method(method): - if not hasattr(method, '__call__'): - print(method, "not callable") +class ModuleTest(unittest.TestCase): + + def test_sanity(self): + # Import sanity. 
+ + from xml.etree import ElementTree + from xml.etree import ElementInclude + from xml.etree import ElementPath + def serialize(elem, to_string=True, encoding='unicode', **options): - import io if encoding != 'unicode': file = io.BytesIO() else: @@ -114,68 +119,9 @@ file.seek(0) return file -def summarize(elem): - if elem.tag == ET.Comment: - return "" - return elem.tag +def summarize_list(seq): + return [elem.tag for elem in seq] -def summarize_list(seq): - return [summarize(elem) for elem in seq] - -def normalize_crlf(tree): - for elem in tree.iter(): - if elem.text: - elem.text = elem.text.replace("\r\n", "\n") - if elem.tail: - elem.tail = elem.tail.replace("\r\n", "\n") - -def normalize_exception(func, *args, **kwargs): - # Ignore the exception __module__ - try: - func(*args, **kwargs) - except Exception as err: - print("Traceback (most recent call last):") - print("{}: {}".format(err.__class__.__name__, err)) - -def check_string(string): - len(string) - for char in string: - if len(char) != 1: - print("expected one-character string, got %r" % char) - new_string = string + "" - new_string = string + " " - string[:0] - -def check_mapping(mapping): - len(mapping) - keys = mapping.keys() - items = mapping.items() - for key in keys: - item = mapping[key] - mapping["key"] = "value" - if mapping["key"] != "value": - print("expected value string, got %r" % mapping["key"]) - -def check_element(element): - if not ET.iselement(element): - print("not an element") - if not hasattr(element, "tag"): - print("no tag member") - if not hasattr(element, "attrib"): - print("no attrib member") - if not hasattr(element, "text"): - print("no text member") - if not hasattr(element, "tail"): - print("no tail member") - - check_string(element.tag) - check_mapping(element.attrib) - if element.text is not None: - check_string(element.text) - if element.tail is not None: - check_string(element.tail) - for elem in element: - check_element(elem) class ElementTestCase: @classmethod @@ 
-212,837 +158,757 @@ # -------------------------------------------------------------------- # element tree tests -def interface(): - """ - Test element tree interface. +class ElementTreeTest(unittest.TestCase): - >>> element = ET.Element("tag") - >>> check_element(element) - >>> tree = ET.ElementTree(element) - >>> check_element(tree.getroot()) + def serialize_check(self, elem, expected): + self.assertEqual(serialize(elem), expected) - >>> element = ET.Element("t\\xe4g", key="value") - >>> tree = ET.ElementTree(element) - >>> repr(element) # doctest: +ELLIPSIS - "" - >>> element = ET.Element("tag", key="value") + def test_interface(self): + # Test element tree interface. - Make sure all standard element methods exist. + def check_string(string): + len(string) + for char in string: + self.assertEqual(len(char), 1, + msg="expected one-character string, got %r" % char) + new_string = string + "" + new_string = string + " " + string[:0] - >>> check_method(element.append) - >>> check_method(element.extend) - >>> check_method(element.insert) - >>> check_method(element.remove) - >>> check_method(element.getchildren) - >>> check_method(element.find) - >>> check_method(element.iterfind) - >>> check_method(element.findall) - >>> check_method(element.findtext) - >>> check_method(element.clear) - >>> check_method(element.get) - >>> check_method(element.set) - >>> check_method(element.keys) - >>> check_method(element.items) - >>> check_method(element.iter) - >>> check_method(element.itertext) - >>> check_method(element.getiterator) + def check_mapping(mapping): + len(mapping) + keys = mapping.keys() + items = mapping.items() + for key in keys: + item = mapping[key] + mapping["key"] = "value" + self.assertEqual(mapping["key"], "value", + msg="expected value string, got %r" % mapping["key"]) - These methods return an iterable. See bug 6472. 
+ def check_element(element): + self.assertTrue(ET.iselement(element), msg="not an element") + self.assertTrue(hasattr(element, "tag"), msg="no tag member") + self.assertTrue(hasattr(element, "attrib"), msg="no attrib member") + self.assertTrue(hasattr(element, "text"), msg="no text member") + self.assertTrue(hasattr(element, "tail"), msg="no tail member") - >>> check_method(element.iterfind("tag").__next__) - >>> check_method(element.iterfind("*").__next__) - >>> check_method(tree.iterfind("tag").__next__) - >>> check_method(tree.iterfind("*").__next__) + check_string(element.tag) + check_mapping(element.attrib) + if element.text is not None: + check_string(element.text) + if element.tail is not None: + check_string(element.tail) + for elem in element: + check_element(elem) - These aliases are provided: + element = ET.Element("tag") + check_element(element) + tree = ET.ElementTree(element) + check_element(tree.getroot()) + element = ET.Element("t\xe4g", key="value") + tree = ET.ElementTree(element) + self.assertRegex(repr(element), r"^$") + element = ET.Element("tag", key="value") - >>> assert ET.XML == ET.fromstring - >>> assert ET.PI == ET.ProcessingInstruction - >>> assert ET.XMLParser == ET.XMLTreeBuilder - """ + # Make sure all standard element methods exist. -def simpleops(): - """ - Basic method sanity checks. 
+ def check_method(method): + self.assertTrue(hasattr(method, '__call__'), + msg="%s not callable" % method) - >>> elem = ET.XML("") - >>> serialize(elem) - '' - >>> e = ET.Element("tag2") - >>> elem.append(e) - >>> serialize(elem) - '' - >>> elem.remove(e) - >>> serialize(elem) - '' - >>> elem.insert(0, e) - >>> serialize(elem) - '' - >>> elem.remove(e) - >>> elem.extend([e]) - >>> serialize(elem) - '' - >>> elem.remove(e) + check_method(element.append) + check_method(element.extend) + check_method(element.insert) + check_method(element.remove) + check_method(element.getchildren) + check_method(element.find) + check_method(element.iterfind) + check_method(element.findall) + check_method(element.findtext) + check_method(element.clear) + check_method(element.get) + check_method(element.set) + check_method(element.keys) + check_method(element.items) + check_method(element.iter) + check_method(element.itertext) + check_method(element.getiterator) - >>> element = ET.Element("tag", key="value") - >>> serialize(element) # 1 - '' - >>> subelement = ET.Element("subtag") - >>> element.append(subelement) - >>> serialize(element) # 2 - '' - >>> element.insert(0, subelement) - >>> serialize(element) # 3 - '' - >>> element.remove(subelement) - >>> serialize(element) # 4 - '' - >>> element.remove(subelement) - >>> serialize(element) # 5 - '' - >>> element.remove(subelement) - Traceback (most recent call last): - ValueError: list.remove(x): x not in list - >>> serialize(element) # 6 - '' - >>> element[0:0] = [subelement, subelement, subelement] - >>> serialize(element[1]) - '' - >>> element[1:9] == [element[1], element[2]] - True - >>> element[:9:2] == [element[0], element[2]] - True - >>> del element[1:2] - >>> serialize(element) - '' - """ + # These methods return an iterable. See bug 6472. -def cdata(): - """ - Test CDATA handling (etc). 
+ def check_iter(it): + check_method(it.__next__) - >>> serialize(ET.XML("hello")) - 'hello' - >>> serialize(ET.XML("hello")) - 'hello' - >>> serialize(ET.XML("")) - 'hello' - """ + check_iter(element.iterfind("tag")) + check_iter(element.iterfind("*")) + check_iter(tree.iterfind("tag")) + check_iter(tree.iterfind("*")) -def file_init(): - """ - >>> import io + # These aliases are provided: - >>> stringfile = io.BytesIO(SAMPLE_XML.encode("utf-8")) - >>> tree = ET.ElementTree(file=stringfile) - >>> tree.find("tag").tag - 'tag' - >>> tree.find("section/tag").tag - 'tag' + self.assertEqual(ET.XML, ET.fromstring) + self.assertEqual(ET.PI, ET.ProcessingInstruction) + self.assertEqual(ET.XMLParser, ET.XMLTreeBuilder) - >>> tree = ET.ElementTree(file=SIMPLE_XMLFILE) - >>> tree.find("element").tag - 'element' - >>> tree.find("element/../empty-element").tag - 'empty-element' - """ + def test_simpleops(self): + # Basic method sanity checks. -def path_cache(): - """ - Check that the path cache behaves sanely. 
+ elem = ET.XML("") + self.serialize_check(elem, '') + e = ET.Element("tag2") + elem.append(e) + self.serialize_check(elem, '') + elem.remove(e) + self.serialize_check(elem, '') + elem.insert(0, e) + self.serialize_check(elem, '') + elem.remove(e) + elem.extend([e]) + self.serialize_check(elem, '') + elem.remove(e) - >>> from xml.etree import ElementPath + element = ET.Element("tag", key="value") + self.serialize_check(element, '') # 1 + subelement = ET.Element("subtag") + element.append(subelement) + self.serialize_check(element, '') # 2 + element.insert(0, subelement) + self.serialize_check(element, + '') # 3 + element.remove(subelement) + self.serialize_check(element, '') # 4 + element.remove(subelement) + self.serialize_check(element, '') # 5 + with self.assertRaises(ValueError) as cm: + element.remove(subelement) + self.assertEqual(str(cm.exception), 'list.remove(x): x not in list') + self.serialize_check(element, '') # 6 + element[0:0] = [subelement, subelement, subelement] + self.serialize_check(element[1], '') + self.assertEqual(element[1:9], [element[1], element[2]]) + self.assertEqual(element[:9:2], [element[0], element[2]]) + del element[1:2] + self.serialize_check(element, + '') - >>> elem = ET.XML(SAMPLE_XML) - >>> for i in range(10): ET.ElementTree(elem).find('./'+str(i)) - >>> cache_len_10 = len(ElementPath._cache) - >>> for i in range(10): ET.ElementTree(elem).find('./'+str(i)) - >>> len(ElementPath._cache) == cache_len_10 - True - >>> for i in range(20): ET.ElementTree(elem).find('./'+str(i)) - >>> len(ElementPath._cache) > cache_len_10 - True - >>> for i in range(600): ET.ElementTree(elem).find('./'+str(i)) - >>> len(ElementPath._cache) < 500 - True - """ + def test_cdata(self): + # Test CDATA handling (etc). -def copy(): - """ - Test copy handling (etc). 
+ self.serialize_check(ET.XML("hello"), + 'hello') + self.serialize_check(ET.XML("hello"), + 'hello') + self.serialize_check(ET.XML(""), + 'hello') - >>> import copy - >>> e1 = ET.XML("hello") - >>> e2 = copy.copy(e1) - >>> e3 = copy.deepcopy(e1) - >>> e1.find("foo").tag = "bar" - >>> serialize(e1) - 'hello' - >>> serialize(e2) - 'hello' - >>> serialize(e3) - 'hello' + def test_file_init(self): + stringfile = io.BytesIO(SAMPLE_XML.encode("utf-8")) + tree = ET.ElementTree(file=stringfile) + self.assertEqual(tree.find("tag").tag, 'tag') + self.assertEqual(tree.find("section/tag").tag, 'tag') - """ + tree = ET.ElementTree(file=SIMPLE_XMLFILE) + self.assertEqual(tree.find("element").tag, 'element') + self.assertEqual(tree.find("element/../empty-element").tag, + 'empty-element') -def attrib(): - """ - Test attribute handling. + def test_path_cache(self): + # Check that the path cache behaves sanely. - >>> elem = ET.Element("tag") - >>> elem.get("key") # 1.1 - >>> elem.get("key", "default") # 1.2 - 'default' - >>> elem.set("key", "value") - >>> elem.get("key") # 1.3 - 'value' + from xml.etree import ElementPath - >>> elem = ET.Element("tag", key="value") - >>> elem.get("key") # 2.1 - 'value' - >>> elem.attrib # 2.2 - {'key': 'value'} + elem = ET.XML(SAMPLE_XML) + for i in range(10): ET.ElementTree(elem).find('./'+str(i)) + cache_len_10 = len(ElementPath._cache) + for i in range(10): ET.ElementTree(elem).find('./'+str(i)) + self.assertEqual(len(ElementPath._cache), cache_len_10) + for i in range(20): ET.ElementTree(elem).find('./'+str(i)) + self.assertGreater(len(ElementPath._cache), cache_len_10) + for i in range(600): ET.ElementTree(elem).find('./'+str(i)) + self.assertLess(len(ElementPath._cache), 500) - >>> attrib = {"key": "value"} - >>> elem = ET.Element("tag", attrib) - >>> attrib.clear() # check for aliasing issues - >>> elem.get("key") # 3.1 - 'value' - >>> elem.attrib # 3.2 - {'key': 'value'} + def test_copy(self): + # Test copy handling (etc). 
- >>> attrib = {"key": "value"} - >>> elem = ET.Element("tag", **attrib) - >>> attrib.clear() # check for aliasing issues - >>> elem.get("key") # 4.1 - 'value' - >>> elem.attrib # 4.2 - {'key': 'value'} + import copy + e1 = ET.XML("hello") + e2 = copy.copy(e1) + e3 = copy.deepcopy(e1) + e1.find("foo").tag = "bar" + self.serialize_check(e1, 'hello') + self.serialize_check(e2, 'hello') + self.serialize_check(e3, 'hello') - >>> elem = ET.Element("tag", {"key": "other"}, key="value") - >>> elem.get("key") # 5.1 - 'value' - >>> elem.attrib # 5.2 - {'key': 'value'} + def test_attrib(self): + # Test attribute handling. - >>> elem = ET.Element('test') - >>> elem.text = "aa" - >>> elem.set('testa', 'testval') - >>> elem.set('testb', 'test2') - >>> ET.tostring(elem) - b'aa' - >>> sorted(elem.keys()) - ['testa', 'testb'] - >>> sorted(elem.items()) - [('testa', 'testval'), ('testb', 'test2')] - >>> elem.attrib['testb'] - 'test2' - >>> elem.attrib['testb'] = 'test1' - >>> elem.attrib['testc'] = 'test2' - >>> ET.tostring(elem) - b'aa' - """ + elem = ET.Element("tag") + elem.get("key") # 1.1 + self.assertEqual(elem.get("key", "default"), 'default') # 1.2 -def makeelement(): - """ - Test makeelement handling. + elem.set("key", "value") + self.assertEqual(elem.get("key"), 'value') # 1.3 - >>> elem = ET.Element("tag") - >>> attrib = {"key": "value"} - >>> subelem = elem.makeelement("subtag", attrib) - >>> if subelem.attrib is attrib: - ... 
print("attrib aliasing") - >>> elem.append(subelem) - >>> serialize(elem) - '' + elem = ET.Element("tag", key="value") + self.assertEqual(elem.get("key"), 'value') # 2.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 2.2 - >>> elem.clear() - >>> serialize(elem) - '' - >>> elem.append(subelem) - >>> serialize(elem) - '' - >>> elem.extend([subelem, subelem]) - >>> serialize(elem) - '' - >>> elem[:] = [subelem] - >>> serialize(elem) - '' - >>> elem[:] = tuple([subelem]) - >>> serialize(elem) - '' + attrib = {"key": "value"} + elem = ET.Element("tag", attrib) + attrib.clear() # check for aliasing issues + self.assertEqual(elem.get("key"), 'value') # 3.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 3.2 - """ + attrib = {"key": "value"} + elem = ET.Element("tag", **attrib) + attrib.clear() # check for aliasing issues + self.assertEqual(elem.get("key"), 'value') # 4.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 4.2 -def parsefile(): - """ - Test parsing from file. + elem = ET.Element("tag", {"key": "other"}, key="value") + self.assertEqual(elem.get("key"), 'value') # 5.1 + self.assertEqual(elem.attrib, {'key': 'value'}) # 5.2 - >>> tree = ET.parse(SIMPLE_XMLFILE) - >>> normalize_crlf(tree) - >>> tree.write(sys.stdout, encoding='unicode') - - text - texttail - - - >>> tree = ET.parse(SIMPLE_NS_XMLFILE) - >>> normalize_crlf(tree) - >>> tree.write(sys.stdout, encoding='unicode') - - text - texttail - - + elem = ET.Element('test') + elem.text = "aa" + elem.set('testa', 'testval') + elem.set('testb', 'test2') + self.assertEqual(ET.tostring(elem), + b'aa') + self.assertEqual(sorted(elem.keys()), ['testa', 'testb']) + self.assertEqual(sorted(elem.items()), + [('testa', 'testval'), ('testb', 'test2')]) + self.assertEqual(elem.attrib['testb'], 'test2') + elem.attrib['testb'] = 'test1' + elem.attrib['testc'] = 'test2' + self.assertEqual(ET.tostring(elem), + b'aa') - >>> with open(SIMPLE_XMLFILE) as f: - ... 
data = f.read() + def test_makeelement(self): + # Test makeelement handling. - >>> parser = ET.XMLParser() - >>> parser.version # doctest: +ELLIPSIS - 'Expat ...' - >>> parser.feed(data) - >>> print(serialize(parser.close())) - - text - texttail - - + elem = ET.Element("tag") + attrib = {"key": "value"} + subelem = elem.makeelement("subtag", attrib) + self.assertIsNot(subelem.attrib, attrib, msg="attrib aliasing") + elem.append(subelem) + self.serialize_check(elem, '') - >>> parser = ET.XMLTreeBuilder() # 1.2 compatibility - >>> parser.feed(data) - >>> print(serialize(parser.close())) - - text - texttail - - + elem.clear() + self.serialize_check(elem, '') + elem.append(subelem) + self.serialize_check(elem, '') + elem.extend([subelem, subelem]) + self.serialize_check(elem, + '') + elem[:] = [subelem] + self.serialize_check(elem, '') + elem[:] = tuple([subelem]) + self.serialize_check(elem, '') - >>> target = ET.TreeBuilder() - >>> parser = ET.XMLParser(target=target) - >>> parser.feed(data) - >>> print(serialize(parser.close())) - - text - texttail - - - """ + def test_parsefile(self): + # Test parsing from file. 
-def parseliteral(): - """ - >>> element = ET.XML("text") - >>> ET.ElementTree(element).write(sys.stdout, encoding='unicode') - text - >>> element = ET.fromstring("text") - >>> ET.ElementTree(element).write(sys.stdout, encoding='unicode') - text - >>> sequence = ["", "text"] - >>> element = ET.fromstringlist(sequence) - >>> ET.tostring(element) - b'text' - >>> b"".join(ET.tostringlist(element)) - b'text' - >>> ET.tostring(element, "ascii") - b"\\ntext" - >>> _, ids = ET.XMLID("text") - >>> len(ids) - 0 - >>> _, ids = ET.XMLID("text") - >>> len(ids) - 1 - >>> ids["body"].tag - 'body' - """ + tree = ET.parse(SIMPLE_XMLFILE) + stream = io.StringIO() + tree.write(stream, encoding='unicode') + self.assertEqual(stream.getvalue(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') + tree = ET.parse(SIMPLE_NS_XMLFILE) + stream = io.StringIO() + tree.write(stream, encoding='unicode') + self.assertEqual(stream.getvalue(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') -def iterparse(): - """ - Test iterparse interface. + with open(SIMPLE_XMLFILE) as f: + data = f.read() - >>> iterparse = ET.iterparse + parser = ET.XMLParser() + self.assertRegex(parser.version, r'^Expat ') + parser.feed(data) + self.serialize_check(parser.close(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') - >>> context = iterparse(SIMPLE_XMLFILE) - >>> action, elem = next(context) - >>> print(action, elem.tag) - end element - >>> for action, elem in context: - ... print(action, elem.tag) - end element - end empty-element - end root - >>> context.root.tag - 'root' + parser = ET.XMLTreeBuilder() # 1.2 compatibility + parser.feed(data) + self.serialize_check(parser.close(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') - >>> context = iterparse(SIMPLE_NS_XMLFILE) - >>> for action, elem in context: - ... 
print(action, elem.tag) - end {namespace}element - end {namespace}element - end {namespace}empty-element - end {namespace}root + target = ET.TreeBuilder() + parser = ET.XMLParser(target=target) + parser.feed(data) + self.serialize_check(parser.close(), + '\n' + ' text\n' + ' texttail\n' + ' \n' + '') - >>> events = () - >>> context = iterparse(SIMPLE_XMLFILE, events) - >>> for action, elem in context: - ... print(action, elem.tag) + def test_parseliteral(self): + element = ET.XML("text") + self.assertEqual(ET.tostring(element, encoding='unicode'), + 'text') + element = ET.fromstring("text") + self.assertEqual(ET.tostring(element, encoding='unicode'), + 'text') + sequence = ["", "text"] + element = ET.fromstringlist(sequence) + self.assertEqual(ET.tostring(element), + b'text') + self.assertEqual(b"".join(ET.tostringlist(element)), + b'text') + self.assertEqual(ET.tostring(element, "ascii"), + b"\n" + b"text") + _, ids = ET.XMLID("text") + self.assertEqual(len(ids), 0) + _, ids = ET.XMLID("text") + self.assertEqual(len(ids), 1) + self.assertEqual(ids["body"].tag, 'body') - >>> events = () - >>> context = iterparse(SIMPLE_XMLFILE, events=events) - >>> for action, elem in context: - ... print(action, elem.tag) + def test_iterparse(self): + # Test iterparse interface. - >>> events = ("start", "end") - >>> context = iterparse(SIMPLE_XMLFILE, events) - >>> for action, elem in context: - ... print(action, elem.tag) - start root - start element - end element - start element - end element - start empty-element - end empty-element - end root + iterparse = ET.iterparse - >>> events = ("start", "end", "start-ns", "end-ns") - >>> context = iterparse(SIMPLE_NS_XMLFILE, events) - >>> for action, elem in context: - ... if action in ("start", "end"): - ... print(action, elem.tag) - ... else: - ... 
print(action, elem) - start-ns ('', 'namespace') - start {namespace}root - start {namespace}element - end {namespace}element - start {namespace}element - end {namespace}element - start {namespace}empty-element - end {namespace}empty-element - end {namespace}root - end-ns None + context = iterparse(SIMPLE_XMLFILE) + action, elem = next(context) + self.assertEqual((action, elem.tag), ('end', 'element')) + self.assertEqual([(action, elem.tag) for action, elem in context], [ + ('end', 'element'), + ('end', 'empty-element'), + ('end', 'root'), + ]) + self.assertEqual(context.root.tag, 'root') - >>> events = ("start", "end", "bogus") - >>> with open(SIMPLE_XMLFILE, "rb") as f: - ... iterparse(f, events) - Traceback (most recent call last): - ValueError: unknown event 'bogus' + context = iterparse(SIMPLE_NS_XMLFILE) + self.assertEqual([(action, elem.tag) for action, elem in context], [ + ('end', '{namespace}element'), + ('end', '{namespace}element'), + ('end', '{namespace}empty-element'), + ('end', '{namespace}root'), + ]) - >>> import io + events = () + context = iterparse(SIMPLE_XMLFILE, events) + self.assertEqual([(action, elem.tag) for action, elem in context], []) - >>> source = io.BytesIO( - ... b"\\n" - ... b"text\\n") - >>> events = ("start-ns",) - >>> context = iterparse(source, events) - >>> for action, elem in context: - ... print(action, elem) - start-ns ('', 'http://\\xe9ffbot.org/ns') - start-ns ('cl\\xe9', 'http://effbot.org/ns') + events = () + context = iterparse(SIMPLE_XMLFILE, events=events) + self.assertEqual([(action, elem.tag) for action, elem in context], []) - >>> source = io.StringIO("junk") - >>> try: - ... for action, elem in iterparse(source): - ... print(action, elem.tag) - ... except ET.ParseError as v: - ... 
print(v) - end document - junk after document element: line 1, column 12 - """ + events = ("start", "end") + context = iterparse(SIMPLE_XMLFILE, events) + self.assertEqual([(action, elem.tag) for action, elem in context], [ + ('start', 'root'), + ('start', 'element'), + ('end', 'element'), + ('start', 'element'), + ('end', 'element'), + ('start', 'empty-element'), + ('end', 'empty-element'), + ('end', 'root'), + ]) -def writefile(): - """ - >>> elem = ET.Element("tag") - >>> elem.text = "text" - >>> serialize(elem) - 'text' - >>> ET.SubElement(elem, "subtag").text = "subtext" - >>> serialize(elem) - 'textsubtext' + events = ("start", "end", "start-ns", "end-ns") + context = iterparse(SIMPLE_NS_XMLFILE, events) + self.assertEqual([(action, elem.tag) if action in ("start", "end") else (action, elem) + for action, elem in context], [ + ('start-ns', ('', 'namespace')), + ('start', '{namespace}root'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}empty-element'), + ('end', '{namespace}empty-element'), + ('end', '{namespace}root'), + ('end-ns', None), + ]) - Test tag suppression - >>> elem.tag = None - >>> serialize(elem) - 'textsubtext' - >>> elem.insert(0, ET.Comment("comment")) - >>> serialize(elem) # assumes 1.3 - 'textsubtext' - >>> elem[0] = ET.PI("key", "value") - >>> serialize(elem) - 'textsubtext' - """ + events = ("start", "end", "bogus") + with self.assertRaises(ValueError) as cm: + with open(SIMPLE_XMLFILE, "rb") as f: + iterparse(f, events) + self.assertEqual(str(cm.exception), "unknown event 'bogus'") -def custom_builder(): - """ - Test parser w. custom builder. 
+ source = io.BytesIO( + b"\n" + b"text\n") + events = ("start-ns",) + context = iterparse(source, events) + self.assertEqual([(action, elem) for action, elem in context], [ + ('start-ns', ('', 'http://\xe9ffbot.org/ns')), + ('start-ns', ('cl\xe9', 'http://effbot.org/ns')), + ]) - >>> with open(SIMPLE_XMLFILE) as f: - ... data = f.read() - >>> class Builder: - ... def start(self, tag, attrib): - ... print("start", tag) - ... def end(self, tag): - ... print("end", tag) - ... def data(self, text): - ... pass - >>> builder = Builder() - >>> parser = ET.XMLParser(target=builder) - >>> parser.feed(data) - start root - start element - end element - start element - end element - start empty-element - end empty-element - end root + source = io.StringIO("junk") + it = iterparse(source) + action, elem = next(it) + self.assertEqual((action, elem.tag), ('end', 'document')) + with self.assertRaises(ET.ParseError) as cm: + next(it) + self.assertEqual(str(cm.exception), + 'junk after document element: line 1, column 12') - >>> with open(SIMPLE_NS_XMLFILE) as f: - ... data = f.read() - >>> class Builder: - ... def start(self, tag, attrib): - ... print("start", tag) - ... def end(self, tag): - ... print("end", tag) - ... def data(self, text): - ... pass - ... def pi(self, target, data): - ... print("pi", target, repr(data)) - ... def comment(self, data): - ... 
print("comment", repr(data)) - >>> builder = Builder() - >>> parser = ET.XMLParser(target=builder) - >>> parser.feed(data) - pi pi 'data' - comment ' comment ' - start {namespace}root - start {namespace}element - end {namespace}element - start {namespace}element - end {namespace}element - start {namespace}empty-element - end {namespace}empty-element - end {namespace}root + def test_writefile(self): + elem = ET.Element("tag") + elem.text = "text" + self.serialize_check(elem, 'text') + ET.SubElement(elem, "subtag").text = "subtext" + self.serialize_check(elem, 'textsubtext') - """ + # Test tag suppression + elem.tag = None + self.serialize_check(elem, 'textsubtext') + elem.insert(0, ET.Comment("comment")) + self.serialize_check(elem, + 'textsubtext') # assumes 1.3 -def getchildren(): - """ - Test Element.getchildren() + elem[0] = ET.PI("key", "value") + self.serialize_check(elem, 'textsubtext') - >>> with open(SIMPLE_XMLFILE, "rb") as f: - ... tree = ET.parse(f) - >>> for elem in tree.getroot().iter(): - ... summarize_list(elem.getchildren()) - ['element', 'element', 'empty-element'] - [] - [] - [] - >>> for elem in tree.getiterator(): - ... summarize_list(elem.getchildren()) - ['element', 'element', 'empty-element'] - [] - [] - [] + def test_custom_builder(self): + # Test parser w. custom builder. 
- >>> elem = ET.XML(SAMPLE_XML) - >>> len(elem.getchildren()) - 3 - >>> len(elem[2].getchildren()) - 1 - >>> elem[:] == elem.getchildren() - True - >>> child1 = elem[0] - >>> child2 = elem[2] - >>> del elem[1:2] - >>> len(elem.getchildren()) - 2 - >>> child1 == elem[0] - True - >>> child2 == elem[1] - True - >>> elem[0:2] = [child2, child1] - >>> child2 == elem[0] - True - >>> child1 == elem[1] - True - >>> child1 == elem[0] - False - >>> elem.clear() - >>> elem.getchildren() - [] - """ + with open(SIMPLE_XMLFILE) as f: + data = f.read() + class Builder(list): + def start(self, tag, attrib): + self.append(("start", tag)) + def end(self, tag): + self.append(("end", tag)) + def data(self, text): + pass + builder = Builder() + parser = ET.XMLParser(target=builder) + parser.feed(data) + self.assertEqual(builder, [ + ('start', 'root'), + ('start', 'element'), + ('end', 'element'), + ('start', 'element'), + ('end', 'element'), + ('start', 'empty-element'), + ('end', 'empty-element'), + ('end', 'root'), + ]) -def writestring(): - """ - >>> elem = ET.XML("text") - >>> ET.tostring(elem) - b'text' - >>> elem = ET.fromstring("text") - >>> ET.tostring(elem) - b'text' - """ + with open(SIMPLE_NS_XMLFILE) as f: + data = f.read() + class Builder(list): + def start(self, tag, attrib): + self.append(("start", tag)) + def end(self, tag): + self.append(("end", tag)) + def data(self, text): + pass + def pi(self, target, data): + self.append(("pi", target, data)) + def comment(self, data): + self.append(("comment", data)) + builder = Builder() + parser = ET.XMLParser(target=builder) + parser.feed(data) + self.assertEqual(builder, [ + ('pi', 'pi', 'data'), + ('comment', ' comment '), + ('start', '{namespace}root'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}element'), + ('end', '{namespace}element'), + ('start', '{namespace}empty-element'), + ('end', '{namespace}empty-element'), + ('end', '{namespace}root'), + ]) -def 
check_encoding(encoding): - """ - >>> check_encoding("ascii") - >>> check_encoding("us-ascii") - >>> check_encoding("iso-8859-1") - >>> check_encoding("iso-8859-15") - >>> check_encoding("cp437") - >>> check_encoding("mac-roman") - """ - ET.XML("" % encoding) -def methods(): - r""" - Test serialization methods. + def test_getchildren(self): + # Test Element.getchildren() - >>> e = ET.XML("") - >>> e.tail = "\n" - >>> serialize(e) - '\n' - >>> serialize(e, method=None) - '\n' - >>> serialize(e, method="xml") - '\n' - >>> serialize(e, method="html") - '\n' - >>> serialize(e, method="text") - '1 < 2\n' - """ + with open(SIMPLE_XMLFILE, "rb") as f: + tree = ET.parse(f) + self.assertEqual([summarize_list(elem.getchildren()) + for elem in tree.getroot().iter()], [ + ['element', 'element', 'empty-element'], + [], + [], + [], + ]) + self.assertEqual([summarize_list(elem.getchildren()) + for elem in tree.getiterator()], [ + ['element', 'element', 'empty-element'], + [], + [], + [], + ]) -ENTITY_XML = """\ - -%user-entities; -]> -&entity; -""" + elem = ET.XML(SAMPLE_XML) + self.assertEqual(len(elem.getchildren()), 3) + self.assertEqual(len(elem[2].getchildren()), 1) + self.assertEqual(elem[:], elem.getchildren()) + child1 = elem[0] + child2 = elem[2] + del elem[1:2] + self.assertEqual(len(elem.getchildren()), 2) + self.assertEqual(child1, elem[0]) + self.assertEqual(child2, elem[1]) + elem[0:2] = [child2, child1] + self.assertEqual(child2, elem[0]) + self.assertEqual(child1, elem[1]) + self.assertNotEqual(child1, elem[0]) + elem.clear() + self.assertEqual(elem.getchildren(), []) -def entity(): - """ - Test entity handling. 
+ def test_writestring(self): + elem = ET.XML("text") + self.assertEqual(ET.tostring(elem), b'text') + elem = ET.fromstring("text") + self.assertEqual(ET.tostring(elem), b'text') - 1) good entities + def test_encoding(encoding): + def check(encoding): + ET.XML("" % encoding) + check("ascii") + check("us-ascii") + check("iso-8859-1") + check("iso-8859-15") + check("cp437") + check("mac-roman") - >>> e = ET.XML("test") - >>> serialize(e, encoding="us-ascii") - b'test' - >>> serialize(e) - 'test' + def test_methods(self): + # Test serialization methods. - 2) bad entities + e = ET.XML("") + e.tail = "\n" + self.assertEqual(serialize(e), + '\n') + self.assertEqual(serialize(e, method=None), + '\n') + self.assertEqual(serialize(e, method="xml"), + '\n') + self.assertEqual(serialize(e, method="html"), + '\n') + self.assertEqual(serialize(e, method="text"), '1 < 2\n') - >>> normalize_exception(ET.XML, "&entity;") - Traceback (most recent call last): - ParseError: undefined entity: line 1, column 10 + def test_entity(self): + # Test entity handling. - >>> normalize_exception(ET.XML, ENTITY_XML) - Traceback (most recent call last): - ParseError: undefined entity &entity;: line 5, column 10 + # 1) good entities - 3) custom entity + e = ET.XML("test") + self.assertEqual(serialize(e, encoding="us-ascii"), + b'test') + self.serialize_check(e, 'test') - >>> parser = ET.XMLParser() - >>> parser.entity["entity"] = "text" - >>> parser.feed(ENTITY_XML) - >>> root = parser.close() - >>> serialize(root) - 'text' - """ + # 2) bad entities -def namespace(): - """ - Test namespace issues. 
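The three serialization styles compared in `test_methods` can be seen side by side on a small tree; the document literal here is illustrative, not the test's exact input.

```python
import xml.etree.ElementTree as ET

# The method= argument selects the serialization style.
e = ET.XML("<html><link/><script>1 &lt; 2</script></html>")
xml_out = ET.tostring(e, method="xml")    # default: escaped, self-closing tags
html_out = ET.tostring(e, method="html")  # void elements, raw script text
text_out = ET.tostring(e, method="text")  # concatenated text content only
```

The "html" method writes `<link>` without a closing tag and leaves `<script>` content unescaped, while "text" discards the markup entirely.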
+ with self.assertRaises(ET.ParseError) as cm: + ET.XML("&entity;") + self.assertEqual(str(cm.exception), + 'undefined entity: line 1, column 10') - 1) xml namespace + with self.assertRaises(ET.ParseError) as cm: + ET.XML(ENTITY_XML) + self.assertEqual(str(cm.exception), + 'undefined entity &entity;: line 5, column 10') - >>> elem = ET.XML("") - >>> serialize(elem) # 1.1 - '' + # 3) custom entity - 2) other "well-known" namespaces + parser = ET.XMLParser() + parser.entity["entity"] = "text" + parser.feed(ENTITY_XML) + root = parser.close() + self.serialize_check(root, 'text') - >>> elem = ET.XML("") - >>> serialize(elem) # 2.1 - '' + def test_namespace(self): + # Test namespace issues. - >>> elem = ET.XML("") - >>> serialize(elem) # 2.2 - '' + # 1) xml namespace - >>> elem = ET.XML("") - >>> serialize(elem) # 2.3 - '' + elem = ET.XML("") + self.serialize_check(elem, '') # 1.1 - 3) unknown namespaces - >>> elem = ET.XML(SAMPLE_XML_NS) - >>> print(serialize(elem)) - - text - - - subtext - - - """ + # 2) other "well-known" namespaces -def qname(): - """ - Test QName handling. 
+ elem = ET.XML("") + self.serialize_check(elem, + '') # 2.1 - 1) decorated tags + elem = ET.XML("") + self.serialize_check(elem, + '') # 2.2 - >>> elem = ET.Element("{uri}tag") - >>> serialize(elem) # 1.1 - '' - >>> elem = ET.Element(ET.QName("{uri}tag")) - >>> serialize(elem) # 1.2 - '' - >>> elem = ET.Element(ET.QName("uri", "tag")) - >>> serialize(elem) # 1.3 - '' - >>> elem = ET.Element(ET.QName("uri", "tag")) - >>> subelem = ET.SubElement(elem, ET.QName("uri", "tag1")) - >>> subelem = ET.SubElement(elem, ET.QName("uri", "tag2")) - >>> serialize(elem) # 1.4 - '' + elem = ET.XML("") + self.serialize_check(elem, + '') # 2.3 - 2) decorated attributes + # 3) unknown namespaces + elem = ET.XML(SAMPLE_XML_NS) + self.serialize_check(elem, + '\n' + ' text\n' + ' \n' + ' \n' + ' subtext\n' + ' \n' + '') - >>> elem.clear() - >>> elem.attrib["{uri}key"] = "value" - >>> serialize(elem) # 2.1 - '' + def test_qname(self): + # Test QName handling. - >>> elem.clear() - >>> elem.attrib[ET.QName("{uri}key")] = "value" - >>> serialize(elem) # 2.2 - '' + # 1) decorated tags - 3) decorated values are not converted by default, but the - QName wrapper can be used for values + elem = ET.Element("{uri}tag") + self.serialize_check(elem, '') # 1.1 + elem = ET.Element(ET.QName("{uri}tag")) + self.serialize_check(elem, '') # 1.2 + elem = ET.Element(ET.QName("uri", "tag")) + self.serialize_check(elem, '') # 1.3 + elem = ET.Element(ET.QName("uri", "tag")) + subelem = ET.SubElement(elem, ET.QName("uri", "tag1")) + subelem = ET.SubElement(elem, ET.QName("uri", "tag2")) + self.serialize_check(elem, + '') # 1.4 - >>> elem.clear() - >>> elem.attrib["{uri}key"] = "{uri}value" - >>> serialize(elem) # 3.1 - '' + # 2) decorated attributes - >>> elem.clear() - >>> elem.attrib["{uri}key"] = ET.QName("{uri}value") - >>> serialize(elem) # 3.2 - '' + elem.clear() + elem.attrib["{uri}key"] = "value" + self.serialize_check(elem, + '') # 2.1 - >>> elem.clear() - >>> subelem = ET.Element("tag") - >>> 
subelem.attrib["{uri1}key"] = ET.QName("{uri2}value") - >>> elem.append(subelem) - >>> elem.append(subelem) - >>> serialize(elem) # 3.3 - '' + elem.clear() + elem.attrib[ET.QName("{uri}key")] = "value" + self.serialize_check(elem, + '') # 2.2 - 4) Direct QName tests + # 3) decorated values are not converted by default, but the + # QName wrapper can be used for values - >>> str(ET.QName('ns', 'tag')) - '{ns}tag' - >>> str(ET.QName('{ns}tag')) - '{ns}tag' - >>> q1 = ET.QName('ns', 'tag') - >>> q2 = ET.QName('ns', 'tag') - >>> q1 == q2 - True - >>> q2 = ET.QName('ns', 'other-tag') - >>> q1 == q2 - False - >>> q1 == 'ns:tag' - False - >>> q1 == '{ns}tag' - True - """ + elem.clear() + elem.attrib["{uri}key"] = "{uri}value" + self.serialize_check(elem, + '') # 3.1 -def doctype_public(): - """ - Test PUBLIC doctype. + elem.clear() + elem.attrib["{uri}key"] = ET.QName("{uri}value") + self.serialize_check(elem, + '') # 3.2 - >>> elem = ET.XML('' - ... 'text') + elem.clear() + subelem = ET.Element("tag") + subelem.attrib["{uri1}key"] = ET.QName("{uri2}value") + elem.append(subelem) + elem.append(subelem) + self.serialize_check(elem, + '' + '' + '' + '') # 3.3 - """ + # 4) Direct QName tests -def xpath_tokenizer(p): - """ - Test the XPath tokenizer. 
+ self.assertEqual(str(ET.QName('ns', 'tag')), '{ns}tag') + self.assertEqual(str(ET.QName('{ns}tag')), '{ns}tag') + q1 = ET.QName('ns', 'tag') + q2 = ET.QName('ns', 'tag') + self.assertEqual(q1, q2) + q2 = ET.QName('ns', 'other-tag') + self.assertNotEqual(q1, q2) + self.assertNotEqual(q1, 'ns:tag') + self.assertEqual(q1, '{ns}tag') - >>> # tests from the xml specification - >>> xpath_tokenizer("*") - ['*'] - >>> xpath_tokenizer("text()") - ['text', '()'] - >>> xpath_tokenizer("@name") - ['@', 'name'] - >>> xpath_tokenizer("@*") - ['@', '*'] - >>> xpath_tokenizer("para[1]") - ['para', '[', '1', ']'] - >>> xpath_tokenizer("para[last()]") - ['para', '[', 'last', '()', ']'] - >>> xpath_tokenizer("*/para") - ['*', '/', 'para'] - >>> xpath_tokenizer("/doc/chapter[5]/section[2]") - ['/', 'doc', '/', 'chapter', '[', '5', ']', '/', 'section', '[', '2', ']'] - >>> xpath_tokenizer("chapter//para") - ['chapter', '//', 'para'] - >>> xpath_tokenizer("//para") - ['//', 'para'] - >>> xpath_tokenizer("//olist/item") - ['//', 'olist', '/', 'item'] - >>> xpath_tokenizer(".") - ['.'] - >>> xpath_tokenizer(".//para") - ['.', '//', 'para'] - >>> xpath_tokenizer("..") - ['..'] - >>> xpath_tokenizer("../@lang") - ['..', '/', '@', 'lang'] - >>> xpath_tokenizer("chapter[title]") - ['chapter', '[', 'title', ']'] - >>> xpath_tokenizer("employee[@secretary and @assistant]") - ['employee', '[', '@', 'secretary', '', 'and', '', '@', 'assistant', ']'] + def test_doctype_public(self): + # Test PUBLIC doctype. 
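The converted tokenizer test flattens `(op, tag)` pairs into a single token list; the same helper can be reproduced directly. Note that `xml.etree.ElementPath.xpath_tokenizer` is an internal, undocumented helper, so this is only a sketch of what the test relies on.

```python
from xml.etree import ElementPath

# Flatten the (op, tag) pairs exactly as the check() helper
# in the converted test does.
def tokens(path):
    return [op or tag for op, tag in ElementPath.xpath_tokenizer(path)]

assert tokens("@name") == ["@", "name"]
assert tokens("para[last()]") == ["para", "[", "last", "()", "]"]
assert tokens(".//{http://spam}egg") == [".", "//", "{http://spam}egg"]
```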
- >>> # additional tests - >>> xpath_tokenizer("{http://spam}egg") - ['{http://spam}egg'] - >>> xpath_tokenizer("./spam.egg") - ['.', '/', 'spam.egg'] - >>> xpath_tokenizer(".//{http://spam}egg") - ['.', '//', '{http://spam}egg'] - """ - from xml.etree import ElementPath - out = [] - for op, tag in ElementPath.xpath_tokenizer(p): - out.append(op or tag) - return out + elem = ET.XML('' + 'text') -def processinginstruction(): - """ - Test ProcessingInstruction directly + def test_xpath_tokenizer(self): + # Test the XPath tokenizer. + from xml.etree import ElementPath + def check(p, expected): + self.assertEqual([op or tag + for op, tag in ElementPath.xpath_tokenizer(p)], + expected) - >>> ET.tostring(ET.ProcessingInstruction('test', 'instruction')) - b'' - >>> ET.tostring(ET.PI('test', 'instruction')) - b'' + # tests from the xml specification + check("*", ['*']) + check("text()", ['text', '()']) + check("@name", ['@', 'name']) + check("@*", ['@', '*']) + check("para[1]", ['para', '[', '1', ']']) + check("para[last()]", ['para', '[', 'last', '()', ']']) + check("*/para", ['*', '/', 'para']) + check("/doc/chapter[5]/section[2]", + ['/', 'doc', '/', 'chapter', '[', '5', ']', + '/', 'section', '[', '2', ']']) + check("chapter//para", ['chapter', '//', 'para']) + check("//para", ['//', 'para']) + check("//olist/item", ['//', 'olist', '/', 'item']) + check(".", ['.']) + check(".//para", ['.', '//', 'para']) + check("..", ['..']) + check("../@lang", ['..', '/', '@', 'lang']) + check("chapter[title]", ['chapter', '[', 'title', ']']) + check("employee[@secretary and @assistant]", ['employee', + '[', '@', 'secretary', '', 'and', '', '@', 'assistant', ']']) - Issue #2746 + # additional tests + check("{http://spam}egg", ['{http://spam}egg']) + check("./spam.egg", ['.', '/', 'spam.egg']) + check(".//{http://spam}egg", ['.', '//', '{http://spam}egg']) - >>> ET.tostring(ET.PI('test', '')) - b'?>' - >>> ET.tostring(ET.PI('test', '\xe3'), 'latin-1') - b"\\n\\xe3?>" - """ + def 
test_processinginstruction(self): + # Test ProcessingInstruction directly + + self.assertEqual(ET.tostring(ET.ProcessingInstruction('test', 'instruction')), + b'') + self.assertEqual(ET.tostring(ET.PI('test', 'instruction')), + b'') + + # Issue #2746 + + self.assertEqual(ET.tostring(ET.PI('test', '')), + b'?>') + self.assertEqual(ET.tostring(ET.PI('test', '\xe3'), 'latin-1'), + b"\n" + b"\xe3?>") + + def test_html_empty_elems_serialization(self): + # issue 15970 + # from http://www.w3.org/TR/html401/index/elements.html + for element in ['AREA', 'BASE', 'BASEFONT', 'BR', 'COL', 'FRAME', 'HR', + 'IMG', 'INPUT', 'ISINDEX', 'LINK', 'META', 'PARAM']: + for elem in [element, element.lower()]: + expected = '<%s>' % elem + serialized = serialize(ET.XML('<%s />' % elem), method='html') + self.assertEqual(serialized, expected) + serialized = serialize(ET.XML('<%s>' % (elem,elem)), + method='html') + self.assertEqual(serialized, expected) + # # xinclude tests (samples from appendix C of the xinclude specification) @@ -1120,79 +986,6 @@ """.format(html.escape(SIMPLE_XMLFILE, True)) - -def xinclude_loader(href, parse="xml", encoding=None): - try: - data = XINCLUDE[href] - except KeyError: - raise OSError("resource not found") - if parse == "xml": - data = ET.XML(data) - return data - -def xinclude(): - r""" - Basic inclusion example (XInclude C.1) - - >>> from xml.etree import ElementInclude - - >>> document = xinclude_loader("C1.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C1 - -
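The issue 15970 behavior checked by `test_html_empty_elems_serialization` is easy to reproduce: HTML void elements are written without a closing tag by the "html" serializer, regardless of tag case.

```python
import xml.etree.ElementTree as ET

# Void HTML elements get no closing tag under method="html".
br = ET.tostring(ET.XML("<br />"), method="html")
assert br == b"<br>"
# The empty-element check is case-insensitive, but the original
# case is preserved in the output.
meta = ET.tostring(ET.XML("<META></META>"), method="html")
assert meta == b"<META>"
```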

<p>120 Mz is adequate for an average home user.</p>

- -

The opinions represented herein represent those of the individual
- and should not be interpreted as official policy endorsed by this
- organization.

-
-
- - Textual inclusion example (XInclude C.2) - - >>> document = xinclude_loader("C2.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C2 - -

This document has been accessed - 324387 times.

-
- - Textual inclusion after sibling element (based on modified XInclude C.2) - - >>> document = xinclude_loader("C2b.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C2b - -

This document has been accessed - 324387 times.

-
- - Textual inclusion of XML example (XInclude C.3) - - >>> document = xinclude_loader("C3.xml") - >>> ElementInclude.include(document, xinclude_loader) - >>> print(serialize(document)) # C3 - -

The following is the source of the "data.xml" resource:

- <?xml version='1.0'?> - <data> - <item><![CDATA[Brooks & Shields]]></item> - </data> - -
- - Fallback example (XInclude C.5) - Note! Fallback support is not yet implemented - - >>> document = xinclude_loader("C5.xml") - >>> ElementInclude.include(document, xinclude_loader) - Traceback (most recent call last): - OSError: resource not found - >>> # print(serialize(document)) # C5 - """ - - # # badly formatted xi:include tags @@ -1213,410 +1006,412 @@ """ -def xinclude_failures(): - r""" - Test failure to locate included XML file. +class XIncludeTest(unittest.TestCase): - >>> from xml.etree import ElementInclude + def xinclude_loader(self, href, parse="xml", encoding=None): + try: + data = XINCLUDE[href] + except KeyError: + raise OSError("resource not found") + if parse == "xml": + data = ET.XML(data) + return data - >>> def none_loader(href, parser, encoding=None): - ... return None + def none_loader(self, href, parser, encoding=None): + return None - >>> document = ET.XML(XINCLUDE["C1.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: cannot load 'disclaimer.xml' as 'xml' + def _my_loader(self, href, parse): + # Used to avoid a test-dependency problem where the default loader + # of ElementInclude uses the pyET parser for cET tests. + if parse == 'xml': + with open(href, 'rb') as f: + return ET.parse(f).getroot() + else: + return None - Test failure to locate included text file. + def test_xinclude_default(self): + from xml.etree import ElementInclude + doc = self.xinclude_loader('default.xml') + ElementInclude.include(doc, self._my_loader) + self.assertEqual(serialize(doc), + '\n' + '

Example.

\n' + ' \n' + ' text\n' + ' texttail\n' + ' \n' + '\n' + '
') - >>> document = ET.XML(XINCLUDE["C2.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: cannot load 'count.txt' as 'text' + def test_xinclude(self): + from xml.etree import ElementInclude - Test bad parse type. + # Basic inclusion example (XInclude C.1) + document = self.xinclude_loader("C1.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

120 Mz is adequate for an average home user.

\n' + ' \n' + '

The opinions represented herein represent those of the individual\n' + ' and should not be interpreted as official policy endorsed by this\n' + ' organization.

\n' + '
\n' + '
') # C1 - >>> document = ET.XML(XINCLUDE_BAD["B1.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: unknown parse type in xi:include tag ('BAD_TYPE') + # Textual inclusion example (XInclude C.2) + document = self.xinclude_loader("C2.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

This document has been accessed\n' + ' 324387 times.

\n' + '
') # C2 - Test xi:fallback outside xi:include. + # Textual inclusion after sibling element (based on modified XInclude C.2) + document = self.xinclude_loader("C2b.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

This document has been accessed\n' + ' 324387 times.

\n' + '
') # C2b - >>> document = ET.XML(XINCLUDE_BAD["B2.xml"]) - >>> ElementInclude.include(document, loader=none_loader) - Traceback (most recent call last): - xml.etree.ElementInclude.FatalIncludeError: xi:fallback tag must be child of xi:include ('{http://www.w3.org/2001/XInclude}fallback') - """ + # Textual inclusion of XML example (XInclude C.3) + document = self.xinclude_loader("C3.xml") + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(serialize(document), + '\n' + '

The following is the source of the "data.xml" resource:

\n' + " <?xml version='1.0'?>\n" + '<data>\n' + ' <item><![CDATA[Brooks & Shields]]></item>\n' + '</data>\n' + '\n' + '
') # C3 + + # Fallback example (XInclude C.5) + # Note! Fallback support is not yet implemented + document = self.xinclude_loader("C5.xml") + with self.assertRaises(OSError) as cm: + ElementInclude.include(document, self.xinclude_loader) + self.assertEqual(str(cm.exception), 'resource not found') + self.assertEqual(serialize(document), + '
\n' + ' \n' + ' \n' + ' \n' + ' Report error\n' + ' \n' + ' \n' + ' \n' + '
') # C5 + + def test_xinclude_failures(self): + from xml.etree import ElementInclude + + # Test failure to locate included XML file. + document = ET.XML(XINCLUDE["C1.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "cannot load 'disclaimer.xml' as 'xml'") + + # Test failure to locate included text file. + document = ET.XML(XINCLUDE["C2.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "cannot load 'count.txt' as 'text'") + + # Test bad parse type. + document = ET.XML(XINCLUDE_BAD["B1.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "unknown parse type in xi:include tag ('BAD_TYPE')") + + # Test xi:fallback outside xi:include. 
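The core pattern of this conversion, trading a doctest "Traceback (most recent call last)" expectation for an exception check, looks like this outside unittest. The document literal is reconstructed for illustration (the original tags were lost in transit), so the assertion hedges on the message prefix rather than the exact column number.

```python
import xml.etree.ElementTree as ET

# An undefined entity raises ET.ParseError during parsing.
try:
    ET.XML("<document>&entity;</document>")
except ET.ParseError as exc:
    message = str(exc)
else:
    message = "no error"
```

Inside a `unittest.TestCase`, the same check becomes `with self.assertRaises(ET.ParseError) as cm:` followed by an assertion on `str(cm.exception)`, as in the converted tests above.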
+ document = ET.XML(XINCLUDE_BAD["B2.xml"]) + with self.assertRaises(ElementInclude.FatalIncludeError) as cm: + ElementInclude.include(document, loader=self.none_loader) + self.assertEqual(str(cm.exception), + "xi:fallback tag must be child of xi:include " + "('{http://www.w3.org/2001/XInclude}fallback')") # -------------------------------------------------------------------- # reported bugs -def bug_xmltoolkit21(): - """ +class BugsTest(unittest.TestCase): - marshaller gives obscure errors for non-string values + def test_bug_xmltoolkit21(self): + # marshaller gives obscure errors for non-string values - >>> elem = ET.Element(123) - >>> serialize(elem) # tag - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.text = 123 - >>> serialize(elem) # text - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.tail = 123 - >>> serialize(elem) # tail - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.set(123, "123") - >>> serialize(elem) # attribute key - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) - >>> elem = ET.Element("elem") - >>> elem.set("123", 123) - >>> serialize(elem) # attribute value - Traceback (most recent call last): - TypeError: cannot serialize 123 (type int) + def check(elem): + with self.assertRaises(TypeError) as cm: + serialize(elem) + self.assertEqual(str(cm.exception), + 'cannot serialize 123 (type int)') - """ + elem = ET.Element(123) + check(elem) # tag -def bug_xmltoolkit25(): - """ + elem = ET.Element("elem") + elem.text = 123 + check(elem) # text - typo in ElementTree.findtext + elem = ET.Element("elem") + elem.tail = 123 + check(elem) # tail - >>> elem = ET.XML(SAMPLE_XML) - >>> tree = ET.ElementTree(elem) - >>> tree.findtext("tag") - 'text' - >>> tree.findtext("section/tag") - 'subtext' + elem = 
ET.Element("elem") + elem.set(123, "123") + check(elem) # attribute key - """ + elem = ET.Element("elem") + elem.set("123", 123) + check(elem) # attribute value -def bug_xmltoolkit28(): - """ + def test_bug_xmltoolkit25(self): + # typo in ElementTree.findtext - .//tag causes exceptions + elem = ET.XML(SAMPLE_XML) + tree = ET.ElementTree(elem) + self.assertEqual(tree.findtext("tag"), 'text') + self.assertEqual(tree.findtext("section/tag"), 'subtext') - >>> tree = ET.XML("
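The xmltoolkit21 checks above all expect the same serializer error for non-string values; one case in isolation:

```python
import xml.etree.ElementTree as ET

# A non-string text value is rejected at serialization time.
elem = ET.Element("elem")
elem.text = 123
try:
    ET.tostring(elem)
except TypeError as exc:
    message = str(exc)
else:
    message = "no error"
assert message == "cannot serialize 123 (type int)"
```

The same message is produced for bad tags, tails, attribute keys and attribute values, which is why the conversion factors it into a single `check()` helper.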
") - >>> summarize_list(tree.findall(".//thead")) - [] - >>> summarize_list(tree.findall(".//tbody")) - ['tbody'] + def test_bug_xmltoolkit28(self): + # .//tag causes exceptions - """ + tree = ET.XML("
") + self.assertEqual(summarize_list(tree.findall(".//thead")), []) + self.assertEqual(summarize_list(tree.findall(".//tbody")), ['tbody']) -def bug_xmltoolkitX1(): - """ + def test_bug_xmltoolkitX1(self): + # dump() doesn't flush the output buffer - dump() doesn't flush the output buffer + tree = ET.XML("
") + with support.captured_stdout() as stdout: + ET.dump(tree) + self.assertEqual(stdout.getvalue(), '
\n') - >>> tree = ET.XML("
") - >>> ET.dump(tree); print("tail") -
- tail + def test_bug_xmltoolkit39(self): + # non-ascii element and attribute names doesn't work - """ + tree = ET.XML(b"") + self.assertEqual(ET.tostring(tree, "utf-8"), b'') -def bug_xmltoolkit39(): - """ + tree = ET.XML(b"" + b"") + self.assertEqual(tree.attrib, {'\xe4ttr': 'v\xe4lue'}) + self.assertEqual(ET.tostring(tree, "utf-8"), + b'') - non-ascii element and attribute names doesn't work + tree = ET.XML(b"" + b'text') + self.assertEqual(ET.tostring(tree, "utf-8"), + b'text') - >>> tree = ET.XML(b"") - >>> ET.tostring(tree, "utf-8") - b'' + tree = ET.Element("t\u00e4g") + self.assertEqual(ET.tostring(tree, "utf-8"), b'') - >>> tree = ET.XML(b"") - >>> tree.attrib - {'\\xe4ttr': 'v\\xe4lue'} - >>> ET.tostring(tree, "utf-8") - b'' + tree = ET.Element("tag") + tree.set("\u00e4ttr", "v\u00e4lue") + self.assertEqual(ET.tostring(tree, "utf-8"), + b'') - >>> tree = ET.XML(b"text") - >>> ET.tostring(tree, "utf-8") - b'text' + def test_bug_xmltoolkit54(self): + # problems handling internally defined entities - >>> tree = ET.Element("t\u00e4g") - >>> ET.tostring(tree, "utf-8") - b'' + e = ET.XML("]>" + '&ldots;') + self.assertEqual(serialize(e, encoding="us-ascii"), + b'') + self.assertEqual(serialize(e), '\u8230') - >>> tree = ET.Element("tag") - >>> tree.set("\u00e4ttr", "v\u00e4lue") - >>> ET.tostring(tree, "utf-8") - b'' + def test_bug_xmltoolkit55(self): + # make sure we're reporting the first error, not the last - """ + with self.assertRaises(ET.ParseError) as cm: + ET.XML(b"" + b'&ldots;&ndots;&rdots;') + self.assertEqual(str(cm.exception), + 'undefined entity &ldots;: line 1, column 36') -def bug_xmltoolkit54(): - """ + def test_bug_xmltoolkit60(self): + # Handle crash in stream source. 
- problems handling internally defined entities + class ExceptionFile: + def read(self, x): + raise OSError - >>> e = ET.XML("]>&ldots;") - >>> serialize(e, encoding="us-ascii") - b'' - >>> serialize(e) - '\u8230' + self.assertRaises(OSError, ET.parse, ExceptionFile()) - """ + def test_bug_xmltoolkit62(self): + # Don't crash when using custom entities. -def bug_xmltoolkit55(): - """ - - make sure we're reporting the first error, not the last - - >>> normalize_exception(ET.XML, b"&ldots;&ndots;&rdots;") - Traceback (most recent call last): - ParseError: undefined entity &ldots;: line 1, column 36 - - """ - -class ExceptionFile: - def read(self, x): - raise OSError - -def xmltoolkit60(): - """ - - Handle crash in stream source. - >>> tree = ET.parse(ExceptionFile()) - Traceback (most recent call last): - OSError - - """ - -XMLTOOLKIT62_DOC = """ + ENTITIES = {'rsquo': '\u2019', 'lsquo': '\u2018'} + parser = ET.XMLTreeBuilder() + parser.entity.update(ENTITIES) + parser.feed(""" A new cultivar of Begonia plant named ‘BCT9801BEG’. -""" +""") + t = parser.close() + self.assertEqual(t.find('.//paragraph').text, + 'A new cultivar of Begonia plant named \u2018BCT9801BEG\u2019.') + def test_bug_xmltoolkit63(self): + # Check reference leak. + def xmltoolkit63(): + tree = ET.TreeBuilder() + tree.start("tag", {}) + tree.data("text") + tree.end("tag") -def xmltoolkit62(): - """ + xmltoolkit63() + count = sys.getrefcount(None) + for i in range(1000): + xmltoolkit63() + self.assertEqual(sys.getrefcount(None), count) - Don't crash when using custom entities. + def test_bug_200708_newline(self): + # Preserve newlines in attributes. - >>> xmltoolkit62() - 'A new cultivar of Begonia plant named \u2018BCT9801BEG\u2019.' 
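The custom-entity mechanism used in the xmltoolkit62 test can be shown with a small document modeled on the test's ENTITY_XML fixture: the unresolved external parameter entity makes expat treat later undefined entities as skippable, so the parser's `entity` dict gets a chance to resolve them. (`ET.XMLParser` replaces the deprecated `ET.XMLTreeBuilder` name used in the old code.)

```python
import xml.etree.ElementTree as ET

# Document modeled on the test's ENTITY_XML: the %user-entities;
# reference to an unread external file means expat cannot know all
# entity declarations, so &entity; falls through to the default
# handler, which consults parser.entity.
doc = (
    "<!DOCTYPE document [\n"
    "<!ENTITY % user-entities SYSTEM 'user-entities.xml'>\n"
    "%user-entities;\n"
    "]>\n"
    "<document>&entity;</document>"
)
parser = ET.XMLParser()
parser.entity["entity"] = "text"
parser.feed(doc)
root = parser.close()
assert root.tag == "document" and root.text == "text"
```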
+ e = ET.Element('SomeTag', text="def _f():\n return 3\n") + self.assertEqual(ET.tostring(e), + b'') + self.assertEqual(ET.XML(ET.tostring(e)).get("text"), + 'def _f():\n return 3\n') + self.assertEqual(ET.tostring(ET.XML(ET.tostring(e))), + b'') - """ - ENTITIES = {'rsquo': '\u2019', 'lsquo': '\u2018'} - parser = ET.XMLTreeBuilder() - parser.entity.update(ENTITIES) - parser.feed(XMLTOOLKIT62_DOC) - t = parser.close() - return t.find('.//paragraph').text + def test_bug_200708_close(self): + # Test default builder. + parser = ET.XMLParser() # default + parser.feed("some text") + self.assertEqual(parser.close().tag, 'element') -def xmltoolkit63(): - """ + # Test custom builder. + class EchoTarget: + def close(self): + return ET.Element("element") # simulate root + parser = ET.XMLParser(EchoTarget()) + parser.feed("some text") + self.assertEqual(parser.close().tag, 'element') - Check reference leak. - >>> xmltoolkit63() - >>> count = sys.getrefcount(None) - >>> for i in range(1000): - ... xmltoolkit63() - >>> sys.getrefcount(None) - count - 0 + def test_bug_200709_default_namespace(self): + e = ET.Element("{default}elem") + s = ET.SubElement(e, "{default}elem") + self.assertEqual(serialize(e, default_namespace="default"), # 1 + '') - """ - tree = ET.TreeBuilder() - tree.start("tag", {}) - tree.data("text") - tree.end("tag") + e = ET.Element("{default}elem") + s = ET.SubElement(e, "{default}elem") + s = ET.SubElement(e, "{not-default}elem") + self.assertEqual(serialize(e, default_namespace="default"), # 2 + '' + '' + '' + '') -# -------------------------------------------------------------------- + e = ET.Element("{default}elem") + s = ET.SubElement(e, "{default}elem") + s = ET.SubElement(e, "elem") # unprefixed name + with self.assertRaises(ValueError) as cm: + serialize(e, default_namespace="default") # 3 + self.assertEqual(str(cm.exception), + 'cannot use non-qualified names with default_namespace option') + def test_bug_200709_register_namespace(self): + e = 
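The newline-preservation behavior from `test_bug_200708_newline` round-trips like this: newlines in attribute values are escaped as `&#10;` on output so they survive a reparse.

```python
import xml.etree.ElementTree as ET

# Newlines in attribute values are escaped as &#10; so that the
# value survives serialization and reparsing unchanged.
e = ET.Element("SomeTag", text="def _f():\n    return 3\n")
out = ET.tostring(e)
assert out == b'<SomeTag text="def _f():&#10;    return 3&#10;" />'
assert ET.XML(out).get("text") == "def _f():\n    return 3\n"
```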
ET.Element("{http://namespace.invalid/does/not/exist/}title") + self.assertEqual(ET.tostring(e), + b'') + ET.register_namespace("foo", "http://namespace.invalid/does/not/exist/") + e = ET.Element("{http://namespace.invalid/does/not/exist/}title") + self.assertEqual(ET.tostring(e), + b'') -def bug_200708_newline(): - r""" + # And the Dublin Core namespace is in the default list: - Preserve newlines in attributes. + e = ET.Element("{http://purl.org/dc/elements/1.1/}title") + self.assertEqual(ET.tostring(e), + b'') - >>> e = ET.Element('SomeTag', text="def _f():\n return 3\n") - >>> ET.tostring(e) - b'' - >>> ET.XML(ET.tostring(e)).get("text") - 'def _f():\n return 3\n' - >>> ET.tostring(ET.XML(ET.tostring(e))) - b'' + def test_bug_200709_element_comment(self): + # Not sure if this can be fixed, really (since the serializer needs + # ET.Comment, not cET.comment). - """ + a = ET.Element('a') + a.append(ET.Comment('foo')) + self.assertEqual(a[0].tag, ET.Comment) -def bug_200708_close(): - """ + a = ET.Element('a') + a.append(ET.PI('foo')) + self.assertEqual(a[0].tag, ET.PI) - Test default builder. - >>> parser = ET.XMLParser() # default - >>> parser.feed("some text") - >>> summarize(parser.close()) - 'element' + def test_bug_200709_element_insert(self): + a = ET.Element('a') + b = ET.SubElement(a, 'b') + c = ET.SubElement(a, 'c') + d = ET.Element('d') + a.insert(0, d) + self.assertEqual(summarize_list(a), ['d', 'b', 'c']) + a.insert(-1, d) + self.assertEqual(summarize_list(a), ['d', 'b', 'd', 'c']) - Test custom builder. - >>> class EchoTarget: - ... def close(self): - ... 
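The `register_namespace()` behavior tested above, shown with a hypothetical URI: unregistered URIs get generated `ns0`, `ns1`, ... prefixes, while registered ones serialize with the chosen prefix. Note that registration is process-global state.

```python
import xml.etree.ElementTree as ET

uri = "http://example.invalid/demo/"   # hypothetical URI
# Before registration: a generated prefix.
assert ET.tostring(ET.Element("{%s}title" % uri)) == \
    b'<ns0:title xmlns:ns0="http://example.invalid/demo/" />'
# After registration: the chosen prefix.
ET.register_namespace("demo", uri)
assert ET.tostring(ET.Element("{%s}title" % uri)) == \
    b'<demo:title xmlns:demo="http://example.invalid/demo/" />'
```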
return ET.Element("element") # simulate root - >>> parser = ET.XMLParser(EchoTarget()) - >>> parser.feed("some text") - >>> summarize(parser.close()) - 'element' + def test_bug_200709_iter_comment(self): + a = ET.Element('a') + b = ET.SubElement(a, 'b') + comment_b = ET.Comment("TEST-b") + b.append(comment_b) + self.assertEqual(summarize_list(a.iter(ET.Comment)), [ET.Comment]) - """ + # -------------------------------------------------------------------- + # reported on bugs.python.org -def bug_200709_default_namespace(): - """ + def test_bug_1534630(self): + bob = ET.TreeBuilder() + e = bob.data("data") + e = bob.start("tag", {}) + e = bob.end("tag") + e = bob.close() + self.assertEqual(serialize(e), '') - >>> e = ET.Element("{default}elem") - >>> s = ET.SubElement(e, "{default}elem") - >>> serialize(e, default_namespace="default") # 1 - '' + def test_issue6233(self): + e = ET.XML(b"" + b't\xc3\xa3g') + self.assertEqual(ET.tostring(e, 'ascii'), + b"\n" + b'tãg') + e = ET.XML(b"" + b't\xe3g') + self.assertEqual(ET.tostring(e, 'ascii'), + b"\n" + b'tãg') - >>> e = ET.Element("{default}elem") - >>> s = ET.SubElement(e, "{default}elem") - >>> s = ET.SubElement(e, "{not-default}elem") - >>> serialize(e, default_namespace="default") # 2 - '' + def test_issue3151(self): + e = ET.XML('') + self.assertEqual(e.tag, '{${stuff}}localname') + t = ET.ElementTree(e) + self.assertEqual(ET.tostring(e), b'') - >>> e = ET.Element("{default}elem") - >>> s = ET.SubElement(e, "{default}elem") - >>> s = ET.SubElement(e, "elem") # unprefixed name - >>> serialize(e, default_namespace="default") # 3 - Traceback (most recent call last): - ValueError: cannot use non-qualified names with default_namespace option + def test_issue6565(self): + elem = ET.XML("") + self.assertEqual(summarize_list(elem), ['tag']) + newelem = ET.XML(SAMPLE_XML) + elem[:] = newelem[:] + self.assertEqual(summarize_list(elem), ['tag', 'tag', 'section']) - """ + def test_issue10777(self): + # Registering a namespace 
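The `default_namespace` option exercised in `test_bug_200709_default_namespace` is passed to `ElementTree.write()` (the test's `serialize()` helper forwards it there); elements in the default namespace are then written without a prefix.

```python
import io
import xml.etree.ElementTree as ET

# Serialize with a default namespace: no prefix on matching elements.
e = ET.Element("{default}elem")
ET.SubElement(e, "{default}elem")
buf = io.StringIO()
ET.ElementTree(e).write(buf, encoding="unicode",
                        default_namespace="default")
assert buf.getvalue() == '<elem xmlns="default"><elem /></elem>'
```

As the test's third case shows, mixing unqualified tag names with `default_namespace` raises a `ValueError`, since an unprefixed name would be captured by the default namespace.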
twice caused a "dictionary changed size during + # iteration" bug. -def bug_200709_register_namespace(): - """ - - >>> ET.tostring(ET.Element("{http://namespace.invalid/does/not/exist/}title")) - b'' - >>> ET.register_namespace("foo", "http://namespace.invalid/does/not/exist/") - >>> ET.tostring(ET.Element("{http://namespace.invalid/does/not/exist/}title")) - b'' - - And the Dublin Core namespace is in the default list: - - >>> ET.tostring(ET.Element("{http://purl.org/dc/elements/1.1/}title")) - b'' - - """ - -def bug_200709_element_comment(): - """ - - Not sure if this can be fixed, really (since the serializer needs - ET.Comment, not cET.comment). - - >>> a = ET.Element('a') - >>> a.append(ET.Comment('foo')) - >>> a[0].tag == ET.Comment - True - - >>> a = ET.Element('a') - >>> a.append(ET.PI('foo')) - >>> a[0].tag == ET.PI - True - - """ - -def bug_200709_element_insert(): - """ - - >>> a = ET.Element('a') - >>> b = ET.SubElement(a, 'b') - >>> c = ET.SubElement(a, 'c') - >>> d = ET.Element('d') - >>> a.insert(0, d) - >>> summarize_list(a) - ['d', 'b', 'c'] - >>> a.insert(-1, d) - >>> summarize_list(a) - ['d', 'b', 'd', 'c'] - - """ - -def bug_200709_iter_comment(): - """ - - >>> a = ET.Element('a') - >>> b = ET.SubElement(a, 'b') - >>> comment_b = ET.Comment("TEST-b") - >>> b.append(comment_b) - >>> summarize_list(a.iter(ET.Comment)) - [''] - - """ - -# -------------------------------------------------------------------- -# reported on bugs.python.org - -def bug_1534630(): - """ - - >>> bob = ET.TreeBuilder() - >>> e = bob.data("data") - >>> e = bob.start("tag", {}) - >>> e = bob.end("tag") - >>> e = bob.close() - >>> serialize(e) - '' - - """ - -def check_issue6233(): - """ - - >>> e = ET.XML(b"t\\xc3\\xa3g") - >>> ET.tostring(e, 'ascii') - b"\\ntãg" - >>> e = ET.XML(b"t\\xe3g") - >>> ET.tostring(e, 'ascii') - b"\\ntãg" - - """ - -def check_issue3151(): - """ - - >>> e = ET.XML('') - >>> e.tag - '{${stuff}}localname' - >>> t = ET.ElementTree(e) - >>> 
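The `test_bug_1534630` sequence above relies on `TreeBuilder` quietly discarding character data that arrives before the first `start()` event (the original bug was a crash):

```python
import xml.etree.ElementTree as ET

# TreeBuilder assembles a tree from start/data/end events; data
# arriving before the first start() is discarded, not an error.
builder = ET.TreeBuilder()
builder.data("data")      # ignored: no element is open yet
builder.start("tag", {})
builder.data("text")
builder.end("tag")
root = builder.close()
assert ET.tostring(root) == b"<tag>text</tag>"
```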
ET.tostring(e) - b'' - - """ - -def check_issue6565(): - """ - - >>> elem = ET.XML("") - >>> summarize_list(elem) - ['tag'] - >>> newelem = ET.XML(SAMPLE_XML) - >>> elem[:] = newelem[:] - >>> summarize_list(elem) - ['tag', 'tag', 'section'] - - """ - -def check_issue10777(): - """ - Registering a namespace twice caused a "dictionary changed size during - iteration" bug. - - >>> ET.register_namespace('test10777', 'http://myuri/') - >>> ET.register_namespace('test10777', 'http://myuri/') - """ + ET.register_namespace('test10777', 'http://myuri/') + ET.register_namespace('test10777', 'http://myuri/') # -------------------------------------------------------------------- @@ -1698,7 +1493,7 @@ self.assertEqual(len(e2), 2) self.assertEqualElements(e, e2) -class ElementTreeTest(unittest.TestCase): +class ElementTreeTypeTest(unittest.TestCase): def test_istype(self): self.assertIsInstance(ET.ParseError, type) self.assertIsInstance(ET.QName, type) @@ -1738,19 +1533,6 @@ mye = MyElement('joe') self.assertEqual(mye.newmethod(), 'joe') - def test_html_empty_elems_serialization(self): - # issue 15970 - # from http://www.w3.org/TR/html401/index/elements.html - for element in ['AREA', 'BASE', 'BASEFONT', 'BR', 'COL', 'FRAME', 'HR', - 'IMG', 'INPUT', 'ISINDEX', 'LINK', 'META', 'PARAM']: - for elem in [element, element.lower()]: - expected = '<%s>' % elem - serialized = serialize(ET.XML('<%s />' % elem), method='html') - self.assertEqual(serialized, expected) - serialized = serialize(ET.XML('<%s>' % (elem,elem)), - method='html') - self.assertEqual(serialized, expected) - class ElementFindTest(unittest.TestCase): def test_find_simple(self): @@ -2064,31 +1846,6 @@ 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd')) -class XincludeTest(unittest.TestCase): - def _my_loader(self, href, parse): - # Used to avoid a test-dependency problem where the default loader - # of ElementInclude uses the pyET parser for cET tests. 
- if parse == 'xml': - with open(href, 'rb') as f: - return ET.parse(f).getroot() - else: - return None - - def test_xinclude_default(self): - from xml.etree import ElementInclude - doc = xinclude_loader('default.xml') - ElementInclude.include(doc, self._my_loader) - s = serialize(doc) - self.assertEqual(s.strip(), ''' -

Example.

- - text - texttail - - -
''')
-
-
 class XMLParserTest(unittest.TestCase):
     sample1 = '22'
     sample2 = ('

From python-checkins at python.org Mon Feb 25 17:40 2013
From: python-checkins at python.org (brett.cannon)
Date: Mon, 25 Feb 2013 17:40 +0100 (CET)
Subject: [Python-checkins] peps: Add PEP 445: The Argument Clinic DSL
Message-ID: <3ZF85b4VPXzNl2@mail.python.org>

http://hg.python.org/peps/rev/7aa92fb33436
changeset:   4776:7aa92fb33436
user:        Brett Cannon
date:        Mon Feb 25 11:39:56 2013 -0500
summary:
  Add PEP 445: The Argument Clinic DSL

files:
  pep-0445.txt |  481 +++++++++++++++++++++++++++++++++++++++
  1 files changed, 481 insertions(+), 0 deletions(-)


diff --git a/pep-0445.txt b/pep-0445.txt
new file mode 100644
--- /dev/null
+++ b/pep-0445.txt
@@ -0,0 +1,481 @@
+PEP: 445
+Title: The Argument Clinic DSL
+Version: $Revision$
+Last-Modified: $Date$
+Author: Larry Hastings
+Discussions-To: Python-Dev
+Status: Draft
+Type: Standards Track
+Content-Type: text/x-rst
+Created: 22-Feb-2013
+
+
+Abstract
+========
+
+This document proposes "Argument Clinic", a DSL designed
+to facilitate argument processing for built-in functions
+in the implementation of CPython.
+
+Rationale and Goals
+===================
+
+The primary implementation of Python, "CPython", is written in
+a mixture of Python and C. One of the implementation details
+of CPython is what are called "built-in" functions--functions
+available to Python programs but written in C. When a
+Python program calls a built-in function and passes in
+arguments, those arguments must be translated from Python
+values into C values. This process is called "parsing arguments".
+
+As of CPython 3.3, arguments to functions are primarily
+parsed with one of two functions: the original
+``PyArg_ParseTuple()``, [1]_ and the more modern
+``PyArg_ParseTupleAndKeywords()``. [2]_
+The former function only handles positional parameters; the
+latter also accommodates keyword and keyword-only parameters,
+and is preferred for new code.
+
+``PyArg_ParseTuple()`` was a reasonable approach when it was
+first conceived.
The programmer specified the translation for +the arguments in a "format string": [3]_ each parameter matched to +a "format unit", a one-or-two character sequence telling +``PyArg_ParseTuple()`` what Python types to accept and how +to translate them into the appropriate C value for that +parameter. There were only a dozen or so of these "format +units", and each one was distinct and easy to understand. + +Over the years the ``PyArg_Parse`` interface has been extended in +numerous ways. The modern API is quite complex, to the point +that it is somewhat painful to use. Consider: + + * There are now forty different "format units"; a few are + even three characters long. + This overload of symbology makes it difficult to understand + what the format string says without constantly cross-indexing + it with the documentation. + * There are also six meta-format units that may be buried + in the format string. (They are: ``"()|$:;"``.) + * The more format units are added, the less likely it is the + implementor can pick an easy-to-use mnemonic for the format + unit, because the character of choice is probably already in + use. In other words, the more format units we have, the more + obtuse the format units become. + * Several format units are nearly identical to others, having + only subtle differences. This makes understanding the exact + semantics of the format string even harder. + * The docstring is specified as a static C string, + which is mildly bothersome to read and edit. + * When adding a new parameter to a function using + ``PyArg_ParseTupleAndKeywords()``, it's necessary to + touch six different places in the code: [4]_ + + * Declaring the variable to store the argument. + * Passing in a pointer to that variable in the correct + spot in ``PyArg_ParseTupleAndKeywords()``, also passing + in any "length" or "converter" arguments in the correct + order. 
+ * Adding the name of the argument in the correct spot + of the "keywords" array passed in to + ``PyArg_ParseTupleAndKeywords()``. + * Adding the format unit to the correct spot in the + format string. + * Adding the parameter to the prototype in the + docstring. + * Documenting the parameter in the docstring. + + * There is currently no mechanism for builtin functions + to provide their "signature" information (see + ``inspect.getfullargspec`` and ``inspect.Signature``). + Adding this information using a mechanism similar to + the existing ``PyArg_Parse`` functions would require + repeating ourselves yet again. + +The goal of Argument Clinic is to replace this API with a +mechanism inheriting none of these downsides: + + * You need specify each parameter only once. + * All information about a parameter is kept together in one place. + * For each parameter, you specify its type in C; + Argument Clinic handles the translation from + Python value into C value for you. + * Argument Clinic also allows for fine-tuning + of argument processing behavior with + highly-readable "flags", both per-parameter + and applying across the whole function. + * Docstrings are written in plain text. + * From this, Argument Clinic generates for you all + the mundane, repetitious code and data structures + CPython needs internally. Once you've specified + the interface, the next step is simply to write your + implementation using native C types. Every detail + of argument parsing is handled for you. + +Future goals of Argument Clinic include: + + * providing signature information for builtins, and + * speed improvements to the generated code. + +DSL Syntax Summary +================== + +The Argument Clinic DSL is specified as a comment +embedded in a C file, as follows. The "Example" column on the +right shows you sample input to the Argument Clinic DSL, +and the "Section" column on the left specifies what each line +represents in turn. 
+ +:: + + +-----------------------+-----------------------------------------------------+ + | Section | Example | + +-----------------------+-----------------------------------------------------+ + | Clinic DSL start | /*[clinic] | + | Function declaration | module.function_name -> return_annotation | + | Function flags | flag flag2 flag3=value | + | Parameter declaration | type name = default | + | Parameter flags | flag flag2 flag3=value | + | Parameter docstring | Lorem ipsum dolor sit amet, consectetur | + | | adipisicing elit, sed do eiusmod tempor | + | Function docstring | Lorem ipsum dolor sit amet, consectetur adipisicing | + | | elit, sed do eiusmod tempor incididunt ut labore et | + | Clinic DSL end | [clinic]*/ | + | Clinic output | ... | + | Clinic output end | /*[clinic end output:]*/ | + +-----------------------+-----------------------------------------------------+ + + +General Behavior Of the Argument Clinic DSL +------------------------------------------- + +All lines support ``#`` as a line comment delimiter *except* docstrings. +Blank lines are always ignored. + +Like Python itself, leading whitespace is significant in the Argument Clinic +DSL. The first line of the "function" section is the declaration; +all subsequent lines at the same indent are function flags. Once you indent, +the first line is a parameter declaration; subsequent lines at that indent +are parameter flags. Indent one more time for the lines of the parameter +docstring. Finally, outdent back to the same level as the function +declaration for the function docstring. + +Function Declaration +-------------------- + +The return annotation is optional. If skipped, the arrow ("``->``") must also be omitted. + +Parameter Declaration +--------------------- + +The "type" is a C type. If it's a pointer type, you must specify +a single space between the type and the "``*``", and zero spaces between +the "``*``" and the name. (e.g. 
"``PyObject *foo``", not "``PyObject* foo``") + +The "name" must be a legal C identifier. + +The "default" is a Python value. Default values are optional; +if not specified you must omit the equals sign too. Parameters +which don't have a default are implicitly required. The default +value is dynamically assigned, "live" in the generated C code, +and although it's specified as a Python value, it's translated +into a native C value in the generated C code. + +It's explicitly permitted to end the parameter declaration line +with a semicolon, though the semicolon is optional. This is +intended to allow directly cutting and pasting in declarations +from C code. However, the preferred style is without the semicolon. + + +Flags +----- + +"Flags" are like "``make -D``" arguments. They're unordered. Flags lines +are parsed much like the shell (specifically, using ``shlex.split()`` [5]_ ). +You can have as many flag lines as you like. Specifying a flag twice +is currently an error. + +Supported flags for functions: + +``basename`` + The basename to use for the generated C functions. + By default this is the name of the function from + the DSL, only with periods replaced by underscores. + +``positional-only`` + This function only supports positional parameters, + not keyword parameters. See `Functions With + Positional-Only Parameters`_ below. + +Supported flags for parameters: + +``bitwise`` + If the Python integer passed in is signed, copy the + bits directly even if it is negative. Only valid + for unsigned integer types. + +``converter`` + Backwards-compatibility support for parameter "converter" + functions. [6]_ The value should be the name of the converter + function in C. Only valid when the type of the parameter + is ``void *``. + +``default`` + The Python value to use in place of the parameter's actual + default in Python contexts. Specifically, when specified, + this value will be used for the parameter's default in the + docstring, and in the ``Signature``. 
(TBD: If the string is a + valid Python expression, renderable into a Python value + using ``eval()``, then the result of ``eval()`` on it will be used + as the default in the ``Signature``.) Ignored if there is no + default. + +``encoding`` + Encoding to use when encoding a Unicode string to a ``char *``. + Only valid when the type of the parameter is ``char *``. + +``group=`` + This parameter is part of a group of options that must either + all be specified or none specified. Parameters in the same + "group" must be contiguous. The value of the group flag + is the name used for the group variable, and therefore must + be legal as a C identifier. Only valid for functions + marked "``positional-only``"; see `Functions With + Positional-Only Parameters`_ below. + +``immutable`` + Only accept immutable values. + +``keyword-only`` + This parameter (and all subsequent parameters) is + keyword-only. Keyword-only parameters must also be + optional parameters. Not valid for positional-only functions. + +``length`` + This is an iterable type, and we also want its length. The + DSL will generate a second ``Py_ssize_t`` variable; + its name will be this parameter's name appended with + "``_length``". + +``nullable`` + ``None`` is a legal argument for this parameter. If ``None`` is + supplied on the Python side, the equivalent C argument will be + ``NULL``. Only valid for pointer types. + +``required`` + Normally any parameter that has a default value is + automatically optional. A parameter that has "required" + set will be considered required (non-optional) even if + it has a default value. The generated documentation + will also not show any default value. + +``types`` + Space-separated list of acceptable Python types for this + object. There are also four special-case types which + represent Python protocols: + + * buffer + * mapping + * number + * sequence + +``zeroes`` + This parameter is a string type, and its value should be + allowed to have embedded zeroes. 
Not valid for all + varieties of string parameters. + + +Python Code +----------- + +Argument Clinic also permits embedding Python code inside C files, +which is executed in-place when Argument Clinic processes the file. +Embedded code looks like this: + +:: + + /*[python] + + # this is python code! + print("/" + "* Hello world! *" + "/") + + [python]*/ + +Any Python code is valid. Python code sections in Argument Clinic +can also be used to modify Clinic's behavior at runtime; for example, +see `Extending Argument Clinic`_. + + +Output +====== + +Argument Clinic writes its output in-line in the C file, immediately after +the section of Clinic code. For "python" sections, the output is +everything printed using ``builtins.print``. For "clinic" sections, the +output is valid C code, including: + + * a ``#define`` providing the correct ``methoddef`` structure for the + function + * a prototype for the "impl" function--this is what you'll write to + implement this function + * a function that handles all argument processing, which calls your + "impl" function + * the definition line of the "impl" function + * and a comment indicating the end of output. + +The intention is that you will write the body of your impl function +immediately after the output--as in, you write a left-curly-brace +immediately after the end-of-output comment and write the implementation +of the builtin in the body there. (It's a bit strange at first--but oddly +convenient.) + +Argument Clinic will define the parameters of the impl function for you. +The function will take the "self" parameter passed in originally, all +the parameters you define, and possibly some extra generated parameters +("length" parameters; also "group" parameters, see next section). + +Argument Clinic also writes a checksum for the output section. This +is a valuable safety feature: if you modify the output by hand, Clinic +will notice that the checksum doesn't match, and will refuse to +overwrite the file. 
(You can force Clinic to overwrite with the "``-f``" +command-line argument; Clinic will also ignore the checksums when +using the "``-o``" command-line argument.) + + +Functions With Positional-Only Parameters +========================================= + +A significant fraction of Python builtins implemented in C use the +older positional-only API for processing arguments (``PyArg_ParseTuple()``). +In some instances, these builtins parse their arguments differently +based on how many arguments were passed in. This can provide some +bewildering flexibility: there may be groups of optional parameters, +which must either all be specified or none specified. And occasionally +these groups are on the *left!* (For example: ``curses.window.addch()``.) + +Argument Clinic supports these legacy use-cases with a special set +of flags. First, set the flag "``positional-only``" on the entire +function. Then, for every group of parameters that is collectively +optional, add a "``group=``" flag with a unique string to all the +parameters in that group. Note that these groups are permitted on +the right *or left* of any required parameters! However, all groups +(including the group of required parameters) must be contiguous. + +The impl function generated by Clinic will add an extra parameter for +every group, "``int _group``". This argument will be nonzero if +the group was specified on this call, and zero if it was not. + +Note that when operating in this mode, you cannot specify default +arguments. You can simulate defaults by putting parameters in +individual groups and detecting whether or not they were +specified--but generally speaking it's better to simply not +use "positional-only" where it isn't absolutely necessary. (TBD: It +might be possible to relax this restriction. But adding default +arguments into the mix of groups would seemingly make calculating which +groups are active a good deal harder.) 
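The checksum guard described earlier in this section can be sketched in Python. This is a hypothetical simplification for illustration only — the marker format, hash function, and the ``os_stat_impl`` name below are assumptions, not Clinic's actual scheme:

```python
import hashlib

def make_block(code: str) -> str:
    """Append an end-of-output marker carrying the code's checksum."""
    digest = hashlib.sha1(code.encode("utf-8")).hexdigest()
    return code + f"/*[clinic end output: checksum={digest}]*/\n"

def verify_block(block: str) -> bool:
    """Refuse to overwrite output whose code no longer matches its checksum."""
    code, _, trailer = block.rpartition("/*[clinic end output: checksum=")
    stored = trailer.rstrip().removesuffix("]*/")
    return hashlib.sha1(code.encode("utf-8")).hexdigest() == stored

# "os_stat_impl" is a made-up prototype standing in for generated output.
block = make_block("static PyObject *os_stat_impl(...);\n")
assert verify_block(block)                                # untouched output passes
assert not verify_block(block.replace("stat", "lstat"))   # hand-edited output fails
```

The point of the design is the same as in the PEP: generated output is disposable, so the tool must be able to tell cheap regeneration apart from a hand edit it would otherwise destroy.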
+ +Also, note that it's possible--even easy--to specify a set of groups +to a function such that there are several valid mappings from the number +of arguments to a valid set of groups. If this happens, Clinic will exit +with an error message. This should not be a problem, as positional-only +operation is only intended for legacy use cases, and all the legacy +functions using this quirky behavior should have unambiguous mappings. + + +Current Status +============== + +As of this writing, there is a working prototype implementation of +Argument Clinic available online. [7]_ The prototype implements +the syntax above, and generates code using the existing ``PyArg_Parse`` +APIs. It supports translating to all current format units except ``"w*"``. +Sample functions using Argument Clinic exercise all major features, +including positional-only argument parsing. + +Extending Argument Clinic +------------------------- + +The prototype also currently provides an experimental extension mechanism, +allowing adding support for new types on-the-fly. See ``Modules/posixmodule.c`` +in the prototype for an example of its use. + + +Notes / TBD +=========== + +* Guido proposed having the "function docstring" be hand-written inline, + in the middle of the output, something like this: + + :: + + /*[clinic] + ... prototype and parameters (including parameter docstrings) go here + [clinic]*/ + ... some output ... + /*[clinic docstring start]*/ + ... hand-edited function docstring goes here <-- you edit this by hand! + /*[clinic docstring end]*/ + ... more output + /*[clinic output end]*/ + + I tried it this way and don't like it--I think it's clumsy. I prefer that + everything you write goes in one place, rather than having an island of + hand-edited stuff in the middle of the DSL output. + +* Do we need to support tuple unpacking? (The "``(OOO)``" style format string.) + Boy I sure hope not. + +* What about Python functions that take no arguments? 
This syntax doesn't + provide for that. Perhaps a lone indented "None" should mean "no arguments"? + +* This approach removes some dynamism / flexibility. With the existing + syntax one could theoretically pass in different encodings at runtime for + the "``es``"/"``et``" format units. AFAICT CPython doesn't do this itself, + however it's possible external users might do this. (Trivia: there are no + uses of "``es``" exercised by regrtest, and all the uses of "``et``" + exercised are in socketmodule.c, except for one in _ssl.c. They're all + static, specifying the encoding ``"idna"``.) + +* Right now the "basename" flag on a function changes the ``#define methoddef`` name + too. Should it, or should the #define'd methoddef name always be + ``{module_name}_{function_name}`` ? + + +References +========== + +.. [1] ``PyArg_ParseTuple()``: + http://docs.python.org/3/c-api/arg.html#PyArg_ParseTuple + +.. [2] ``PyArg_ParseTupleAndKeywords()``: + http://docs.python.org/3/c-api/arg.html#PyArg_ParseTupleAndKeywords + +.. [3] ``PyArg_`` format units: + http://docs.python.org/3/c-api/arg.html#strings-and-buffers + +.. [4] Keyword parameters for extension functions: + http://docs.python.org/3/extending/extending.html#keyword-parameters-for-extension-functions + +.. [5] ``shlex.split()``: + http://docs.python.org/3/library/shlex.html#shlex.split + +.. [6] ``PyArg_`` "converter" functions, see ``"O&"`` in this section: + http://docs.python.org/3/c-api/arg.html#other-objects + +.. [7] Argument Clinic prototype: + https://bitbucket.org/larry/python-clinic/ + +Copyright +========= + +This document has been placed in the public domain. + + + +.. 
+ Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: -- Repository URL: http://hg.python.org/peps From barry at python.org Mon Feb 25 17:42:14 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 25 Feb 2013 11:42:14 -0500 Subject: [Python-checkins] peps: Add PEP 445: The Argument Clinic DSL In-Reply-To: <3ZF85b4VPXzNl2@mail.python.org> References: <3ZF85b4VPXzNl2@mail.python.org> Message-ID: <20130225114214.4aeadb15@anarchist.wooz.org> On Feb 25, 2013, at 05:40 PM, brett.cannon wrote: >http://hg.python.org/peps/rev/7aa92fb33436 >changeset: 4776:7aa92fb33436 >user: Brett Cannon >date: Mon Feb 25 11:39:56 2013 -0500 >summary: > Add PEP 445: The Argument Clinic DSL I beat you with PEP 436. :) -Barry From python-checkins at python.org Mon Feb 25 18:23:34 2013 From: python-checkins at python.org (brett.cannon) Date: Mon, 25 Feb 2013 18:23:34 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?peps=3A_Larry=27s_PEP_was_already_com?= =?utf-8?q?mitted=2C_plus_PEP_444_is_not_the_end_of_the_numbers?= Message-ID: <3ZF93p4LPjzPM8@mail.python.org> http://hg.python.org/peps/rev/38d03a8c6734 changeset: 4777:38d03a8c6734 user: Brett Cannon date: Mon Feb 25 12:23:26 2013 -0500 summary: Larry's PEP was already committed, plus PEP 444 is not the end of the numbers files: pep-0445.txt | 481 --------------------------------------- 1 files changed, 0 insertions(+), 481 deletions(-) diff --git a/pep-0445.txt b/pep-0445.txt deleted file mode 100644 --- a/pep-0445.txt +++ /dev/null @@ -1,481 +0,0 @@ -PEP: 445 -Title: The Argument Clinic DSL -Version: $Revision$ -Last-Modified: $Date$ -Author: Larry Hastings -Discussions-To: Python-Dev -Status: Draft -Type: Standards Track -Content-Type: text/x-rst -Created: 22-Feb-2013 - - -Abstract -======== - -This document proposes "Argument Clinic", a DSL designed -to facilitate argument processing for built-in functions -in the implementation of CPython. 
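One of the PEP's stated future goals is signature information for builtins, which ``inspect`` could not recover from ``PyArg_Parse``-style code in the CPython 3.3 era. On a modern CPython, where Argument Clinic ultimately landed, converted builtins do expose signatures — the exact textual rendering may vary by version:

```python
import inspect

# In the CPython 3.3 era, introspecting a builtin like ord() raised an
# error because no signature data existed. On a current CPython the
# Clinic-generated metadata makes this work:
sig = inspect.signature(ord)
print(sig)  # e.g. (c, /) on current CPython versions
assert "c" in sig.parameters
```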
-
-Rationale and Goals
-===================
-
-The primary implementation of Python, "CPython", is written in
-a mixture of Python and C. One of the implementation details
-of CPython is what are called "built-in" functions--functions
-available to Python programs but written in C. When a
-Python program calls a built-in function and passes in
-arguments, those arguments must be translated from Python
-values into C values. This process is called "parsing arguments".
-
-As of CPython 3.3, arguments to functions are primarily
-parsed with one of two functions: the original
-``PyArg_ParseTuple()``, [1]_ and the more modern
-``PyArg_ParseTupleAndKeywords()``. [2]_
-The former function only handles positional parameters; the
-latter also accommodates keyword and keyword-only parameters,
-and is preferred for new code.
-
-``PyArg_ParseTuple()`` was a reasonable approach when it was
-first conceived. The programmer specified the translation for
-the arguments in a "format string": [3]_ each parameter matched to
-a "format unit", a one-or-two character sequence telling
-``PyArg_ParseTuple()`` what Python types to accept and how
-to translate them into the appropriate C value for that
-parameter. There were only a dozen or so of these "format
-units", and each one was distinct and easy to understand.
-
-Over the years the ``PyArg_Parse`` interface has been extended in
-numerous ways. The modern API is quite complex, to the point
-that it is somewhat painful to use. Consider:
-
- * There are now forty different "format units"; a few are
-   even three characters long.
-   This overload of symbology makes it difficult to understand
-   what the format string says without constantly cross-indexing
-   it with the documentation.
- * There are also six meta-format units that may be buried
-   in the format string. (They are: ``"()|$:;"``.)
- * The more format units are added, the less likely it is the - implementor can pick an easy-to-use mnemonic for the format - unit, because the character of choice is probably already in - use. In other words, the more format units we have, the more - obtuse the format units become. - * Several format units are nearly identical to others, having - only subtle differences. This makes understanding the exact - semantics of the format string even harder. - * The docstring is specified as a static C string, - which is mildly bothersome to read and edit. - * When adding a new parameter to a function using - ``PyArg_ParseTupleAndKeywords()``, it's necessary to - touch six different places in the code: [4]_ - - * Declaring the variable to store the argument. - * Passing in a pointer to that variable in the correct - spot in ``PyArg_ParseTupleAndKeywords()``, also passing - in any "length" or "converter" arguments in the correct - order. - * Adding the name of the argument in the correct spot - of the "keywords" array passed in to - ``PyArg_ParseTupleAndKeywords()``. - * Adding the format unit to the correct spot in the - format string. - * Adding the parameter to the prototype in the - docstring. - * Documenting the parameter in the docstring. - - * There is currently no mechanism for builtin functions - to provide their "signature" information (see - ``inspect.getfullargspec`` and ``inspect.Signature``). - Adding this information using a mechanism similar to - the existing ``PyArg_Parse`` functions would require - repeating ourselves yet again. - -The goal of Argument Clinic is to replace this API with a -mechanism inheriting none of these downsides: - - * You need specify each parameter only once. - * All information about a parameter is kept together in one place. - * For each parameter, you specify its type in C; - Argument Clinic handles the translation from - Python value into C value for you. 
- * Argument Clinic also allows for fine-tuning - of argument processing behavior with - highly-readable "flags", both per-parameter - and applying across the whole function. - * Docstrings are written in plain text. - * From this, Argument Clinic generates for you all - the mundane, repetitious code and data structures - CPython needs internally. Once you've specified - the interface, the next step is simply to write your - implementation using native C types. Every detail - of argument parsing is handled for you. - -Future goals of Argument Clinic include: - - * providing signature information for builtins, and - * speed improvements to the generated code. - -DSL Syntax Summary -================== - -The Argument Clinic DSL is specified as a comment -embedded in a C file, as follows. The "Example" column on the -right shows you sample input to the Argument Clinic DSL, -and the "Section" column on the left specifies what each line -represents in turn. - -:: - - +-----------------------+-----------------------------------------------------+ - | Section | Example | - +-----------------------+-----------------------------------------------------+ - | Clinic DSL start | /*[clinic] | - | Function declaration | module.function_name -> return_annotation | - | Function flags | flag flag2 flag3=value | - | Parameter declaration | type name = default | - | Parameter flags | flag flag2 flag3=value | - | Parameter docstring | Lorem ipsum dolor sit amet, consectetur | - | | adipisicing elit, sed do eiusmod tempor | - | Function docstring | Lorem ipsum dolor sit amet, consectetur adipisicing | - | | elit, sed do eiusmod tempor incididunt ut labore et | - | Clinic DSL end | [clinic]*/ | - | Clinic output | ... 
| - | Clinic output end | /*[clinic end output:]*/ | - +-----------------------+-----------------------------------------------------+ - - -General Behavior Of the Argument Clinic DSL -------------------------------------------- - -All lines support ``#`` as a line comment delimiter *except* docstrings. -Blank lines are always ignored. - -Like Python itself, leading whitespace is significant in the Argument Clinic -DSL. The first line of the "function" section is the declaration; -all subsequent lines at the same indent are function flags. Once you indent, -the first line is a parameter declaration; subsequent lines at that indent -are parameter flags. Indent one more time for the lines of the parameter -docstring. Finally, outdent back to the same level as the function -declaration for the function docstring. - -Function Declaration --------------------- - -The return annotation is optional. If skipped, the arrow ("``->``") must also be omitted. - -Parameter Declaration ---------------------- - -The "type" is a C type. If it's a pointer type, you must specify -a single space between the type and the "``*``", and zero spaces between -the "``*``" and the name. (e.g. "``PyObject *foo``", not "``PyObject* foo``") - -The "name" must be a legal C identifier. - -The "default" is a Python value. Default values are optional; -if not specified you must omit the equals sign too. Parameters -which don't have a default are implicitly required. The default -value is dynamically assigned, "live" in the generated C code, -and although it's specified as a Python value, it's translated -into a native C value in the generated C code. - -It's explicitly permitted to end the parameter declaration line -with a semicolon, though the semicolon is optional. This is -intended to allow directly cutting and pasting in declarations -from C code. However, the preferred style is without the semicolon. - - -Flags ------ - -"Flags" are like "``make -D``" arguments. They're unordered. 
Flags lines -are parsed much like the shell (specifically, using ``shlex.split()`` [5]_ ). -You can have as many flag lines as you like. Specifying a flag twice -is currently an error. - -Supported flags for functions: - -``basename`` - The basename to use for the generated C functions. - By default this is the name of the function from - the DSL, only with periods replaced by underscores. - -``positional-only`` - This function only supports positional parameters, - not keyword parameters. See `Functions With - Positional-Only Parameters`_ below. - -Supported flags for parameters: - -``bitwise`` - If the Python integer passed in is signed, copy the - bits directly even if it is negative. Only valid - for unsigned integer types. - -``converter`` - Backwards-compatibility support for parameter "converter" - functions. [6]_ The value should be the name of the converter - function in C. Only valid when the type of the parameter - is ``void *``. - -``default`` - The Python value to use in place of the parameter's actual - default in Python contexts. Specifically, when specified, - this value will be used for the parameter's default in the - docstring, and in the ``Signature``. (TBD: If the string is a - valid Python expression, renderable into a Python value - using ``eval()``, then the result of ``eval()`` on it will be used - as the default in the ``Signature``.) Ignored if there is no - default. - -``encoding`` - Encoding to use when encoding a Unicode string to a ``char *``. - Only valid when the type of the parameter is ``char *``. - -``group=`` - This parameter is part of a group of options that must either - all be specified or none specified. Parameters in the same - "group" must be contiguous. The value of the group flag - is the name used for the group variable, and therefore must - be legal as a C identifier. Only valid for functions - marked "``positional-only``"; see `Functions With - Positional-Only Parameters`_ below. 
- -``immutable`` - Only accept immutable values. - -``keyword-only`` - This parameter (and all subsequent parameters) is - keyword-only. Keyword-only parameters must also be - optional parameters. Not valid for positional-only functions. - -``length`` - This is an iterable type, and we also want its length. The - DSL will generate a second ``Py_ssize_t`` variable; - its name will be this parameter's name appended with - "``_length``". - -``nullable`` - ``None`` is a legal argument for this parameter. If ``None`` is - supplied on the Python side, the equivalent C argument will be - ``NULL``. Only valid for pointer types. - -``required`` - Normally any parameter that has a default value is - automatically optional. A parameter that has "required" - set will be considered required (non-optional) even if - it has a default value. The generated documentation - will also not show any default value. - -``types`` - Space-separated list of acceptable Python types for this - object. There are also four special-case types which - represent Python protocols: - - * buffer - * mapping - * number - * sequence - -``zeroes`` - This parameter is a string type, and its value should be - allowed to have embedded zeroes. Not valid for all - varieties of string parameters. - - -Python Code ------------ - -Argument Clinic also permits embedding Python code inside C files, -which is executed in-place when Argument Clinic processes the file. -Embedded code looks like this: - -:: - - /*[python] - - # this is python code! - print("/" + "* Hello world! *" + "/") - - [python]*/ - -Any Python code is valid. Python code sections in Argument Clinic -can also be used to modify Clinic's behavior at runtime; for example, -see `Extending Argument Clinic`_. - - -Output -====== - -Argument Clinic writes its output in-line in the C file, immediately after -the section of Clinic code. For "python" sections, the output is -everything printed using ``builtins.print``. 
For "clinic" sections, the -output is valid C code, including: - - * a ``#define`` providing the correct ``methoddef`` structure for the - function - * a prototype for the "impl" function--this is what you'll write to - implement this function - * a function that handles all argument processing, which calls your - "impl" function - * the definition line of the "impl" function - * and a comment indicating the end of output. - -The intention is that you will write the body of your impl function -immediately after the output--as in, you write a left-curly-brace -immediately after the end-of-output comment and write the implementation -of the builtin in the body there. (It's a bit strange at first--but oddly -convenient.) - -Argument Clinic will define the parameters of the impl function for you. -The function will take the "self" parameter passed in originally, all -the parameters you define, and possibly some extra generated parameters -("length" parameters; also "group" parameters, see next section). - -Argument Clinic also writes a checksum for the output section. This -is a valuable safety feature: if you modify the output by hand, Clinic -will notice that the checksum doesn't match, and will refuse to -overwrite the file. (You can force Clinic to overwrite with the "``-f``" -command-line argument; Clinic will also ignore the checksums when -using the "``-o``" command-line argument.) - - -Functions With Positional-Only Parameters -========================================= - -A significant fraction of Python builtins implemented in C use the -older positional-only API for processing arguments (``PyArg_ParseTuple()``). -In some instances, these builtins parse their arguments differently -based on how many arguments were passed in. This can provide some -bewildering flexibility: there may be groups of optional parameters, -which must either all be specified or none specified. And occasionally -these groups are on the *left!* (For example: ``curses.window.addch()``.) 
- -Argument Clinic supports these legacy use-cases with a special set -of flags. First, set the flag "``positional-only``" on the entire -function. Then, for every group of parameters that is collectively -optional, add a "``group=``" flag with a unique string to all the -parameters in that group. Note that these groups are permitted on -the right *or left* of any required parameters! However, all groups -(including the group of required parameters) must be contiguous. - -The impl function generated by Clinic will add an extra parameter for -every group, "``int _group``". This argument will be nonzero if -the group was specified on this call, and zero if it was not. - -Note that when operating in this mode, you cannot specify default -arguments. You can simulate defaults by putting parameters in -individual groups and detecting whether or not they were -specified--but generally speaking it's better to simply not -use "positional-only" where it isn't absolutely necessary. (TBD: It -might be possible to relax this restriction. But adding default -arguments into the mix of groups would seemingly make calculating which -groups are active a good deal harder.) - -Also, note that it's possible--even easy--to specify a set of groups -to a function such that there are several valid mappings from the number -of arguments to a valid set of groups. If this happens, Clinic will exit -with an error message. This should not be a problem, as positional-only -operation is only intended for legacy use cases, and all the legacy -functions using this quirky behavior should have unambiguous mappings. - - -Current Status -============== - -As of this writing, there is a working prototype implementation of -Argument Clinic available online. [7]_ The prototype implements -the syntax above, and generates code using the existing ``PyArg_Parse`` -APIs. It supports translating to all current format units except ``"w*"``. 
-Sample functions using Argument Clinic exercise all major features, -including positional-only argument parsing. - -Extending Argument Clinic -------------------------- - -The prototype also currently provides an experimental extension mechanism, -allowing adding support for new types on-the-fly. See ``Modules/posixmodule.c`` -in the prototype for an example of its use. - - -Notes / TBD -=========== - -* Guido proposed having the "function docstring" be hand-written inline, - in the middle of the output, something like this: - - :: - - /*[clinic] - ... prototype and parameters (including parameter docstrings) go here - [clinic]*/ - ... some output ... - /*[clinic docstring start]*/ - ... hand-edited function docstring goes here <-- you edit this by hand! - /*[clinic docstring end]*/ - ... more output - /*[clinic output end]*/ - - I tried it this way and don't like it--I think it's clumsy. I prefer that - everything you write goes in one place, rather than having an island of - hand-edited stuff in the middle of the DSL output. - -* Do we need to support tuple unpacking? (The "``(OOO)``" style format string.) - Boy I sure hope not. - -* What about Python functions that take no arguments? This syntax doesn't - provide for that. Perhaps a lone indented "None" should mean "no arguments"? - -* This approach removes some dynamism / flexibility. With the existing - syntax one could theoretically pass in different encodings at runtime for - the "``es``"/"``et``" format units. AFAICT CPython doesn't do this itself, - however it's possible external users might do this. (Trivia: there are no - uses of "``es``" exercised by regrtest, and all the uses of "``et``" - exercised are in socketmodule.c, except for one in _ssl.c. They're all - static, specifying the encoding ``"idna"``.) - -* Right now the "basename" flag on a function changes the ``#define methoddef`` name - too. Should it, or should the #define'd methoddef name always be - ``{module_name}_{function_name}`` ? 
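Before the references, the checksum safety feature described in the Output section can be sketched in a few lines. This is a simplified illustration of the idea only — the real Clinic comment format and hash choice may differ:

```python
import hashlib

def checksum(text):
    # Hash the generated section so a later run can detect hand edits.
    return hashlib.sha1(text.encode("utf-8")).hexdigest()[:12]

generated = "/* generated argument-parsing code ... */\n"
stamp = "/*[clinic end: checksum=%s]*/" % checksum(generated)

edited = generated.replace("...", "hand edit")   # someone touched the output
assert checksum(generated) in stamp              # pristine output: match
assert checksum(edited) not in stamp             # edited output: mismatch -> refuse to overwrite
```

When the recomputed checksum no longer matches the recorded one, the tool knows the output was modified by hand and can refuse to overwrite it (unless forced with ``-f``).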
- - -References -========== - -.. [1] ``PyArg_ParseTuple()``: - http://docs.python.org/3/c-api/arg.html#PyArg_ParseTuple - -.. [2] ``PyArg_ParseTupleAndKeywords()``: - http://docs.python.org/3/c-api/arg.html#PyArg_ParseTupleAndKeywords - -.. [3] ``PyArg_`` format units: - http://docs.python.org/3/c-api/arg.html#strings-and-buffers - -.. [4] Keyword parameters for extension functions: - http://docs.python.org/3/extending/extending.html#keyword-parameters-for-extension-functions - -.. [5] ``shlex.split()``: - http://docs.python.org/3/library/shlex.html#shlex.split - -.. [6] ``PyArg_`` "converter" functions, see ``"O&"`` in this section: - http://docs.python.org/3/c-api/arg.html#other-objects - -.. [7] Argument Clinic prototype: - https://bitbucket.org/larry/python-clinic/ - -Copyright -========= - -This document has been placed in the public domain. - - - -.. - Local Variables: - mode: indented-text - indent-tabs-mode: nil - sentence-end-double-space: t - fill-column: 70 - coding: utf-8 - End: -- Repository URL: http://hg.python.org/peps From python-checkins at python.org Mon Feb 25 22:42:03 2013 From: python-checkins at python.org (antoine.pitrou) Date: Mon, 25 Feb 2013 22:42:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_Remove_misleading_stateme?= =?utf-8?q?nt_=28patchcheck_doesn=27t_run_the_test_suite=29?= Message-ID: <3ZFGp33LMwzPM8@mail.python.org> http://hg.python.org/devguide/rev/dc3f8e2c34b7 changeset: 601:dc3f8e2c34b7 user: Antoine Pitrou date: Mon Feb 25 22:38:27 2013 +0100 summary: Remove misleading statement (patchcheck doesn't run the test suite) files: committing.rst | 2 -- 1 files changed, 0 insertions(+), 2 deletions(-) diff --git a/committing.rst b/committing.rst --- a/committing.rst +++ b/committing.rst @@ -34,8 +34,6 @@ * Has ``Misc/ACKS`` been updated? * Has ``configure`` been regenerated, if necessary? * Has ``pyconfig.h.in`` been regenerated, if necessary? -* Has the test suite been run? -* Are there any reference leaks? 
Note that the automated patch check can't actually *answer* all of these questions, and even if it could, it still wouldn't know whether or not -- Repository URL: http://hg.python.org/devguide From python-checkins at python.org Mon Feb 25 23:10:22 2013 From: python-checkins at python.org (brett.cannon) Date: Mon, 25 Feb 2013 23:10:22 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2317220=3A_two_fixe?= =?utf-8?q?s_for_changeset_2528e4aea338=2E?= Message-ID: <3ZFHQk2zK0zNXt@mail.python.org> http://hg.python.org/cpython/rev/d98a82f4c9bd changeset: 82385:d98a82f4c9bd user: Brett Cannon date: Mon Feb 25 17:10:11 2013 -0500 summary: Issue #17220: two fixes for changeset 2528e4aea338. First, because the mtime can exceed 4 bytes, make sure to mask it down to 4 bytes before getting its little-endian representation for writing out to a .pyc file. Two, cap an rsplit() call to 1 split, else can lead to too many values being returned for unpacking. files: Lib/importlib/_bootstrap.py | 4 +- Python/importlib.h | 8448 +++++++++++----------- 2 files changed, 4227 insertions(+), 4225 deletions(-) diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -48,7 +48,7 @@ XXX Temporary until marshal's long functions are exposed. 
""" - return int(x).to_bytes(4, 'little') + return (int(x) & 0xFFFFFFFF).to_bytes(4, 'little') # TODO: Expose from marshal @@ -74,7 +74,7 @@ return front, tail for x in reversed(path): if x in path_separators: - front, tail = path.rsplit(x) + front, tail = path.rsplit(x, maxsplit=1) return front, tail return '', path diff --git a/Python/importlib.h b/Python/importlib.h --- a/Python/importlib.h +++ b/Python/importlib.h [stripped] -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 00:20:35 2013 From: python-checkins at python.org (victor.stinner) Date: Tue, 26 Feb 2013 00:20:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MjIz?= =?utf-8?q?=3A_Fix_PyUnicode=5FFromUnicode=28=29_for_string_of_1_character?= =?utf-8?q?_outside?= Message-ID: <3ZFJzl1CZyzPhL@mail.python.org> http://hg.python.org/cpython/rev/c354afedb866 changeset: 82386:c354afedb866 branch: 3.3 parent: 82383:af570205b978 user: Victor Stinner date: Tue Feb 26 00:15:54 2013 +0100 summary: Issue #17223: Fix PyUnicode_FromUnicode() for string of 1 character outside the range U+0000-U+10ffff. files: Misc/NEWS | 3 +++ Objects/unicodeobject.c | 14 +++++++------- 2 files changed, 10 insertions(+), 7 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,9 @@ Core and Builtins ----------------- +- Issue #17223: Fix PyUnicode_FromUnicode() for string of 1 character outside + the range U+0000-U+10ffff. + - Issue #17275: Corrected class name in init error messages of the C version of BufferedWriter and BufferedRandom. 
diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -249,7 +249,7 @@ static PyObject * -_PyUnicode_FromUCS1(const unsigned char *s, Py_ssize_t size); +_PyUnicode_FromUCS1(const Py_UCS1 *s, Py_ssize_t size); static PyObject * _PyUnicode_FromUCS2(const Py_UCS2 *s, Py_ssize_t size); static PyObject * @@ -442,7 +442,7 @@ if (len == 1) { wchar_t ch = _PyUnicode_WSTR(unicode)[0]; - if (ch < 256) { + if ((Py_UCS4)ch < 256) { PyObject *latin1_char = get_latin1_char((unsigned char)ch); Py_DECREF(unicode); return latin1_char; @@ -1761,7 +1761,7 @@ /* Single character Unicode objects in the Latin-1 range are shared when using this constructor */ - if (size == 1 && *u < 256) + if (size == 1 && (Py_UCS4)*u < 256) return get_latin1_char((unsigned char)*u); /* If not empty and not single character, copy the Unicode data @@ -1869,7 +1869,7 @@ PyObject *unicode; if (size == 1) { #ifdef Py_DEBUG - assert(s[0] < 128); + assert((unsigned char)s[0] < 128); #endif return get_latin1_char(s[0]); } @@ -1911,7 +1911,7 @@ } static PyObject* -_PyUnicode_FromUCS1(const unsigned char* u, Py_ssize_t size) +_PyUnicode_FromUCS1(const Py_UCS1* u, Py_ssize_t size) { PyObject *res; unsigned char max_char; @@ -2974,8 +2974,8 @@ return NULL; } - if (ordinal < 256) - return get_latin1_char(ordinal); + if ((Py_UCS4)ordinal < 256) + return get_latin1_char((unsigned char)ordinal); v = PyUnicode_New(1, ordinal); if (v == NULL) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 00:20:36 2013 From: python-checkins at python.org (victor.stinner) Date: Tue, 26 Feb 2013 00:20:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_=28Merge_3=2E3=29_Issue_=2317223=3A_Fix_PyUnicode=5FFrom?= =?utf-8?q?Unicode=28=29_for_string_of_1_character?= Message-ID: <3ZFJzm43x9zPJh@mail.python.org> http://hg.python.org/cpython/rev/a4295ab52427 changeset: 
82387:a4295ab52427 parent: 82385:d98a82f4c9bd parent: 82386:c354afedb866 user: Victor Stinner date: Tue Feb 26 00:16:57 2013 +0100 summary: (Merge 3.3) Issue #17223: Fix PyUnicode_FromUnicode() for string of 1 character outside the range U+0000-U+10ffff. files: Misc/NEWS | 3 +++ Objects/unicodeobject.c | 14 +++++++------- 2 files changed, 10 insertions(+), 7 deletions(-) diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #17223: Fix PyUnicode_FromUnicode() for string of 1 character outside + the range U+0000-U+10ffff. + - Issue #17275: Corrected class name in init error messages of the C version of BufferedWriter and BufferedRandom. diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -241,7 +241,7 @@ static PyObject * -_PyUnicode_FromUCS1(const unsigned char *s, Py_ssize_t size); +_PyUnicode_FromUCS1(const Py_UCS1 *s, Py_ssize_t size); static PyObject * _PyUnicode_FromUCS2(const Py_UCS2 *s, Py_ssize_t size); static PyObject * @@ -432,7 +432,7 @@ if (len == 1) { wchar_t ch = _PyUnicode_WSTR(unicode)[0]; - if (ch < 256) { + if ((Py_UCS4)ch < 256) { PyObject *latin1_char = get_latin1_char((unsigned char)ch); Py_DECREF(unicode); return latin1_char; @@ -1757,7 +1757,7 @@ /* Single character Unicode objects in the Latin-1 range are shared when using this constructor */ - if (size == 1 && *u < 256) + if (size == 1 && (Py_UCS4)*u < 256) return get_latin1_char((unsigned char)*u); /* If not empty and not single character, copy the Unicode data @@ -1865,7 +1865,7 @@ PyObject *unicode; if (size == 1) { #ifdef Py_DEBUG - assert(s[0] < 128); + assert((unsigned char)s[0] < 128); #endif return get_latin1_char(s[0]); } @@ -1907,7 +1907,7 @@ } static PyObject* -_PyUnicode_FromUCS1(const unsigned char* u, Py_ssize_t size) +_PyUnicode_FromUCS1(const Py_UCS1* u, Py_ssize_t size) { PyObject *res; unsigned char max_char; @@ -2792,8 
+2792,8 @@ return NULL; } - if (ordinal < 256) - return get_latin1_char(ordinal); + if ((Py_UCS4)ordinal < 256) + return get_latin1_char((unsigned char)ordinal); v = PyUnicode_New(1, ordinal); if (v == NULL) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 00:28:53 2013 From: python-checkins at python.org (victor.stinner) Date: Tue, 26 Feb 2013 00:28:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MjIz?= =?utf-8?q?=3A_array_module=3A_Fix_a_crasher_when_converting_an_array_cont?= =?utf-8?q?aining?= Message-ID: <3ZFK9K5XmQzNvn@mail.python.org> http://hg.python.org/cpython/rev/ebeed44702ec changeset: 82388:ebeed44702ec branch: 3.3 parent: 82386:c354afedb866 user: Victor Stinner date: Tue Feb 26 00:27:38 2013 +0100 summary: Issue #17223: array module: Fix a crasher when converting an array containing invalid characters (outside range [U+0000; U+10ffff]) to Unicode: repr(array), str(array) and array.tounicode(). Patch written by Manuel Jacob. files: Lib/test/test_array.py | 6 ++++++ Misc/NEWS | 4 ++++ Modules/arraymodule.c | 2 ++ 3 files changed, 12 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_array.py b/Lib/test/test_array.py --- a/Lib/test/test_array.py +++ b/Lib/test/test_array.py @@ -1069,6 +1069,12 @@ self.assertRaises(TypeError, a.fromunicode) + def test_issue17223(self): + # this used to crash + a = array.array('u', b'\xff' * 4) + self.assertRaises(ValueError, a.tounicode) + self.assertRaises(ValueError, str, a) + class NumberTest(BaseTest): def test_extslice(self): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -12,6 +12,10 @@ Core and Builtins ----------------- +- Issue #17223: array module: Fix a crasher when converting an array containing + invalid characters (outside range [U+0000; U+10ffff]) to Unicode: + repr(array), str(array) and array.tounicode(). Patch written by Manuel Jacob. 
+ - Issue #17223: Fix PyUnicode_FromUnicode() for string of 1 character outside the range U+0000-U+10ffff. diff --git a/Modules/arraymodule.c b/Modules/arraymodule.c --- a/Modules/arraymodule.c +++ b/Modules/arraymodule.c @@ -2180,6 +2180,8 @@ } else { v = array_tolist(a, NULL); } + if (v == NULL) + return NULL; s = PyUnicode_FromFormat("array('%c', %R)", (int)typecode, v); Py_DECREF(v); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 00:28:55 2013 From: python-checkins at python.org (victor.stinner) Date: Tue, 26 Feb 2013 00:28:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_=28Merge_3=2E3=29_Issue_=2317223=3A_array_module=3A_Fix_?= =?utf-8?q?a_crasher_when_converting_an_array?= Message-ID: <3ZFK9M1BB3zNvn@mail.python.org> http://hg.python.org/cpython/rev/381de621ff6a changeset: 82389:381de621ff6a parent: 82387:a4295ab52427 parent: 82388:ebeed44702ec user: Victor Stinner date: Tue Feb 26 00:27:56 2013 +0100 summary: (Merge 3.3) Issue #17223: array module: Fix a crasher when converting an array containing invalid characters (outside range [U+0000; U+10ffff]) to Unicode: repr(array), str(array) and array.tounicode(). Patch written by Manuel Jacob. 
files: Lib/test/test_array.py | 6 ++++++ Misc/NEWS | 4 ++++ Modules/arraymodule.c | 2 ++ 3 files changed, 12 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_array.py b/Lib/test/test_array.py --- a/Lib/test/test_array.py +++ b/Lib/test/test_array.py @@ -1069,6 +1069,12 @@ self.assertRaises(TypeError, a.fromunicode) + def test_issue17223(self): + # this used to crash + a = array.array('u', b'\xff' * 4) + self.assertRaises(ValueError, a.tounicode) + self.assertRaises(ValueError, str, a) + class NumberTest(BaseTest): def test_extslice(self): diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,10 @@ Core and Builtins ----------------- +- Issue #17223: array module: Fix a crasher when converting an array containing + invalid characters (outside range [U+0000; U+10ffff]) to Unicode: + repr(array), str(array) and array.tounicode(). Patch written by Manuel Jacob. + - Issue #17223: Fix PyUnicode_FromUnicode() for string of 1 character outside the range U+0000-U+10ffff. diff --git a/Modules/arraymodule.c b/Modules/arraymodule.c --- a/Modules/arraymodule.c +++ b/Modules/arraymodule.c @@ -2177,6 +2177,8 @@ } else { v = array_tolist(a, NULL); } + if (v == NULL) + return NULL; s = PyUnicode_FromFormat("array('%c', %R)", (int)typecode, v); Py_DECREF(v); -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 03:33:34 2013 From: python-checkins at python.org (ezio.melotti) Date: Tue, 26 Feb 2013 03:33:34 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_=2314468=3A_document_the_?= =?utf-8?q?use_of_the_share_extension_as_the_suggested_approach_for?= Message-ID: <3ZFPGQ1zBHzN3d@mail.python.org> http://hg.python.org/devguide/rev/a50e537c5914 changeset: 602:a50e537c5914 user: Ezio Melotti date: Tue Feb 26 04:29:58 2013 +0200 summary: #14468: document the use of the share extension as the suggested approach for core developers. 
files: committing.rst | 409 +++++++++++++++++------------------- 1 files changed, 191 insertions(+), 218 deletions(-) diff --git a/committing.rst b/committing.rst --- a/committing.rst +++ b/committing.rst @@ -42,6 +42,61 @@ making a complete patch. +Handling Others' Code +--------------------- + +As a core developer you will occasionally want to commit a patch created by +someone else. When doing so you will want to make sure of some things. + +First, make sure the patch is in a good state. Both :ref:`patch` and +:ref:`helptriage` +explain what is to be expected of a patch. Typically patches that get cleared by +triagers are good to go except maybe lacking ``Misc/ACKS`` and ``Misc/NEWS`` +entries. + +Second, make sure the patch does not break backwards-compatibility without a +good reason. This means :ref:`running the test suite ` to make sure +everything still passes. It also means that if semantics do change there must +be a good reason for the breakage of code the change will cause (and it +**will** break someone's code). If you are unsure if the breakage is worth it, +ask on python-dev. + +Third, ensure the patch is attributed correctly by adding the contributor's +name to ``Misc/ACKS`` if they aren't already there (and didn't add themselves +in their patch) and by mentioning "Patch by " in the ``Misc/NEWS`` entry +and the checkin message. If the patch has been heavily modified then "Initial +patch by " is an appropriate alternate wording. + +If you omit correct attribution in the initial checkin, then update ``ACKS`` +and ``NEWS`` in a subsequent checkin (don't worry about trying to fix the +original checkin message in that case). + + +Contributor Licensing Agreements +-------------------------------- + +It's unlikely bug fixes will require a `Contributor Licensing Agreement`_ +unless they touch a *lot* of code. 
For new features, it is preferable to +ask that the contributor submit a signed CLA to the PSF as the associated +comments, docstrings and documentation are far more likely to reach a +copyrightable standard. + +For Python sprints we now recommend collecting CLAs as a matter of course, as +the folks leading the sprints can then handle the task of scanning (or otherwise +digitising) the forms and passing them on to the PSF secretary. (Yes, we +realise this process is quite archaic. Yes, we're in the process of fixing +it. No, it's not fixed yet). + +As discussed on the PSF Contribution_ page, it is the CLA itself that gives +the PSF the necessary relicensing rights to redistribute contributions under +the Python license stack. This is an additional permission granted above and +beyond the normal permissions provided by the chosen open source license. + +.. _Contribution: http://www.python.org/psf/contrib/ +.. _Contributor Licensing Agreement: + http://www.python.org/psf/contrib/contrib-form/ + + NEWS Entries ------------ @@ -129,7 +184,7 @@ automatically closed as "fixed". Working with Mercurial_ ------------------------ +======================= As a core developer, the ability to push changes to the official Python repositories means you have to be more careful with your workflow: @@ -201,253 +256,171 @@ .. _eol extension: http://mercurial.selenic.com/wiki/EolExtension -Handling Others' Code ---------------------- +Clones Setup +------------ -As a core developer you will occasionally want to commit a patch created by -someone else. When doing so you will want to make sure of some things. +There are several possible ways to set up your Mercurial clone(s). If you are +a core developer, you often need to work on the different branches, so the best +approach is to have a separate clone/directory for each active branch. If you +are a contributor, having a single clone might be enough. -First, make sure the patch is in a good state. 
Both :ref:`patch` and -:ref:`helptriage` -explain what is to be expected of a patch. Typically patches that get cleared by -triagers are good to go except maybe lacking ``Misc/ACKS`` and ``Misc/NEWS`` -entries. +Single Clone Approach +''''''''''''''''''''' -Second, make sure the patch does not break backwards-compatibility without a -good reason. This means :ref:`running the test suite ` to make sure -everything still passes. It also means that if semantics do change there must -be a good reason for the breakage of code the change will cause (and it -**will** break someone's code). If you are unsure if the breakage is worth it, -ask on python-dev. +This approach has the advantage of being simpler because it requires a single +clone/directory, but, on the other hand, it requires you to recompile Python +every time you need to switch branch. For this reason, this approach is not +suggested to core developers, but it's usually suitable for contributors. -Third, ensure the patch is attributed correctly by adding the contributor's -name to ``Misc/ACKS`` if they aren't already there (and didn't add themselves -in their patch) and by mentioning "Patch by " in the ``Misc/NEWS`` entry -and the checkin message. If the patch has been heavily modified then "Initial -patch by " is an appropriate alternate wording. +See :ref:`checkout` to find information about cloning and switching branches. -If you omit correct attribution in the initial checkin, then update ``ACKS`` -and ``NEWS`` in a subsequent checkin (don't worry about trying to fix the -original checkin message in that case). +Multiple Clones Approach +'''''''''''''''''''''''' +This approach requires you to keep a separate clone/directory for each active +branch, but, on the other hand, it doesn't require you to switch branches and +recompile Python, so it saves times while merging and testing a patch on the +different branches. For this reason, this approach is suggested to core +developers. 
-Contributor Licensing Agreements --------------------------------- +The easiest way to do this is by using the `share extension`_, that can be +enabled by adding the following lines to your ``~/.hgrc``:: -It's unlikely bug fixes will require a `Contributor Licensing Agreement`_ -unless they touch a *lot* of code. For new features, it is preferable to -ask that the contributor submit a signed CLA to the PSF as the associated -comments, docstrings and documentation are far more likely to reach a -copyrightable standard. + [extensions] + share = -For Python sprints we now recommend collecting CLAs as a matter of course, as -the folks leading the sprints can then handle the task of scanning (or otherwise -digitising) the forms and passing them on to the PSF secretary. (Yes, we -realise this process is quite archaic. Yes, we're in the process of fixing -it. No, it's not fixed yet). +Once you have :ref:`cloned the hg.python.org/cpython repository ` +you can create the other shared clones using:: -As discussed on the PSF Contribution_ page, it is the CLA itself that gives -the PSF the necessary relicensing rights to redistribute contributions under -the Python license stack. This is an additional permission granted above and -beyond the normal permissions provided by the chosen open source license. + $ hg share cpython 2.7 # create a new shared clone + $ cd 2.7 # enter the directory + $ hg up 2.7 # switch to the 2.7 branch -.. _Contribution: http://www.python.org/psf/contrib/ -.. _Contributor Licensing Agreement: - http://www.python.org/psf/contrib/contrib-form/ +You can then repeat the same operation for the other active branches. +This will create different clones/directories that share the same history. 
+This means that once you commit or pull new changesets in one of the clones, +they will be immediately available in all the other clones (note however that +while you only need to use ``hg pull`` once, you still need to use ``hg up`` +in each clone to update its working copy). +In order to apply a patch, commit, and merge it on all the branches, you can do +as follow:: -Forward-Porting + $ cd 2.7 + $ hg pull ssh://hg at hg.python.org/cpython + $ hg up + $ hg import --no-c http://bugs.python.org/url/to/the/patch.diff + $ # review, run tests, run `make patchcheck` + $ hg ci -m '#12345: fix some issue.' + $ # switch to 3.2 and port the changeset using `hg graft` + $ cd ../3.2 + $ hg up + $ hg graft 2.7 + $ # switch to 3.3, merge, and commit + $ cd ../3.3 + $ hg up + $ hg merge 3.2 + $ hg ci -m '#12345: merge with 3.2.' + $ # switch to 3.x, merge, commit, and push everything + $ cd ../3.x + $ hg up + $ hg merge 3.3 + $ hg ci -m '#12345: merge with 3.3.' + $ hg push ssh://hg at hg.python.org/cpython + +If you don't want to specify ssh://hg at hg.python.org/cpython every time, you +should add to the ``.hg/hgrc`` files of the clones:: + + [paths] + default = ssh://hg at hg.python.org/cpython + +Unless noted otherwise, the rest of the page will assume you are using the +multiple clone approach, and explain in more detail these basic steps. + +.. _share extension: http://mercurial.selenic.com/wiki/ShareExtension + + +Active branches --------------- -If the patch is a bugfix and it does not break -backwards-compatibility *at all*, then it should be applied to the oldest -branch applicable and forward-ported until it reaches the in-development branch -of Python (for example, first in ``3.2``, then in ``3.3`` and finally in -``default``). A forward-port instead of a back-port is preferred as it allows -the :abbr:`DAG (directed acyclic graph)` used by hg to work with the movement of -the patch through the codebase instead of against it. 
+If you do ``hg branches`` you will see a list of branches. ``default`` is the +in-development branch, and is the only branch that receives new features. The +other branches only receive bug fixes (``2.7``, ``3.2``, ``3.3``), or security +fixes (``2.6``, ``3.1``). Depending on what you are committing (feature, bug +fix, or security fix), you should commit to the oldest branch applicable, and +then forward-port until the in-development branch. -Note that this policy applies only within a major version - the ``2.7`` branch -is an independent thread of development, and should *never* be merged to any -of the ``3.x`` branches or ``default``. If a bug fix applies to both ``2.x`` -and ``3.x``, the two additions are handled as separate commits. It doesn't -matter which is updated first, but any associated tracker issues should be -closed only after all affected versions have been modified in the main -repository. -.. warning:: - Even when porting an already committed patch, you should **still** check the +Merging order +------------- + +There are two separate lines of development: one for Python 2 (the ``2.x`` +branches) and one for Python 3 (the ``3.x`` branches and ``default``). +You should *never* merge between the two major versions (2.x and 3.x) --- +only between minor versions (e.g. 3.x->3.y). The merge always happens from +the oldest applicable branch to the newest branch within the same major +Python version. + + +Merging between different branches (within the same major version) +------------------------------------------------------------------ + +Assume that Python 3.4 is the current in-development version of Python and that +you have a patch that should also be applied to Python 3.3. To properly port +the patch to both versions of Python, you should first apply the patch to +Python 3.3:: + + cd 3.3 + hg import --no-commit patch.diff + # Compile; run the test suite + hg ci -m '#12345: fix some issue.' 
+ +Then you can switch to the ``3.x`` clone, merge, run the tests and commit:: + + cd ../3.x + hg merge 3.3 + # Fix any conflicts; compile; run the test suite + hg ci -m '#12345: merge with 3.3.' + +If you are not using the share extension, you will need to use +``hg pull ../3.3`` before being able to merge. + +.. note:: + Even when porting an already committed patch, you should *still* check the test suite runs successfully before committing the patch to another branch. Subtle differences between two branches sometimes make a patch bogus if ported without any modifications. -Porting Within a Major Version -'''''''''''''''''''''''''''''' +Porting changesets between the two major Python versions (2.x and 3.x) +---------------------------------------------------------------------- -Assume that Python 3.4 is the current in-development version of Python and that -you have a patch that should also be applied to Python 3.3. To properly port -the patch to both versions of Python, you should first apply the patch to -Python 3.3:: +Assume you just committed something on ``2.7``, and want to port it to ``3.2``. +You can use ``hg graft`` as follow:: - hg update 3.3 - hg import --no-commit patch.diff - # Compile; run the test suite - hg commit + cd ../3.2 + hg graft 2.7 -With the patch now committed, you want to merge the patch up into Python 3.4. -This should be done *before* pushing your changes to hg.python.org, so that -the branches are in sync on the public repository. Assuming you are doing -all of your work in a single clone, do:: +This will port the latest changeset committed in the 2.7 clone to the 3.2 clone. +``hg graft`` always commits automatically, except in case of conflicts, when +you have to resolve them and run ``hg graft --continue`` afterwards. +Instead of the branch name you can also specify a changeset id, and you can +also graft changesets from 3.x to 2.7. 
-   hg update default
-   hg merge 3.3
-   # Fix any conflicts; compile; run the test suite
-   hg commit
+On older versions of Mercurial where ``hg graft`` is not available, you can use::

-.. index:: null merging
+   cd ../3.2
+   hg export 2.7 | hg import -

-.. note::
-   If the patch should *not* be ported from Python 3.3 to Python 3.4, you must
-   also make this explicit by doing a *null merge*: merge the changes but
-   revert them before committing::
+
+The result will be the same, but in case of conflict this will create ``.rej``
+files rather than using Mercurial merge capabilities.

-      hg update default
-      hg merge 3.3
-      hg revert -ar default
-      hg resolve -am  # needed only if the merge created conflicts
-      hg commit
+A third option is to manually apply the patch on ``3.2``.  This is convenient
+when there are too many differences with ``2.7`` or when there is already a
+specific patch for ``3.2``.
- -Another method is using "export" and "import": this has the advantage that -you can run the test suite before committing, but the disadvantage that -in case of conflicts, you will only get ``.rej`` files, not inline merge -markers. :: - - hg update 2.7 - hg export a7df1a869e4a | hg import --no-commit - - # Compile; run the test suite - hg commit - - -Using several working copies -'''''''''''''''''''''''''''' - -If you often work on bug fixes, you may want to avoid switching branches -in your local repository. The reason is that rebuilding takes time -when many files are updated. Instead, it is desirable to use a separate -working copy for each maintenance branch. - -There are various ways to achieve this, but here is a possible scenario: - -* First do a clone of the public repository, whose working copy will be - updated to the ``default`` branch:: - - $ hg clone ssh://hg at hg.python.org/cpython py3k - -* Then clone it to create another local repository which is then used to - checkout branch 3.3:: - - $ hg clone py3k py3.3 - $ cd py3.3 - $ hg update 3.3 - -* Then clone it to create another local repository which is then used to - checkout branch 3.2:: - - $ hg clone py3.3 py3.2 - $ cd py3.2 - $ hg update 3.2 - -* If you also need the 3.1 branch to work on security fixes, you can similarly - clone it, either from the ``py3.2`` or the ``py3k`` repository. It is - suggested, though, that you clone from ``py3.2`` as that it will force you - to push changes back up your clone chain so that you make sure to port - changes to all proper versions. - -* You can also clone a 2.7-dedicated repository from the ``py3k`` branch:: - - $ hg clone py3k py2.7 - $ cd py2.7 - $ hg update 2.7 - -Given this arrangement of local repositories, pushing from the ``py3.2`` -repository will update the ``py3.3`` repository, where you can then merge your -3.2 changes into the 3.3 branch. In turn, pushing changes from the ``py3.3`` -repository will update the ``py3k`` repository. 
Finally, once you have -merged (and tested!) your ``3.3`` changes into the ``default`` branch, pushing -from the ``py3k`` repository will publish your changes in the public -repository. - -When working with this kind of arrangement, it can be useful to have a simple -script that runs the necessary commands to update all branches with upstream -changes:: - - cd ~/py3k - hg pull -u - cd ~/py3.3 - hg pull -u - cd ~/py3.2 - hg pull -u - cd ~/py2.7 - hg pull -u - -Only the first of those updates will touch the network - the latter two will -just transfer the changes locally between the relevant repositories. - -If you want, you can later :ref:`change the flow of changes ` implied -by the cloning of repositories. For example, you may choose to add a separate -``sandbox`` repository for experimental code (potentially published somewhere -other than python.org) or an additional pristine repository that is -never modified locally. - - -Differences with ``svnmerge`` -''''''''''''''''''''''''''''' - -If you are coming from Subversion, you might be surprised by Mercurial -:ref:`merges `. -Despite its name, ``svnmerge`` is different from ``hg merge``: while ``svnmerge`` -allows to cherry-pick individual revisions, ``hg merge`` can only merge whole -lines of development in the repository's :abbr:`DAG (directed acyclic graph)`. -Therefore, ``hg merge`` might force you to review outstanding changesets by -someone else that haven't been merged yet. - - -.. seealso:: - `Merging work - `_, - in `Mercurial: The Definitive Guide `_. +.. warning:: + Never use ``hg merge`` to port changes between 2.x and 3.x (or vice versa). 
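The warning above boils down to one invariant: merges stay within a single major version and always flow from an older branch to a newer one. A hedged Python sketch of that rule (the ordered branch list is an assumption for the example, not queried from Mercurial):

```python
# Illustrative, ordered branch list; "default" is the in-development
# branch of the Python 3 line.  An assumption for this sketch only.
ORDER = ("2.6", "2.7", "3.1", "3.2", "3.3", "default")

def major(branch):
    """Major line a branch belongs to; "default" counts as Python 3."""
    return 3 if branch == "default" else int(branch.split(".")[0])

def merge_allowed(source, dest, order=ORDER):
    """True only for merges within one major line, from older to newer."""
    if major(source) != major(dest):
        return False  # never ``hg merge`` between 2.x and 3.x
    return order.index(source) < order.index(dest)
```

So ``merge_allowed("3.2", "3.3")`` holds, while both ``merge_allowed("2.7", "3.2")`` (crosses the major-version boundary) and ``merge_allowed("3.3", "3.2")`` (wrong direction) do not.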
Long-term development of features -- Repository URL: http://hg.python.org/devguide From python-checkins at python.org Tue Feb 26 03:33:36 2013 From: python-checkins at python.org (ezio.melotti) Date: Tue, 26 Feb 2013 03:33:36 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_=2314468=3A_regroup_the_?= =?utf-8?q?=22Version_control=22_FAQs_in_two_sections=3A_=22for_everyone?= =?utf-8?q?=22_and?= Message-ID: <3ZFPGS05qRzPJh@mail.python.org> http://hg.python.org/devguide/rev/3e213eaf85a6 changeset: 603:3e213eaf85a6 user: Ezio Melotti date: Tue Feb 26 04:32:10 2013 +0200 summary: #14468: regroup the "Version control" FAQs in two sections: "for everyone" and "for committers". files: faq.rst | 437 ++++++++++++++++++++++--------------------- 1 files changed, 224 insertions(+), 213 deletions(-) diff --git a/faq.rst b/faq.rst --- a/faq.rst +++ b/faq.rst @@ -132,8 +132,13 @@ Version Control =============== +For everyone +------------ + +The following FAQs are intended for both core developers and contributors. + Where can I learn about the version control system used, Mercurial (hg)? -------------------------------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' Mercurial_'s (also known as ``hg``) official web site is at http://mercurial.selenic.com/. A book on Mercurial published by @@ -158,7 +163,7 @@ I already know how to use Git, can I use that instead? ------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''' While the main workflow for core developers requires Mercurial, if you just want to generate patches with ``git diff`` and post them to the @@ -182,10 +187,10 @@ What do I need to use Mercurial? -------------------------------------------------------------------------------- +'''''''''''''''''''''''''''''''' UNIX -''''''''''''''''''' +^^^^ First, you need to `download Mercurial`_. 
Most UNIX-based operating systems have binary packages available. Most package management systems also @@ -209,7 +214,7 @@ Windows -''''''''''''''''''' +^^^^^^^ The recommended option on Windows is to `download TortoiseHg`_ which integrates with Windows Explorer and also bundles the command line client @@ -236,7 +241,7 @@ What's a working copy? What's a repository? -------------------------------------------- +''''''''''''''''''''''''''''''''''''''''''' Mercurial is a "distributed" version control system. This means that each participant, even casual contributors, download a complete copy (called a @@ -259,7 +264,7 @@ Which branches are in my local repository? ------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''' Typing ``hg branches`` displays the open branches in your local repository:: @@ -272,7 +277,7 @@ 2.6 76213:f130ce67387d (inactive) Why are some branches marked "inactive"? ----------------------------------------- +'''''''''''''''''''''''''''''''''''''''' Assuming you get the following output:: @@ -287,7 +292,7 @@ .. _hg-current-branch: Which branch is currently checked out in my working copy? ---------------------------------------------------------- +''''''''''''''''''''''''''''''''''''''''''''''''''''''''' Use:: @@ -307,7 +312,7 @@ .. _hg-switch-branches: How do I switch between branches inside my working copy? --------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''''' Simply use ``hg update`` to checkout another branch in the current directory:: @@ -328,7 +333,7 @@ I want to keep a separate working copy per development branch, is it possible? 
------------------------------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' Just clone your local repository and update each clone to a different branch:: @@ -343,21 +348,10 @@ changes, ``hg update`` will update to the head of the *current branch*. -How do I avoid repeated pulls and pushes between my local repositories? ------------------------------------------------------------------------ - -The "`share extension`_" allows you to share a single local repository -between several working copies: each commit you make in a working copy will -be immediately available in other working copies, even though they might -be checked out on different branches. - -.. _share extension: http://mercurial.selenic.com/wiki/ShareExtension - - .. _hg-paths: How do I link my local repository to a particular remote repository? -------------------------------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' Your local repository is linked by default to the remote repository it was *cloned* from. If you created it from scratch, however, it is not linked @@ -375,7 +369,7 @@ How do I create a shorthand alias for a remote repository? -------------------------------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''''''' In your global ``.hgrc`` file add a section similar to the following:: @@ -392,7 +386,7 @@ How do I compare my local repository to a remote repository? -------------------------------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' To display the list of changes that are in your local repository, but not in the remote, use:: @@ -420,7 +414,7 @@ How do I update my local repository to be in sync with a remote repository? 
-------------------------------------------------------------------------------- +''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' Run:: @@ -436,7 +430,7 @@ How do I update my working copy with the latest changes? --------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''''''''' Do:: @@ -455,7 +449,7 @@ .. _hg-local-workflow: How do I apply a patch? -------------------------------------------------------------------------------- +''''''''''''''''''''''' If you want to try out or review a patch generated using Mercurial, do:: @@ -493,7 +487,7 @@ .. _merge-patch: How do I solve conflicts when applying a patch fails? ------------------------------------------------------ +''''''''''''''''''''''''''''''''''''''''''''''''''''' The standard ``patch`` command, as well as ``hg import``, will produce unhelpful ``*.rej`` files when it fails applying parts of a patch. @@ -514,7 +508,7 @@ How do I add a file or directory to the repository? -------------------------------------------------------------------------------- +''''''''''''''''''''''''''''''''''''''''''''''''''' Simply specify the path to the file or directory to add and run:: @@ -536,7 +530,7 @@ What's the best way to split a file into several files? -------------------------------------------------------------------------------- +''''''''''''''''''''''''''''''''''''''''''''''''''''''' To split a file into several files (e.g. a module converted to a package or a long doc file divided in two separate documents) use ``hg copy``:: @@ -553,10 +547,200 @@ related. +How do I delete a file or directory in the repository? +'''''''''''''''''''''''''''''''''''''''''''''''''''''' + +Specify the path to be removed with:: + + hg remove PATH + +This will remove the file or the directory from your working copy; you will +have to :ref:`commit your changes ` for the removal to be recorded +in your local repository. + + +.. 
_hg-status:
+
+What files are modified in my working copy?
+'''''''''''''''''''''''''''''''''''''''''''
+
+Running::
+
+   hg status
+
+will list any pending changes in the working copy.  These changes will get
+committed to the local repository if you issue an ``hg commit`` without
+specifying any path.
+
+Some
+key indicators that can appear in the first column of output are:
+
+   = ===========================
+   A Scheduled to be added
+   R Scheduled to be removed
+   M Modified locally
+   ? Not under version control
+   = ===========================
+
+If you want a line-by-line listing of the differences, use::
+
+   hg diff
+
+
+How do I revert a file I have modified back to the version in the repository?
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Running::
+
+   hg revert PATH
+
+will revert ``PATH`` to its version in the repository, throwing away any
+changes you made locally.  If you run::
+
+   hg revert -a
+
+from the root of your working copy it will recursively restore everything
+to match up with the repository.
+
+
+How do I find out who edited or what revision changed a line last?
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+You want::
+
+   hg annotate PATH
+
+This will output to stdout every line of the file along with which revision
+last modified that line.  When you have the revision number, it is then
+easy to :ref:`display it in detail <hg-log-rev>`.
+
+
+.. _hg-log:
+
+How can I see a list of log messages for a file or specific revision?
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+To see the history of changes for a specific file, run::
+
+   hg log -v [PATH]
+
+That will list all messages of revisions which modified the file specified
+in ``PATH``.  If ``PATH`` is omitted, all revisions are listed.
+
+If you want to display line-by-line differences for each revision as well,
+add the ``-p`` option::
+
+   hg log -vp [PATH]
+
+..
_hg-log-rev:
+
+If you want to view the differences for a specific revision, run::
+
+   hg log -vp -r <revision>
+
+
+How can I see the changeset graph in my repository?
+'''''''''''''''''''''''''''''''''''''''''''''''''''
+
+In Mercurial repositories, changesets don't form a simple list, but rather
+a graph: every changeset has one or two parents (it's called a merge changeset
+in the latter case), and can have any number of children.
+
+The graphlog_ extension is very useful for examining the structure of the
+changeset graph.  It is bundled with Mercurial.
+
+Graphical tools, such as TortoiseHG, will display the changeset graph
+by default.
+
+.. _graphlog: http://mercurial.selenic.com/wiki/GraphlogExtension
+
+
+How do I update to a specific release tag?
+''''''''''''''''''''''''''''''''''''''''''
+
+Run::
+
+   hg tags
+
+to get a list of tags.  To update your working copy to a specific tag, use::
+
+   hg update <tag>
+
+
+How do I find which changeset introduced a bug or regression?
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+``hg bisect``, as the name indicates, helps you do a bisection of a range of
+changesets.
+
+You need two changesets to start the search: one that is "good"
+(doesn't have the bug), and one that is "bad" (has the bug).  Usually, you
+have just noticed the bug in your working copy, so you can start with::
+
+   hg bisect --bad
+
+Then you must find a changeset that doesn't have the bug.  You can conveniently
+choose a faraway changeset (for example a former release), and check that it
+is indeed "good".  Then type::
+
+   hg bisect --good
+
+Mercurial will automatically bisect so as to narrow the range of possible
+culprits, until a single changeset is isolated.  Each time Mercurial presents
+you with a new changeset, re-compile Python and run the offending test, for
+example::
+
+   make -j2
+   ./python -m test -uall test_sometest
+
+Then, type either ``hg bisect --good`` or ``hg bisect --bad`` depending on
+whether the test succeeded or failed.
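The narrowing that ``hg bisect`` automates is an ordinary binary search. A self-contained Python model (no Mercurial involved; revisions are modelled as integers and the ``is_bad`` predicate plays the role of re-compiling and running the failing test), assuming a single regression point so the predicate is monotone:

```python
def bisect_first_bad(revisions, is_bad):
    """Binary search for the first bad revision in an ordered history."""
    lo, hi = 0, len(revisions) - 1  # revisions[hi] is known to be bad
    steps = 0
    while lo < hi:
        mid = (lo + hi) // 2
        steps += 1
        if is_bad(revisions[mid]):
            hi = mid       # the culprit is at mid or earlier
        else:
            lo = mid + 1   # the culprit is strictly after mid
    return revisions[lo], steps
```

With 100 candidate revisions this isolates the culprit in at most seven probes, which is why marking a faraway "good" changeset first pays off: each extra doubling of the search range costs only one more compile-and-test cycle.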
+ + +How come feature XYZ isn't available in Mercurial? +'''''''''''''''''''''''''''''''''''''''''''''''''' + +Mercurial comes with many bundled extensions which can be explicitly enabled. +You can get a list of them by typing ``hg help extensions``. Some of these +extensions, such as ``color``, can prettify output; others, such as ``fetch`` +or ``graphlog``, add new Mercurial commands. + +There are also many `configuration options`_ to tweak various aspects of the +command line and other Mercurial behaviour; typing `man hgrc`_ displays +their documentation inside your terminal. + +In the end, please refer to the Mercurial `wiki`_, especially the pages about +`extensions`_ (including third-party ones) and the `tips and tricks`_. + + +.. _man hgrc: http://www.selenic.com/mercurial/hgrc.5.html +.. _wiki: http://mercurial.selenic.com/wiki/ +.. _extensions: http://mercurial.selenic.com/wiki/UsingExtensions +.. _tips and tricks: http://mercurial.selenic.com/wiki/TipsAndTricks +.. _configuration options: http://www.selenic.com/mercurial/hgrc.5.html + + +For core developers +------------------- + +These FAQs are intended mainly for core developers. + + +How do I avoid repeated pulls and pushes between my local repositories? +''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' + +The "`share extension`_" allows you to share a single local repository +between several working copies: each commit you make in a working copy will +be immediately available in other working copies, even though they might +be checked out on different branches. + +.. _share extension: http://mercurial.selenic.com/wiki/ShareExtension + + .. _hg-commit: How do I commit a change to a file? -------------------------------------------------------------------------------- +''''''''''''''''''''''''''''''''''' To commit any changes to a file (which includes adding a new file or deleting an existing one), you use the command:: @@ -586,66 +770,10 @@ option in the ``[ui]`` section. 
-How do I delete a file or directory in the repository? -------------------------------------------------------------------------------- - -Specify the path to be removed with:: - - hg remove PATH - -This will remove the file or the directory from your working copy; you will -have to :ref:`commit your changes ` for the removal to be recorded -in your local repository. - - -.. _hg-status: - -What files are modified in my working copy? -------------------------------------------------------------------------------- - -Running:: - - hg status - -will list any pending changes in the working copy. These changes will get -committed to the local repository if you issue an ``hg commit`` without -specifying any path. - -Some -key indicators that can appear in the first column of output are: - - = =========================== - A Scheduled to be added - R Scheduled to be removed - M Modified locally - ? Not under version control - = =========================== - -If you want a line-by-line listing of the differences, use:: - - hg diff - - -How do I revert a file I have modified back to the version in the repository? -------------------------------------------------------------------------------- - -Running:: - - hg revert PATH - -will revert ``PATH`` to its version in the repository, throwing away any -changes you made locally. If you run:: - - hg revert -a - -from the root of your working copy it will recursively restore everything -to match up with the repository. - - .. _hg-merge: How do I find out which revisions need merging? ------------------------------------------------ +''''''''''''''''''''''''''''''''''''''''''''''' In unambiguous cases, Mercurial will find out for you if you simply try:: @@ -664,7 +792,7 @@ How do I list the files in conflict after a merge? --------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''' Use:: @@ -674,7 +802,7 @@ How I mark a file resolved after I have resolved merge conflicts? 
------------------------------------------------------------------ +''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' Type:: @@ -685,60 +813,8 @@ If you are sure you have resolved all conflicts, use ``hg resolve -am``. -How do I find out who edited or what revision changed a line last? -------------------------------------------------------------------------------- - -You want:: - - hg annotate PATH - -This will output to stdout every line of the file along with which revision -last modified that line. When you have the revision number, it is then -easy to :ref:`display it in detail `. - - -.. _hg-log: - -How can I see a list of log messages for a file or specific revision? ---------------------------------------------------------------------- - -To see the history of changes for a specific file, run:: - - hg log -v [PATH] - -That will list all messages of revisions which modified the file specified -in ``PATH``. If ``PATH`` is omitted, all revisions are listed. - -If you want to display line-by-line differences for each revision as well, -add the ``-p`` option:: - - hg log -vp [PATH] - -.. _hg-log-rev: - -If you want to view the differences for a specific revision, run:: - - hg log -vp -r - - -How can I see the changeset graph in my repository? ---------------------------------------------------- - -In Mercurial repositories, changesets don't form a simple list, but rather -a graph: every changeset has one or two parents (it's called a merge changeset -in the latter case), and can have any number of children. - -The graphlog_ extension is very useful for examining the structure of the -changeset graph. It is bundled with Mercurial. - -Graphical tools, such as TortoiseHG, will display the changeset graph -by default. - -.. _graphlog: http://mercurial.selenic.com/wiki/GraphlogExtension - - How do I undo the changes made in a recent commit? 
-------------------------------------------------------------------------------- +'''''''''''''''''''''''''''''''''''''''''''''''''' First, this should not happen if you take the habit of :ref:`reviewing changes ` before committing them. @@ -756,71 +832,6 @@ a slightly different behaviour in versions before 1.7. -How do I update to a specific release tag? -------------------------------------------------------------------------------- - -Run:: - - hg tags - -to get a list of tags. To update your working copy to a specific tag, use:: - - hg update - - -How do I find which changeset introduced a bug or regression? -------------------------------------------------------------- - -``hg bisect``, as the name indicates, helps you do a bisection of a range of -changesets. - -You need two changesets to start the search: one that is "good" -(doesn't have the bug), and one that is "bad" (has the bug). Usually, you -have just noticed the bug in your working copy, so you can start with:: - - hg bisect --bad - -Then you must find a changeset that doesn't have the bug. You can conveniently -choose a faraway changeset (for example a former release), and check that it -is indeed "good". Then type:: - - hg bisect --good - -Mercurial will automatically bisect so as to narrow the range of possible -culprits, until a single changeset is isolated. Each time Mercurial presents -you with a new changeset, re-compile Python and run the offending test, for -example:: - - make -j2 - ./python -m test -uall test_sometest - -Then, type either ``hg bisect --good`` or ``hg bisect --bad`` depending on -whether the test succeeded or failed. - - -How come feature XYZ isn't available in Mercurial? --------------------------------------------------- - -Mercurial comes with many bundled extensions which can be explicitly enabled. -You can get a list of them by typing ``hg help extensions``. 
Some of these -extensions, such as ``color``, can prettify output; others, such as ``fetch`` -or ``graphlog``, add new Mercurial commands. - -There are also many `configuration options`_ to tweak various aspects of the -command line and other Mercurial behaviour; typing `man hgrc`_ displays -their documentation inside your terminal. - -In the end, please refer to the Mercurial `wiki`_, especially the pages about -`extensions`_ (including third-party ones) and the `tips and tricks`_. - - -.. _man hgrc: http://www.selenic.com/mercurial/hgrc.5.html -.. _wiki: http://mercurial.selenic.com/wiki/ -.. _extensions: http://mercurial.selenic.com/wiki/UsingExtensions -.. _tips and tricks: http://mercurial.selenic.com/wiki/TipsAndTricks -.. _configuration options: http://www.selenic.com/mercurial/hgrc.5.html - - SSH ======= @@ -831,7 +842,7 @@ adding to the list of keys. UNIX -''''''''''''''''''' +'''' Run:: @@ -841,7 +852,7 @@ public key is the file ending in ``.pub``. Windows -''''''''''''''''''' +''''''' Use PuTTYgen_ to generate your public key. Choose the "SSH2 DSA" radio button, have it create an OpenSSH formatted key, choose a password, and save the private @@ -855,7 +866,7 @@ --------------------------------------------------------------------------------------- UNIX -''''''''''''''''''' +'''' Use ``ssh-agent`` and ``ssh-add`` to register your private key with SSH for your current session. The simplest solution, though, is to use KeyChain_, @@ -868,7 +879,7 @@ .. _pageant: Windows -''''''''''''''''''' +''''''' The Pageant program is bundled with TortoiseHg. 
You can find it in its installation directory (usually ``C:\Program Files (x86)\TortoiseHg\``); -- Repository URL: http://hg.python.org/devguide From python-checkins at python.org Tue Feb 26 03:33:37 2013 From: python-checkins at python.org (ezio.melotti) Date: Tue, 26 Feb 2013 03:33:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?devguide=3A_=2314468=3A_update_FAQs_a?= =?utf-8?q?bout_multiple_clones_and_share_extension=2E?= Message-ID: <3ZFPGT2WZgzPhL@mail.python.org> http://hg.python.org/devguide/rev/ec43cf291255 changeset: 604:ec43cf291255 user: Ezio Melotti date: Tue Feb 26 04:33:17 2013 +0200 summary: #14468: update FAQs about multiple clones and share extension. files: committing.rst | 2 ++ faq.rst | 21 +++++++++------------ 2 files changed, 11 insertions(+), 12 deletions(-) diff --git a/committing.rst b/committing.rst --- a/committing.rst +++ b/committing.rst @@ -274,6 +274,8 @@ See :ref:`checkout` to find information about cloning and switching branches. +.. _multiple-clones: + Multiple Clones Approach '''''''''''''''''''''''' diff --git a/faq.rst b/faq.rst --- a/faq.rst +++ b/faq.rst @@ -335,7 +335,13 @@ I want to keep a separate working copy per development branch, is it possible? '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' -Just clone your local repository and update each clone to a different branch:: +There are two ways: + +1) Use the "`share extension`_" as described in the :ref:`multiple-clones` + section; +2) Create several clones of your local repository; + +If you want to use the second way, you can do:: $ hg clone cpython py33 updating to branch default @@ -347,6 +353,8 @@ The current branch in a working copy is "sticky": if you pull in some new changes, ``hg update`` will update to the head of the *current branch*. +.. _share extension: http://mercurial.selenic.com/wiki/ShareExtension + .. _hg-paths: @@ -726,17 +734,6 @@ These FAQs are intended mainly for core developers. 
-How do I avoid repeated pulls and pushes between my local repositories? -''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' - -The "`share extension`_" allows you to share a single local repository -between several working copies: each commit you make in a working copy will -be immediately available in other working copies, even though they might -be checked out on different branches. - -.. _share extension: http://mercurial.selenic.com/wiki/ShareExtension - - .. _hg-commit: How do I commit a change to a file? -- Repository URL: http://hg.python.org/devguide From solipsis at pitrou.net Tue Feb 26 06:02:35 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Tue, 26 Feb 2013 06:02:35 +0100 Subject: [Python-checkins] Daily reference leaks (381de621ff6a): sum=4 Message-ID: results for 381de621ff6a on branch "default" -------------------------------------------- test_concurrent_futures leaked [2, 1, 1] memory blocks, sum=4 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogXYIfDd', '-x'] From python-checkins at python.org Tue Feb 26 09:08:42 2013 From: python-checkins at python.org (serhiy.storchaka) Date: Tue, 26 Feb 2013 09:08:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzEzNTU1?= =?utf-8?q?=3A_Fix_an_integer_overflow_check=2E?= Message-ID: <3ZFXj64nhyzNNc@mail.python.org> http://hg.python.org/cpython/rev/f3f23ecdb1c6 changeset: 82390:f3f23ecdb1c6 branch: 2.7 parent: 82378:d707e3345a74 user: Serhiy Storchaka date: Tue Feb 26 10:07:36 2013 +0200 summary: Issue #13555: Fix an integer overflow check. 
files: Modules/cPickle.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Modules/cPickle.c b/Modules/cPickle.c --- a/Modules/cPickle.c +++ b/Modules/cPickle.c @@ -595,7 +595,7 @@ return i + 1; } } - if (self->buf_size < (PY_SSIZE_T_MAX >> 1)) { + if (self->buf_size > (PY_SSIZE_T_MAX >> 1)) { PyErr_NoMemory(); return -1; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 10:02:28 2013 From: python-checkins at python.org (senthil.kumaran) Date: Tue, 26 Feb 2013 10:02:28 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_issue16932?= =?utf-8?q?=3A_Fix_the_urlparse_example=2E_Remote_=3Aport_when_scheme_is_n?= =?utf-8?q?ot?= Message-ID: <3ZFYv84QjYzNll@mail.python.org> http://hg.python.org/cpython/rev/33895c474b4d changeset: 82391:33895c474b4d branch: 2.7 user: Senthil Kumaran date: Tue Feb 26 01:02:14 2013 -0800 summary: Fix issue16932: Fix the urlparse example. Remote :port when scheme is not specified to demonstrate correct behavior files: Doc/library/urlparse.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/urlparse.rst b/Doc/library/urlparse.rst --- a/Doc/library/urlparse.rst +++ b/Doc/library/urlparse.rst @@ -71,7 +71,7 @@ >>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html') ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html', params='', query='', fragment='') - >>> urlparse('www.cwi.nl:80/%7Eguido/Python.html') + >>> urlparse('www.cwi.nl/%7Eguido/Python.html') ParseResult(scheme='', netloc='', path='www.cwi.nl:80/%7Eguido/Python.html', params='', query='', fragment='') >>> urlparse('help/Python.html') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 10:02:30 2013 From: python-checkins at python.org (senthil.kumaran) Date: Tue, 26 Feb 2013 10:02:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Fix_issue16932?= 
=?utf-8?q?=3A_Fix_the_urlparse_example=2E_Remote_=3Aport_when_scheme_is_n?= =?utf-8?q?ot?= Message-ID: <3ZFYvB0F86zP8w@mail.python.org> http://hg.python.org/cpython/rev/5442a77b925c changeset: 82392:5442a77b925c branch: 3.2 parent: 82379:1c03e499cdc2 user: Senthil Kumaran date: Tue Feb 26 01:02:58 2013 -0800 summary: Fix issue16932: Fix the urlparse example. Remote :port when scheme is not specified to demonstrate correct behavior files: Doc/library/urllib.parse.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/urllib.parse.rst b/Doc/library/urllib.parse.rst --- a/Doc/library/urllib.parse.rst +++ b/Doc/library/urllib.parse.rst @@ -69,7 +69,7 @@ >>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html') ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html', params='', query='', fragment='') - >>> urlparse('www.cwi.nl:80/%7Eguido/Python.html') + >>> urlparse('www.cwi.nl/%7Eguido/Python.html') ParseResult(scheme='', netloc='', path='www.cwi.nl:80/%7Eguido/Python.html', params='', query='', fragment='') >>> urlparse('help/Python.html') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 10:02:31 2013 From: python-checkins at python.org (senthil.kumaran) Date: Tue, 26 Feb 2013 10:02:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Fix_issue16932=3A_Fix_the_urlparse_example=2E_Remote_=3Aport_w?= =?utf-8?q?hen_scheme_is_not?= Message-ID: <3ZFYvC3SHJzPyN@mail.python.org> http://hg.python.org/cpython/rev/8928205f57f6 changeset: 82393:8928205f57f6 branch: 3.3 parent: 82388:ebeed44702ec parent: 82392:5442a77b925c user: Senthil Kumaran date: Tue Feb 26 01:04:22 2013 -0800 summary: Fix issue16932: Fix the urlparse example. 
Remote :port when scheme is not specified to demonstrate correct behavior files: Doc/library/urllib.parse.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/urllib.parse.rst b/Doc/library/urllib.parse.rst --- a/Doc/library/urllib.parse.rst +++ b/Doc/library/urllib.parse.rst @@ -69,7 +69,7 @@ >>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html') ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html', params='', query='', fragment='') - >>> urlparse('www.cwi.nl:80/%7Eguido/Python.html') + >>> urlparse('www.cwi.nl/%7Eguido/Python.html') ParseResult(scheme='', netloc='', path='www.cwi.nl:80/%7Eguido/Python.html', params='', query='', fragment='') >>> urlparse('help/Python.html') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 10:02:32 2013 From: python-checkins at python.org (senthil.kumaran) Date: Tue, 26 Feb 2013 10:02:32 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Fix_issue16932=3A_Fix_the_urlparse_example=2E_Remote_=3A?= =?utf-8?q?port_when_scheme_is_not?= Message-ID: <3ZFYvD6P39zPy9@mail.python.org> http://hg.python.org/cpython/rev/9caad461936e changeset: 82394:9caad461936e parent: 82389:381de621ff6a parent: 82393:8928205f57f6 user: Senthil Kumaran date: Tue Feb 26 01:04:45 2013 -0800 summary: Fix issue16932: Fix the urlparse example. 
Remote :port when scheme is not specified to demonstrate correct behavior files: Doc/library/urllib.parse.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/urllib.parse.rst b/Doc/library/urllib.parse.rst --- a/Doc/library/urllib.parse.rst +++ b/Doc/library/urllib.parse.rst @@ -69,7 +69,7 @@ >>> urlparse('//www.cwi.nl:80/%7Eguido/Python.html') ParseResult(scheme='', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html', params='', query='', fragment='') - >>> urlparse('www.cwi.nl:80/%7Eguido/Python.html') + >>> urlparse('www.cwi.nl/%7Eguido/Python.html') ParseResult(scheme='', netloc='', path='www.cwi.nl:80/%7Eguido/Python.html', params='', query='', fragment='') >>> urlparse('help/Python.html') -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 14:13:11 2013 From: python-checkins at python.org (richard.oudkerk) Date: Tue, 26 Feb 2013 14:13:11 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE3MDE4?= =?utf-8?q?=3A_Make_Process=2Ejoin=28=29_retry_if_os=2Ewaitpid=28=29_fails?= =?utf-8?q?_with_EINTR=2E?= Message-ID: <3ZFgSR1qwHzNT9@mail.python.org> http://hg.python.org/cpython/rev/92003d9aae0e changeset: 82395:92003d9aae0e branch: 2.7 parent: 82391:33895c474b4d user: Richard Oudkerk date: Tue Feb 26 12:37:07 2013 +0000 summary: Issue #17018: Make Process.join() retry if os.waitpid() fails with EINTR. 
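The summary above describes a retry-on-EINTR loop; a condensed, standalone sketch of the same pattern follows. The name `waitpid_retry` is illustrative and assumes a POSIX platform; it is not part of the patch itself:

```python
import errno
import os

def waitpid_retry(pid, options):
    """Call os.waitpid(), retrying when a signal handler interrupts it.

    This mirrors the loop the changeset adds to multiprocessing's
    Popen.poll(). Since Python 3.5 (PEP 475) os.waitpid() retries
    EINTR by itself, but at the time of this patch the caller had to.
    """
    while True:
        try:
            return os.waitpid(pid, options)
        except OSError as e:
            if e.errno == errno.EINTR:
                continue  # interrupted by a signal: call waitpid() again
            raise  # other errors (e.g. ECHILD) propagate to the caller
```

Note that `os.error` in the 2.7 and 3.2 diffs is an alias of `OSError`, so the same sketch applies to every branch touched by this changeset.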
files: Lib/multiprocessing/forking.py | 18 +++++++--- Lib/test/test_multiprocessing.py | 32 ++++++++++++++++++++ Misc/NEWS | 2 + 3 files changed, 46 insertions(+), 6 deletions(-) diff --git a/Lib/multiprocessing/forking.py b/Lib/multiprocessing/forking.py --- a/Lib/multiprocessing/forking.py +++ b/Lib/multiprocessing/forking.py @@ -35,6 +35,7 @@ import os import sys import signal +import errno from multiprocessing import util, process @@ -129,12 +130,17 @@ def poll(self, flag=os.WNOHANG): if self.returncode is None: - try: - pid, sts = os.waitpid(self.pid, flag) - except os.error: - # Child process not yet created. See #1731717 - # e.errno == errno.ECHILD == 10 - return None + while True: + try: + pid, sts = os.waitpid(self.pid, flag) + except os.error as e: + if e.errno == errno.EINTR: + continue + # Child process not yet created. See #1731717 + # e.errno == errno.ECHILD == 10 + return None + else: + break if pid == self.pid: if os.WIFSIGNALED(sts): self.returncode = -os.WTERMSIG(sts) diff --git a/Lib/test/test_multiprocessing.py b/Lib/test/test_multiprocessing.py --- a/Lib/test/test_multiprocessing.py +++ b/Lib/test/test_multiprocessing.py @@ -2105,6 +2105,38 @@ # assert self.__handled # +# Check that Process.join() retries if os.waitpid() fails with EINTR +# + +class _TestPollEintr(BaseTestCase): + + ALLOWED_TYPES = ('processes',) + + @classmethod + def _killer(cls, pid): + time.sleep(0.5) + os.kill(pid, signal.SIGUSR1) + + @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), 'requires SIGUSR1') + def test_poll_eintr(self): + got_signal = [False] + def record(*args): + got_signal[0] = True + pid = os.getpid() + oldhandler = signal.signal(signal.SIGUSR1, record) + try: + killer = self.Process(target=self._killer, args=(pid,)) + killer.start() + p = self.Process(target=time.sleep, args=(1,)) + p.start() + p.join() + self.assertTrue(got_signal[0]) + self.assertEqual(p.exitcode, 0) + killer.join() + finally: + signal.signal(signal.SIGUSR1, oldhandler) + +# # Test to 
verify handle verification, see issue 3321 # diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -211,6 +211,8 @@ Library ------- +- Issue #17018: Make Process.join() retry if os.waitpid() fails with EINTR. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 14:13:12 2013 From: python-checkins at python.org (richard.oudkerk) Date: Tue, 26 Feb 2013 14:13:12 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE3MDE4?= =?utf-8?q?=3A_Make_Process=2Ejoin=28=29_retry_if_os=2Ewaitpid=28=29_fails?= =?utf-8?q?_with_EINTR=2E?= Message-ID: <3ZFgSS4f9kzQ5B@mail.python.org> http://hg.python.org/cpython/rev/5fae31006724 changeset: 82396:5fae31006724 branch: 3.2 parent: 82392:5442a77b925c user: Richard Oudkerk date: Tue Feb 26 12:39:57 2013 +0000 summary: Issue #17018: Make Process.join() retry if os.waitpid() fails with EINTR. files: Lib/multiprocessing/forking.py | 18 +++++++--- Lib/test/test_multiprocessing.py | 32 ++++++++++++++++++++ Misc/NEWS | 2 + 3 files changed, 46 insertions(+), 6 deletions(-) diff --git a/Lib/multiprocessing/forking.py b/Lib/multiprocessing/forking.py --- a/Lib/multiprocessing/forking.py +++ b/Lib/multiprocessing/forking.py @@ -35,6 +35,7 @@ import os import sys import signal +import errno from multiprocessing import util, process @@ -128,12 +129,17 @@ def poll(self, flag=os.WNOHANG): if self.returncode is None: - try: - pid, sts = os.waitpid(self.pid, flag) - except os.error: - # Child process not yet created. See #1731717 - # e.errno == errno.ECHILD == 10 - return None + while True: + try: + pid, sts = os.waitpid(self.pid, flag) + except os.error as e: + if e.errno == errno.EINTR: + continue + # Child process not yet created. 
See #1731717 + # e.errno == errno.ECHILD == 10 + return None + else: + break if pid == self.pid: if os.WIFSIGNALED(sts): self.returncode = -os.WTERMSIG(sts) diff --git a/Lib/test/test_multiprocessing.py b/Lib/test/test_multiprocessing.py --- a/Lib/test/test_multiprocessing.py +++ b/Lib/test/test_multiprocessing.py @@ -2168,6 +2168,38 @@ # assert self.__handled # +# Check that Process.join() retries if os.waitpid() fails with EINTR +# + +class _TestPollEintr(BaseTestCase): + + ALLOWED_TYPES = ('processes',) + + @classmethod + def _killer(cls, pid): + time.sleep(0.5) + os.kill(pid, signal.SIGUSR1) + + @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), 'requires SIGUSR1') + def test_poll_eintr(self): + got_signal = [False] + def record(*args): + got_signal[0] = True + pid = os.getpid() + oldhandler = signal.signal(signal.SIGUSR1, record) + try: + killer = self.Process(target=self._killer, args=(pid,)) + killer.start() + p = self.Process(target=time.sleep, args=(1,)) + p.start() + p.join() + self.assertTrue(got_signal[0]) + self.assertEqual(p.exitcode, 0) + killer.join() + finally: + signal.signal(signal.SIGUSR1, oldhandler) + +# # Test to verify handle verification, see issue 3321 # diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -230,6 +230,8 @@ Library ------- +- Issue #17018: Make Process.join() retry if os.waitpid() fails with EINTR. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 14:13:14 2013 From: python-checkins at python.org (richard.oudkerk) Date: Tue, 26 Feb 2013 14:13:14 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge?= Message-ID: <3ZFgSV0DBpzPyC@mail.python.org> http://hg.python.org/cpython/rev/c29e588fdd57 changeset: 82397:c29e588fdd57 branch: 3.3 parent: 82393:8928205f57f6 parent: 82396:5fae31006724 user: Richard Oudkerk date: Tue Feb 26 13:00:15 2013 +0000 summary: Merge files: Lib/multiprocessing/forking.py | 18 +++++++--- Lib/test/test_multiprocessing.py | 32 ++++++++++++++++++++ Misc/NEWS | 2 + 3 files changed, 46 insertions(+), 6 deletions(-) diff --git a/Lib/multiprocessing/forking.py b/Lib/multiprocessing/forking.py --- a/Lib/multiprocessing/forking.py +++ b/Lib/multiprocessing/forking.py @@ -10,6 +10,7 @@ import os import sys import signal +import errno from multiprocessing import util, process @@ -109,12 +110,17 @@ def poll(self, flag=os.WNOHANG): if self.returncode is None: - try: - pid, sts = os.waitpid(self.pid, flag) - except os.error: - # Child process not yet created. See #1731717 - # e.errno == errno.ECHILD == 10 - return None + while True: + try: + pid, sts = os.waitpid(self.pid, flag) + except os.error as e: + if e.errno == errno.EINTR: + continue + # Child process not yet created. 
See #1731717 + # e.errno == errno.ECHILD == 10 + return None + else: + break if pid == self.pid: if os.WIFSIGNALED(sts): self.returncode = -os.WTERMSIG(sts) diff --git a/Lib/test/test_multiprocessing.py b/Lib/test/test_multiprocessing.py --- a/Lib/test/test_multiprocessing.py +++ b/Lib/test/test_multiprocessing.py @@ -2894,6 +2894,38 @@ # assert self.__handled # +# Check that Process.join() retries if os.waitpid() fails with EINTR +# + +class _TestPollEintr(BaseTestCase): + + ALLOWED_TYPES = ('processes',) + + @classmethod + def _killer(cls, pid): + time.sleep(0.5) + os.kill(pid, signal.SIGUSR1) + + @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), 'requires SIGUSR1') + def test_poll_eintr(self): + got_signal = [False] + def record(*args): + got_signal[0] = True + pid = os.getpid() + oldhandler = signal.signal(signal.SIGUSR1, record) + try: + killer = self.Process(target=self._killer, args=(pid,)) + killer.start() + p = self.Process(target=time.sleep, args=(1,)) + p.start() + p.join() + self.assertTrue(got_signal[0]) + self.assertEqual(p.exitcode, 0) + killer.join() + finally: + signal.signal(signal.SIGUSR1, oldhandler) + +# # Test to verify handle verification, see issue 3321 # diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -191,6 +191,8 @@ Library ------- +- Issue #17018: Make Process.join() retry if os.waitpid() fails with EINTR. + - Issue #14720: sqlite3: Convert datetime microseconds correctly. Patch by Lowe Thiderman. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 14:13:15 2013 From: python-checkins at python.org (richard.oudkerk) Date: Tue, 26 Feb 2013 14:13:15 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge?= Message-ID: <3ZFgSW3FkNzPyC@mail.python.org> http://hg.python.org/cpython/rev/292cd0f213ff changeset: 82398:292cd0f213ff parent: 82394:9caad461936e parent: 82397:c29e588fdd57 user: Richard Oudkerk date: Tue Feb 26 13:11:11 2013 +0000 summary: Merge files: Lib/multiprocessing/forking.py | 18 +++++++--- Lib/test/test_multiprocessing.py | 32 ++++++++++++++++++++ Misc/NEWS | 2 + 3 files changed, 46 insertions(+), 6 deletions(-) diff --git a/Lib/multiprocessing/forking.py b/Lib/multiprocessing/forking.py --- a/Lib/multiprocessing/forking.py +++ b/Lib/multiprocessing/forking.py @@ -10,6 +10,7 @@ import os import sys import signal +import errno from multiprocessing import util, process @@ -109,12 +110,17 @@ def poll(self, flag=os.WNOHANG): if self.returncode is None: - try: - pid, sts = os.waitpid(self.pid, flag) - except OSError: - # Child process not yet created. See #1731717 - # e.errno == errno.ECHILD == 10 - return None + while True: + try: + pid, sts = os.waitpid(self.pid, flag) + except OSError as e: + if e.errno == errno.EINTR: + continue + # Child process not yet created. 
See #1731717 + # e.errno == errno.ECHILD == 10 + return None + else: + break if pid == self.pid: if os.WIFSIGNALED(sts): self.returncode = -os.WTERMSIG(sts) diff --git a/Lib/test/test_multiprocessing.py b/Lib/test/test_multiprocessing.py --- a/Lib/test/test_multiprocessing.py +++ b/Lib/test/test_multiprocessing.py @@ -2869,6 +2869,38 @@ # assert self.__handled # +# Check that Process.join() retries if os.waitpid() fails with EINTR +# + +class _TestPollEintr(BaseTestCase): + + ALLOWED_TYPES = ('processes',) + + @classmethod + def _killer(cls, pid): + time.sleep(0.5) + os.kill(pid, signal.SIGUSR1) + + @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), 'requires SIGUSR1') + def test_poll_eintr(self): + got_signal = [False] + def record(*args): + got_signal[0] = True + pid = os.getpid() + oldhandler = signal.signal(signal.SIGUSR1, record) + try: + killer = self.Process(target=self._killer, args=(pid,)) + killer.start() + p = self.Process(target=time.sleep, args=(1,)) + p.start() + p.join() + self.assertTrue(got_signal[0]) + self.assertEqual(p.exitcode, 0) + killer.join() + finally: + signal.signal(signal.SIGUSR1, oldhandler) + +# # Test to verify handle verification, see issue 3321 # diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -270,6 +270,8 @@ Library ------- +- Issue #17018: Make Process.join() retry if os.waitpid() fails with EINTR. + - Issue #17197: profile/cProfile modules refactored so that code of run() and runctx() utility functions is not duplicated in both modules. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 14:54:29 2013 From: python-checkins at python.org (eli.bendersky) Date: Tue, 26 Feb 2013 14:54:29 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E3=29=3A_Some_cosmetic_?= =?utf-8?q?changes?= Message-ID: <3ZFhN51rhYzQ9x@mail.python.org> http://hg.python.org/cpython/rev/2678fd10f689 changeset: 82399:2678fd10f689 branch: 3.3 parent: 82397:c29e588fdd57 user: Eli Bendersky date: Tue Feb 26 05:53:23 2013 -0800 summary: Some cosmetic changes files: Lib/test/test_xml_etree.py | 22 +++++++++------------- Lib/test/test_xml_etree_c.py | 6 ++++-- 2 files changed, 13 insertions(+), 15 deletions(-) diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -1,18 +1,9 @@ -# xml.etree test. This file contains enough tests to make sure that -# all included components work as they should. -# Large parts are extracted from the upstream test suite. -# -# PLEASE write all new tests using the standard unittest infrastructure and -# not doctest. -# # IMPORTANT: the same tests are run from "test_xml_etree_c" in order # to ensure consistency between the C implementation and the Python # implementation. # # For this purpose, the module-level "ET" symbol is temporarily # monkey-patched when running the "test_xml_etree_c" test suite. -# Don't re-import "xml.etree.ElementTree" module in the docstring, -# except if the test is specific to the Python implementation. import html import io @@ -24,7 +15,7 @@ from itertools import product from test import support -from test.support import TESTFN, findfile, unlink, import_fresh_module, gc_collect +from test.support import TESTFN, findfile, import_fresh_module, gc_collect # pyET is the pure-Python implementation. 
# @@ -97,6 +88,7 @@ class ModuleTest(unittest.TestCase): + # TODO: this should be removed once we get rid of the global module vars def test_sanity(self): # Import sanity. @@ -528,7 +520,8 @@ events = ("start", "end", "start-ns", "end-ns") context = iterparse(SIMPLE_NS_XMLFILE, events) - self.assertEqual([(action, elem.tag) if action in ("start", "end") else (action, elem) + self.assertEqual([(action, elem.tag) if action in ("start", "end") + else (action, elem) for action, elem in context], [ ('start-ns', ('', 'namespace')), ('start', '{namespace}root'), @@ -1493,6 +1486,7 @@ self.assertEqual(len(e2), 2) self.assertEqualElements(e, e2) + class ElementTreeTypeTest(unittest.TestCase): def test_istype(self): self.assertIsInstance(ET.ParseError, type) @@ -1861,7 +1855,9 @@ self._check_sample_element(parser.close()) # Now as keyword args. - parser2 = ET.XMLParser(encoding='utf-8', html=[{}], target=ET.TreeBuilder()) + parser2 = ET.XMLParser(encoding='utf-8', + html=[{}], + target=ET.TreeBuilder()) parser2.feed(self.sample1) self._check_sample_element(parser2.close()) @@ -1974,7 +1970,7 @@ class IOTest(unittest.TestCase): def tearDown(self): - unlink(TESTFN) + support.unlink(TESTFN) def test_encoding(self): # Test encoding issues. 
diff --git a/Lib/test/test_xml_etree_c.py b/Lib/test/test_xml_etree_c.py --- a/Lib/test/test_xml_etree_c.py +++ b/Lib/test/test_xml_etree_c.py @@ -4,8 +4,10 @@ from test.support import import_fresh_module import unittest -cET = import_fresh_module('xml.etree.ElementTree', fresh=['_elementtree']) -cET_alias = import_fresh_module('xml.etree.cElementTree', fresh=['_elementtree', 'xml.etree']) +cET = import_fresh_module('xml.etree.ElementTree', + fresh=['_elementtree']) +cET_alias = import_fresh_module('xml.etree.cElementTree', + fresh=['_elementtree', 'xml.etree']) class MiscTests(unittest.TestCase): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 14:54:30 2013 From: python-checkins at python.org (eli.bendersky) Date: Tue, 26 Feb 2013 14:54:30 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Some_cosmetic_changes?= Message-ID: <3ZFhN651zRzQ6L@mail.python.org> http://hg.python.org/cpython/rev/5412ce2cffca changeset: 82400:5412ce2cffca parent: 82398:292cd0f213ff parent: 82399:2678fd10f689 user: Eli Bendersky date: Tue Feb 26 05:54:04 2013 -0800 summary: Some cosmetic changes files: Lib/test/test_xml_etree.py | 22 +++++++++------------- Lib/test/test_xml_etree_c.py | 6 ++++-- 2 files changed, 13 insertions(+), 15 deletions(-) diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -1,18 +1,9 @@ -# xml.etree test. This file contains enough tests to make sure that -# all included components work as they should. -# Large parts are extracted from the upstream test suite. -# -# PLEASE write all new tests using the standard unittest infrastructure and -# not doctest. -# # IMPORTANT: the same tests are run from "test_xml_etree_c" in order # to ensure consistency between the C implementation and the Python # implementation. 
# # For this purpose, the module-level "ET" symbol is temporarily # monkey-patched when running the "test_xml_etree_c" test suite. -# Don't re-import "xml.etree.ElementTree" module in the docstring, -# except if the test is specific to the Python implementation. import html import io @@ -24,7 +15,7 @@ from itertools import product from test import support -from test.support import TESTFN, findfile, unlink, import_fresh_module, gc_collect +from test.support import TESTFN, findfile, import_fresh_module, gc_collect # pyET is the pure-Python implementation. # @@ -97,6 +88,7 @@ class ModuleTest(unittest.TestCase): + # TODO: this should be removed once we get rid of the global module vars def test_sanity(self): # Import sanity. @@ -528,7 +520,8 @@ events = ("start", "end", "start-ns", "end-ns") context = iterparse(SIMPLE_NS_XMLFILE, events) - self.assertEqual([(action, elem.tag) if action in ("start", "end") else (action, elem) + self.assertEqual([(action, elem.tag) if action in ("start", "end") + else (action, elem) for action, elem in context], [ ('start-ns', ('', 'namespace')), ('start', '{namespace}root'), @@ -1493,6 +1486,7 @@ self.assertEqual(len(e2), 2) self.assertEqualElements(e, e2) + class ElementTreeTypeTest(unittest.TestCase): def test_istype(self): self.assertIsInstance(ET.ParseError, type) @@ -1866,7 +1860,9 @@ self._check_sample_element(parser.close()) # Now as keyword args. - parser2 = ET.XMLParser(encoding='utf-8', html=[{}], target=ET.TreeBuilder()) + parser2 = ET.XMLParser(encoding='utf-8', + html=[{}], + target=ET.TreeBuilder()) parser2.feed(self.sample1) self._check_sample_element(parser2.close()) @@ -1979,7 +1975,7 @@ class IOTest(unittest.TestCase): def tearDown(self): - unlink(TESTFN) + support.unlink(TESTFN) def test_encoding(self): # Test encoding issues. 
diff --git a/Lib/test/test_xml_etree_c.py b/Lib/test/test_xml_etree_c.py --- a/Lib/test/test_xml_etree_c.py +++ b/Lib/test/test_xml_etree_c.py @@ -4,8 +4,10 @@ from test.support import import_fresh_module import unittest -cET = import_fresh_module('xml.etree.ElementTree', fresh=['_elementtree']) -cET_alias = import_fresh_module('xml.etree.cElementTree', fresh=['_elementtree', 'xml.etree']) +cET = import_fresh_module('xml.etree.ElementTree', + fresh=['_elementtree']) +cET_alias = import_fresh_module('xml.etree.cElementTree', + fresh=['_elementtree', 'xml.etree']) class MiscTests(unittest.TestCase): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 17:29:59 2013 From: python-checkins at python.org (vinay.sajip) Date: Tue, 26 Feb 2013 17:29:59 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogQ2xvc2VzICMxNzI5?= =?utf-8?q?0=3A_Loading_cursor_now_does_not_persist_when_launching_GUI_scr?= =?utf-8?q?ipts=2E?= Message-ID: <3ZFlqW62LkzQ5f@mail.python.org> http://hg.python.org/cpython/rev/5fddaa709d6b changeset: 82401:5fddaa709d6b branch: 3.3 parent: 82399:2678fd10f689 user: Vinay Sajip date: Tue Feb 26 16:29:06 2013 +0000 summary: Closes #17290: Loading cursor now does not persist when launching GUI scripts. files: PC/launcher.c | 17 ++++++++++++++++- 1 files changed, 16 insertions(+), 1 deletions(-) diff --git a/PC/launcher.c b/PC/launcher.c --- a/PC/launcher.c +++ b/PC/launcher.c @@ -500,6 +500,21 @@ STARTUPINFOW si; PROCESS_INFORMATION pi; +#if defined(_WINDOWS) + // When explorer launches a Windows (GUI) application, it displays + // the "app starting" (the "pointer + hourglass") cursor for a number + // of seconds, or until the app does something UI-ish (eg, creating a + // window, or fetching a message). As this launcher doesn't do this + // directly, that cursor remains even after the child process does these + // things. We avoid that by doing a simple post+get message. 
+ // See http://bugs.python.org/issue17290 and + // https://bitbucket.org/vinay.sajip/pylauncher/issue/20/busy-cursor-for-a-long-time-when-running + MSG msg; + + PostMessage(0, 0, 0, 0); + GetMessage(&msg, 0, 0, 0); +#endif + debug(L"run_child: about to run '%s'\n", cmdline); job = CreateJobObject(NULL, NULL); ok = QueryInformationJobObject(job, JobObjectExtendedLimitInformation, @@ -1362,4 +1377,4 @@ return process(argc, argv); } -#endif \ No newline at end of file +#endif -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 17:30:01 2013 From: python-checkins at python.org (vinay.sajip) Date: Tue, 26 Feb 2013 17:30:01 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Closes_=2317290=3A_Merged_fix_from_3=2E3=2E?= Message-ID: <3ZFlqY1Mq5zQ5f@mail.python.org> http://hg.python.org/cpython/rev/0d55fb0217f1 changeset: 82402:0d55fb0217f1 parent: 82400:5412ce2cffca parent: 82401:5fddaa709d6b user: Vinay Sajip date: Tue Feb 26 16:29:46 2013 +0000 summary: Closes #17290: Merged fix from 3.3. files: PC/launcher.c | 15 +++++++++++++++ 1 files changed, 15 insertions(+), 0 deletions(-) diff --git a/PC/launcher.c b/PC/launcher.c --- a/PC/launcher.c +++ b/PC/launcher.c @@ -500,6 +500,21 @@ STARTUPINFOW si; PROCESS_INFORMATION pi; +#if defined(_WINDOWS) + // When explorer launches a Windows (GUI) application, it displays + // the "app starting" (the "pointer + hourglass") cursor for a number + // of seconds, or until the app does something UI-ish (eg, creating a + // window, or fetching a message). As this launcher doesn't do this + // directly, that cursor remains even after the child process does these + // things. We avoid that by doing a simple post+get message. 
+ // See http://bugs.python.org/issue17290 and + // https://bitbucket.org/vinay.sajip/pylauncher/issue/20/busy-cursor-for-a-long-time-when-running + MSG msg; + + PostMessage(0, 0, 0, 0); + GetMessage(&msg, 0, 0, 0); +#endif + debug(L"run_child: about to run '%s'\n", cmdline); job = CreateJobObject(NULL, NULL); ok = QueryInformationJobObject(job, JobObjectExtendedLimitInformation, -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 20:47:31 2013 From: python-checkins at python.org (petri.lehtinen) Date: Tue, 26 Feb 2013 20:47:31 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE0NzIw?= =?utf-8?q?=3A_Enhance_sqlite3_microsecond_conversion=2C_document_its_beha?= =?utf-8?q?vior?= Message-ID: <3ZFrCR0CnNzRkH@mail.python.org> http://hg.python.org/cpython/rev/eb45fd74db34 changeset: 82403:eb45fd74db34 branch: 2.7 parent: 82395:92003d9aae0e user: Petri Lehtinen date: Tue Feb 26 21:32:02 2013 +0200 summary: Issue #14720: Enhance sqlite3 microsecond conversion, document its behavior files: Doc/library/sqlite3.rst | 4 ++++ Lib/sqlite3/dbapi2.py | 4 ++-- Lib/sqlite3/test/regression.py | 13 +++++++++++-- 3 files changed, 17 insertions(+), 4 deletions(-) diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -832,6 +832,10 @@ .. literalinclude:: ../includes/sqlite3/pysqlite_datetime.py +If a timestamp stored in SQLite has a fractional part longer than 6 +numbers, its value will be truncated to microsecond precision by the +timestamp converter. + .. 
_sqlite3-controlling-transactions: diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -1,4 +1,4 @@ -#-*- coding: ISO-8859-1 -*- +# -*- coding: iso-8859-1 -*- # pysqlite2/dbapi2.py: the DB-API 2.0 interface # # Copyright (C) 2004-2005 Gerhard Häring @@ -68,7 +68,7 @@ timepart_full = timepart.split(".") hours, minutes, seconds = map(int, timepart_full[0].split(":")) if len(timepart_full) == 2: - microseconds = int('{:0<6}'.format(timepart_full[1].decode())) + microseconds = int('{:0<6.6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -296,11 +296,20 @@ con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) cur = con.cursor() cur.execute("CREATE TABLE t (x TIMESTAMP)") + + # Microseconds should be 456000 cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + + # Microseconds should be truncated to 123456 + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.123456789')") + cur.execute("SELECT * FROM t") - date = cur.fetchall()[0][0] + values = [x[0] for x in cur.fetchall()] - self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + self.assertEqual(values, [ + datetime.datetime(2012, 4, 4, 15, 6, 0, 456000), + datetime.datetime(2012, 4, 4, 15, 6, 0, 123456), + ]) def suite(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 20:47:32 2013 From: python-checkins at python.org (petri.lehtinen) Date: Tue, 26 Feb 2013 20:47:32 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE0NzIw?= =?utf-8?q?=3A_Enhance_sqlite3_microsecond_conversion=2C_document_its_beha?= =?utf-8?q?vior?= Message-ID: <3ZFrCS2vwczRkH@mail.python.org> http://hg.python.org/cpython/rev/ae25a38e6c17 changeset: 82404:ae25a38e6c17 branch: 3.2 parent:
82396:5fae31006724 user: Petri Lehtinen date: Tue Feb 26 21:32:02 2013 +0200 summary: Issue #14720: Enhance sqlite3 microsecond conversion, document its behavior files: Doc/library/sqlite3.rst | 4 ++++ Lib/sqlite3/dbapi2.py | 2 +- Lib/sqlite3/test/regression.py | 13 +++++++++++-- 3 files changed, 16 insertions(+), 3 deletions(-) diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -814,6 +814,10 @@ .. literalinclude:: ../includes/sqlite3/pysqlite_datetime.py +If a timestamp stored in SQLite has a fractional part longer than 6 +numbers, its value will be truncated to microsecond precision by the +timestamp converter. + .. _sqlite3-controlling-transactions: diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -67,7 +67,7 @@ timepart_full = timepart.split(b".") hours, minutes, seconds = map(int, timepart_full[0].split(b":")) if len(timepart_full) == 2: - microseconds = int('{:0<6}'.format(timepart_full[1].decode())) + microseconds = int('{:0<6.6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -313,11 +313,20 @@ con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) cur = con.cursor() cur.execute("CREATE TABLE t (x TIMESTAMP)") + + # Microseconds should be 456000 cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + + # Microseconds should be truncated to 123456 + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.123456789')") + cur.execute("SELECT * FROM t") - date = cur.fetchall()[0][0] + values = [x[0] for x in cur.fetchall()] - self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + self.assertEqual(values, [ + datetime.datetime(2012, 4, 4, 15, 6, 0, 456000), + datetime.datetime(2012, 4, 4, 15, 6, 0, 123456), 
+ ]) def suite(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 20:47:33 2013 From: python-checkins at python.org (petri.lehtinen) Date: Tue, 26 Feb 2013 20:47:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2314720=3A_Enhance_sqlite3_microsecond_conversion=2C_do?= =?utf-8?q?cument_its_behavior?= Message-ID: <3ZFrCT630zzR1F@mail.python.org> http://hg.python.org/cpython/rev/17673a8c7083 changeset: 82405:17673a8c7083 branch: 3.3 parent: 82401:5fddaa709d6b parent: 82404:ae25a38e6c17 user: Petri Lehtinen date: Tue Feb 26 21:45:09 2013 +0200 summary: Issue #14720: Enhance sqlite3 microsecond conversion, document its behavior files: Doc/library/sqlite3.rst | 4 ++++ Lib/sqlite3/dbapi2.py | 2 +- Lib/sqlite3/test/regression.py | 13 +++++++++++-- 3 files changed, 16 insertions(+), 3 deletions(-) diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -830,6 +830,10 @@ .. literalinclude:: ../includes/sqlite3/pysqlite_datetime.py +If a timestamp stored in SQLite has a fractional part longer than 6 +numbers, its value will be truncated to microsecond precision by the +timestamp converter. + .. 
_sqlite3-controlling-transactions: diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -67,7 +67,7 @@ timepart_full = timepart.split(b".") hours, minutes, seconds = map(int, timepart_full[0].split(b":")) if len(timepart_full) == 2: - microseconds = int('{:0<6}'.format(timepart_full[1].decode())) + microseconds = int('{:0<6.6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -313,11 +313,20 @@ con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) cur = con.cursor() cur.execute("CREATE TABLE t (x TIMESTAMP)") + + # Microseconds should be 456000 cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + + # Microseconds should be truncated to 123456 + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.123456789')") + cur.execute("SELECT * FROM t") - date = cur.fetchall()[0][0] + values = [x[0] for x in cur.fetchall()] - self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + self.assertEqual(values, [ + datetime.datetime(2012, 4, 4, 15, 6, 0, 456000), + datetime.datetime(2012, 4, 4, 15, 6, 0, 123456), + ]) def suite(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 20:47:35 2013 From: python-checkins at python.org (petri.lehtinen) Date: Tue, 26 Feb 2013 20:47:35 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2314720=3A_Enhance_sqlite3_microsecond_conversion?= =?utf-8?q?=2C_document_its_behavior?= Message-ID: <3ZFrCW1nYhzRpN@mail.python.org> http://hg.python.org/cpython/rev/0db66afbd746 changeset: 82406:0db66afbd746 parent: 82402:0d55fb0217f1 parent: 82405:17673a8c7083 user: Petri Lehtinen date: Tue Feb 26 21:46:12 2013 +0200 summary: Issue #14720: Enhance sqlite3 microsecond 
conversion, document its behavior files: Doc/library/sqlite3.rst | 4 ++++ Lib/sqlite3/dbapi2.py | 2 +- Lib/sqlite3/test/regression.py | 13 +++++++++++-- 3 files changed, 16 insertions(+), 3 deletions(-) diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -842,6 +842,10 @@ .. literalinclude:: ../includes/sqlite3/pysqlite_datetime.py +If a timestamp stored in SQLite has a fractional part longer than 6 +numbers, its value will be truncated to microsecond precision by the +timestamp converter. + .. _sqlite3-controlling-transactions: diff --git a/Lib/sqlite3/dbapi2.py b/Lib/sqlite3/dbapi2.py --- a/Lib/sqlite3/dbapi2.py +++ b/Lib/sqlite3/dbapi2.py @@ -67,7 +67,7 @@ timepart_full = timepart.split(b".") hours, minutes, seconds = map(int, timepart_full[0].split(b":")) if len(timepart_full) == 2: - microseconds = int('{:0<6}'.format(timepart_full[1].decode())) + microseconds = int('{:0<6.6}'.format(timepart_full[1].decode())) else: microseconds = 0 diff --git a/Lib/sqlite3/test/regression.py b/Lib/sqlite3/test/regression.py --- a/Lib/sqlite3/test/regression.py +++ b/Lib/sqlite3/test/regression.py @@ -313,11 +313,20 @@ con = sqlite.connect(":memory:", detect_types=sqlite.PARSE_DECLTYPES) cur = con.cursor() cur.execute("CREATE TABLE t (x TIMESTAMP)") + + # Microseconds should be 456000 cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.456')") + + # Microseconds should be truncated to 123456 + cur.execute("INSERT INTO t (x) VALUES ('2012-04-04 15:06:00.123456789')") + cur.execute("SELECT * FROM t") - date = cur.fetchall()[0][0] + values = [x[0] for x in cur.fetchall()] - self.assertEqual(date, datetime.datetime(2012, 4, 4, 15, 6, 0, 456000)) + self.assertEqual(values, [ + datetime.datetime(2012, 4, 4, 15, 6, 0, 456000), + datetime.datetime(2012, 4, 4, 15, 6, 0, 123456), + ]) def suite(): -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 22:52:43 2013 
From: python-checkins at python.org (victor.stinner) Date: Tue, 26 Feb 2013 22:52:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogSXNzdWUgIzE3MjIz?= =?utf-8?q?=3A_Fix_test=5Farray_on_Windows_=2816-bit_wchar=5Ft/Py=5FUNICOD?= =?utf-8?q?E=29?= Message-ID: <3ZFtzv0RMkzRf5@mail.python.org> http://hg.python.org/cpython/rev/66e9d0185b0f changeset: 82407:66e9d0185b0f branch: 3.3 parent: 82405:17673a8c7083 user: Victor Stinner date: Tue Feb 26 22:52:11 2013 +0100 summary: Issue #17223: Fix test_array on Windows (16-bit wchar_t/Py_UNICODE) files: Lib/test/test_array.py | 29 ++++++++++++++++++----------- 1 files changed, 18 insertions(+), 11 deletions(-) diff --git a/Lib/test/test_array.py b/Lib/test/test_array.py --- a/Lib/test/test_array.py +++ b/Lib/test/test_array.py @@ -24,6 +24,17 @@ except struct.error: have_long_long = False +try: + import ctypes + sizeof_wchar = ctypes.sizeof(ctypes.c_wchar) +except ImportError: + import sys + if sys.platform == 'win32': + sizeof_wchar = 2 + else: + sizeof_wchar = 4 + + class ArraySubclass(array.array): pass @@ -1040,16 +1051,6 @@ minitemsize = 2 def test_unicode(self): - try: - import ctypes - sizeof_wchar = ctypes.sizeof(ctypes.c_wchar) - except ImportError: - import sys - if sys.platform == 'win32': - sizeof_wchar = 2 - else: - sizeof_wchar = 4 - self.assertRaises(TypeError, array.array, 'b', 'foo') a = array.array('u', '\xa0\xc2\u1234') @@ -1071,7 +1072,13 @@ def test_issue17223(self): # this used to crash - a = array.array('u', b'\xff' * 4) + if sizeof_wchar == 4: + # U+FFFFFFFF is an invalid code point in Unicode 6.0 + invalid_str = b'\xff\xff\xff\xff' + else: + # invalid UTF-16 surrogate pair + invalid_str = b'\xff\xdf\x61\x00' + a = array.array('u', invalid_str) self.assertRaises(ValueError, a.tounicode) self.assertRaises(ValueError, str, a) -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Tue Feb 26 22:52:44 2013 From: python-checkins at python.org (victor.stinner) 
Date: Tue, 26 Feb 2013 22:52:44 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_=28Merge_3=2E3=29_Issue_=2317223=3A_Fix_test=5Farray_on_?= =?utf-8?q?Windows_=2816-bit_wchar=5Ft/Py=5FUNICODE=29?= Message-ID: <3ZFtzw3SPMzRrB@mail.python.org> http://hg.python.org/cpython/rev/5aaf6bc1d502 changeset: 82408:5aaf6bc1d502 parent: 82406:0db66afbd746 parent: 82407:66e9d0185b0f user: Victor Stinner date: Tue Feb 26 22:52:25 2013 +0100 summary: (Merge 3.3) Issue #17223: Fix test_array on Windows (16-bit wchar_t/Py_UNICODE) files: Lib/test/test_array.py | 29 ++++++++++++++++++----------- 1 files changed, 18 insertions(+), 11 deletions(-) diff --git a/Lib/test/test_array.py b/Lib/test/test_array.py --- a/Lib/test/test_array.py +++ b/Lib/test/test_array.py @@ -24,6 +24,17 @@ except struct.error: have_long_long = False +try: + import ctypes + sizeof_wchar = ctypes.sizeof(ctypes.c_wchar) +except ImportError: + import sys + if sys.platform == 'win32': + sizeof_wchar = 2 + else: + sizeof_wchar = 4 + + class ArraySubclass(array.array): pass @@ -1040,16 +1051,6 @@ minitemsize = 2 def test_unicode(self): - try: - import ctypes - sizeof_wchar = ctypes.sizeof(ctypes.c_wchar) - except ImportError: - import sys - if sys.platform == 'win32': - sizeof_wchar = 2 - else: - sizeof_wchar = 4 - self.assertRaises(TypeError, array.array, 'b', 'foo') a = array.array('u', '\xa0\xc2\u1234') @@ -1071,7 +1072,13 @@ def test_issue17223(self): # this used to crash - a = array.array('u', b'\xff' * 4) + if sizeof_wchar == 4: + # U+FFFFFFFF is an invalid code point in Unicode 6.0 + invalid_str = b'\xff\xff\xff\xff' + else: + # invalid UTF-16 surrogate pair + invalid_str = b'\xff\xdf\x61\x00' + a = array.array('u', invalid_str) self.assertRaises(ValueError, a.tounicode) self.assertRaises(ValueError, str, a) -- Repository URL: http://hg.python.org/cpython From root at python.org Wed Feb 27 02:01:25 2013 From: root at python.org (Cron Daemon) Date: Wed, 27 
Feb 2013 02:01:25 +0100 Subject: [Python-checkins] Cron /home/docs/build-devguide Message-ID: abort: error: Name or service not known From root at python.org Wed Feb 27 02:05:22 2013 From: root at python.org (Cron Daemon) Date: Wed, 27 Feb 2013 02:05:22 +0100 Subject: [Python-checkins] Cron /home/docs/build-devguide Message-ID: abort: error: Connection timed out From solipsis at pitrou.net Wed Feb 27 05:58:27 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Wed, 27 Feb 2013 05:58:27 +0100 Subject: [Python-checkins] Daily reference leaks (5aaf6bc1d502): sum=2 Message-ID: results for 5aaf6bc1d502 on branch "default" -------------------------------------------- test_unittest leaked [-1, 2, 1] memory blocks, sum=2 Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogSPVdMW', '-x'] From python-checkins at python.org Wed Feb 27 09:01:23 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 27 Feb 2013 09:01:23 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzE3MzAzOiB0ZXN0?= =?utf-8?q?=5Ffuture*_now_work_with_unittest_test_discovery=2E__Patch_by_Z?= =?utf-8?q?achary?= Message-ID: <3ZG8VC6v9xzSks@mail.python.org> http://hg.python.org/cpython/rev/83ae10bf608c changeset: 82409:83ae10bf608c branch: 3.3 parent: 82407:66e9d0185b0f user: Ezio Melotti date: Wed Feb 27 10:00:03 2013 +0200 summary: #17303: test_future* now work with unittest test discovery. Patch by Zachary Ware. 
files: Lib/test/test_future.py | 24 +++++++++++------------- Lib/test/test_future3.py | 6 +----- Lib/test/test_future4.py | 6 +----- Lib/test/test_future5.py | 4 ++-- Misc/NEWS | 3 +++ 5 files changed, 18 insertions(+), 25 deletions(-) diff --git a/Lib/test/test_future.py b/Lib/test/test_future.py --- a/Lib/test/test_future.py +++ b/Lib/test/test_future.py @@ -13,18 +13,18 @@ class FutureTest(unittest.TestCase): def test_future1(self): - support.unload('future_test1') - from test import future_test1 - self.assertEqual(future_test1.result, 6) + with support.CleanImport('future_test1'): + from test import future_test1 + self.assertEqual(future_test1.result, 6) def test_future2(self): - support.unload('future_test2') - from test import future_test2 - self.assertEqual(future_test2.result, 6) + with support.CleanImport('future_test2'): + from test import future_test2 + self.assertEqual(future_test2.result, 6) def test_future3(self): - support.unload('test_future3') - from test import test_future3 + with support.CleanImport('test_future3'): + from test import test_future3 def test_badfuture3(self): try: @@ -103,8 +103,8 @@ self.fail("syntax error didn't occur") def test_multiple_features(self): - support.unload("test.test_future5") - from test import test_future5 + with support.CleanImport("test.test_future5"): + from test import test_future5 def test_unicode_literals_exec(self): scope = {} @@ -112,8 +112,6 @@ self.assertIsInstance(scope["x"], str) -def test_main(): - support.run_unittest(FutureTest) if __name__ == "__main__": - test_main() + unittest.main() diff --git a/Lib/test/test_future3.py b/Lib/test/test_future3.py --- a/Lib/test/test_future3.py +++ b/Lib/test/test_future3.py @@ -2,7 +2,6 @@ from __future__ import division import unittest -from test import support x = 2 def nester(): @@ -23,8 +22,5 @@ def test_nested_scopes(self): self.assertEqual(nester(), 3) -def test_main(): - support.run_unittest(TestFuture) - if __name__ == "__main__": - test_main() + 
unittest.main() diff --git a/Lib/test/test_future4.py b/Lib/test/test_future4.py --- a/Lib/test/test_future4.py +++ b/Lib/test/test_future4.py @@ -1,10 +1,6 @@ from __future__ import unicode_literals import unittest -from test import support - -def test_main(): - pass if __name__ == "__main__": - test_main() + unittest.main() diff --git a/Lib/test/test_future5.py b/Lib/test/test_future5.py --- a/Lib/test/test_future5.py +++ b/Lib/test/test_future5.py @@ -17,5 +17,5 @@ self.assertEqual(s.getvalue(), "foo\n") -def test_main(): - support.run_unittest(TestMultipleFeatures) +if __name__ == '__main__': + unittest.main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -644,6 +644,9 @@ - Issue #15539: Added regression tests for Tools/scripts/pindent.py. +- Issue #17303: test_future* now work with unittest test discovery. + Patch by Zachary Ware. + - Issue #17163: test_file now works with unittest test discovery. Patch by Zachary Ware. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 09:01:25 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 27 Feb 2013 09:01:25 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MzAzOiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZG8VF2Y13zSlx@mail.python.org> http://hg.python.org/cpython/rev/5599bbc275bc changeset: 82410:5599bbc275bc parent: 82408:5aaf6bc1d502 parent: 82409:83ae10bf608c user: Ezio Melotti date: Wed Feb 27 10:01:06 2013 +0200 summary: #17303: merge with 3.3. 
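The conversion above replaces the `support.unload()` + re-import idiom with the `CleanImport` context manager from `test.support`, which gives each test a fresh import and restores module state afterwards. A rough sketch of what such a context manager does — a simplified stand-in, not the actual `test.support.CleanImport` implementation:

```python
import sys
from contextlib import contextmanager

@contextmanager
def clean_import(name):
    """Simplified stand-in for test.support.CleanImport: remove `name`
    from sys.modules so the next import re-executes the module, then
    restore the original entry (or drop the fresh one) on exit."""
    saved = sys.modules.pop(name, None)
    try:
        yield
    finally:
        if saved is not None:
            sys.modules[name] = saved
        else:
            sys.modules.pop(name, None)

# Each block re-executes the module, so the two objects differ:
with clean_import("json"):
    import json
    first = sys.modules["json"]
with clean_import("json"):
    import json
    second = sys.modules["json"]
```

Switching the `test_main()` entry points to plain `unittest.main()` is what lets `python -m unittest discover` pick these files up without a custom runner.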
files: Lib/test/test_future.py | 24 +++++++++++------------- Lib/test/test_future3.py | 6 +----- Lib/test/test_future4.py | 6 +----- Lib/test/test_future5.py | 4 ++-- Misc/NEWS | 3 +++ 5 files changed, 18 insertions(+), 25 deletions(-) diff --git a/Lib/test/test_future.py b/Lib/test/test_future.py --- a/Lib/test/test_future.py +++ b/Lib/test/test_future.py @@ -13,18 +13,18 @@ class FutureTest(unittest.TestCase): def test_future1(self): - support.unload('future_test1') - from test import future_test1 - self.assertEqual(future_test1.result, 6) + with support.CleanImport('future_test1'): + from test import future_test1 + self.assertEqual(future_test1.result, 6) def test_future2(self): - support.unload('future_test2') - from test import future_test2 - self.assertEqual(future_test2.result, 6) + with support.CleanImport('future_test2'): + from test import future_test2 + self.assertEqual(future_test2.result, 6) def test_future3(self): - support.unload('test_future3') - from test import test_future3 + with support.CleanImport('test_future3'): + from test import test_future3 def test_badfuture3(self): try: @@ -103,8 +103,8 @@ self.fail("syntax error didn't occur") def test_multiple_features(self): - support.unload("test.test_future5") - from test import test_future5 + with support.CleanImport("test.test_future5"): + from test import test_future5 def test_unicode_literals_exec(self): scope = {} @@ -112,8 +112,6 @@ self.assertIsInstance(scope["x"], str) -def test_main(): - support.run_unittest(FutureTest) if __name__ == "__main__": - test_main() + unittest.main() diff --git a/Lib/test/test_future3.py b/Lib/test/test_future3.py --- a/Lib/test/test_future3.py +++ b/Lib/test/test_future3.py @@ -2,7 +2,6 @@ from __future__ import division import unittest -from test import support x = 2 def nester(): @@ -23,8 +22,5 @@ def test_nested_scopes(self): self.assertEqual(nester(), 3) -def test_main(): - support.run_unittest(TestFuture) - if __name__ == "__main__": - test_main() + 
unittest.main() diff --git a/Lib/test/test_future4.py b/Lib/test/test_future4.py --- a/Lib/test/test_future4.py +++ b/Lib/test/test_future4.py @@ -1,10 +1,6 @@ from __future__ import unicode_literals import unittest -from test import support - -def test_main(): - pass if __name__ == "__main__": - test_main() + unittest.main() diff --git a/Lib/test/test_future5.py b/Lib/test/test_future5.py --- a/Lib/test/test_future5.py +++ b/Lib/test/test_future5.py @@ -17,5 +17,5 @@ self.assertEqual(s.getvalue(), "foo\n") -def test_main(): - support.run_unittest(TestMultipleFeatures) +if __name__ == '__main__': + unittest.main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -893,6 +893,9 @@ - Issue #16836: Enable IPv6 support even if IPv6 is disabled on the build host. +- Issue #17303: test_future* now work with unittest test discovery. + Patch by Zachary Ware. + - Issue #17163: test_file now works with unittest test discovery. Patch by Zachary Ware. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 09:10:03 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 27 Feb 2013 09:10:03 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4zKTogIzE3MzA0OiB0ZXN0?= =?utf-8?q?=5Fhash_now_works_with_unittest_test_discovery=2E__Patch_by_Zac?= =?utf-8?q?hary?= Message-ID: <3ZG8hC3PhTzSks@mail.python.org> http://hg.python.org/cpython/rev/619ed4ed7087 changeset: 82411:619ed4ed7087 branch: 3.3 parent: 82409:83ae10bf608c user: Ezio Melotti date: Wed Feb 27 10:09:12 2013 +0200 summary: #17304: test_hash now works with unittest test discovery. Patch by Zachary Ware. 
files: Lib/test/test_hash.py | 32 ++++++++++-------------------- Misc/NEWS | 3 ++ 2 files changed, 14 insertions(+), 21 deletions(-) diff --git a/Lib/test/test_hash.py b/Lib/test/test_hash.py --- a/Lib/test/test_hash.py +++ b/Lib/test/test_hash.py @@ -7,7 +7,6 @@ import os import sys import unittest -from test import support from test.script_helper import assert_python_ok from collections import Hashable @@ -133,7 +132,7 @@ for obj in self.hashes_to_check: self.assertEqual(hash(obj), _default_hash(obj)) -class HashRandomizationTests(unittest.TestCase): +class HashRandomizationTests: # Each subclass should define a field "repr_", containing the repr() of # an object to be tested @@ -190,19 +189,22 @@ h = -1024014457 self.assertEqual(self.get_hash(self.repr_, seed=42), h) -class StrHashRandomizationTests(StringlikeHashRandomizationTests): +class StrHashRandomizationTests(StringlikeHashRandomizationTests, + unittest.TestCase): repr_ = repr('abc') def test_empty_string(self): self.assertEqual(hash(""), 0) -class BytesHashRandomizationTests(StringlikeHashRandomizationTests): +class BytesHashRandomizationTests(StringlikeHashRandomizationTests, + unittest.TestCase): repr_ = repr(b'abc') def test_empty_string(self): self.assertEqual(hash(b""), 0) -class MemoryviewHashRandomizationTests(StringlikeHashRandomizationTests): +class MemoryviewHashRandomizationTests(StringlikeHashRandomizationTests, + unittest.TestCase): repr_ = "memoryview(b'abc')" def test_empty_string(self): @@ -212,27 +214,15 @@ def get_hash_command(self, repr_): return 'import datetime; print(hash(%s))' % repr_ -class DatetimeDateTests(DatetimeTests): +class DatetimeDateTests(DatetimeTests, unittest.TestCase): repr_ = repr(datetime.date(1066, 10, 14)) -class DatetimeDatetimeTests(DatetimeTests): +class DatetimeDatetimeTests(DatetimeTests, unittest.TestCase): repr_ = repr(datetime.datetime(1, 2, 3, 4, 5, 6, 7)) -class DatetimeTimeTests(DatetimeTests): +class DatetimeTimeTests(DatetimeTests, 
unittest.TestCase): repr_ = repr(datetime.time(0)) -def test_main(): - support.run_unittest(HashEqualityTestCase, - HashInheritanceTestCase, - HashBuiltinsTestCase, - StrHashRandomizationTests, - BytesHashRandomizationTests, - MemoryviewHashRandomizationTests, - DatetimeDateTests, - DatetimeDatetimeTests, - DatetimeTimeTests) - - if __name__ == "__main__": - test_main() + unittest.main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -644,6 +644,9 @@ - Issue #15539: Added regression tests for Tools/scripts/pindent.py. +- Issue #17304: test_hash now works with unittest test discovery. + Patch by Zachary Ware. + - Issue #17303: test_future* now work with unittest test discovery. Patch by Zachary Ware. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 09:10:04 2013 From: python-checkins at python.org (ezio.melotti) Date: Wed, 27 Feb 2013 09:10:04 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?b?KTogIzE3MzA0OiBtZXJnZSB3aXRoIDMuMy4=?= Message-ID: <3ZG8hD67HlzSlH@mail.python.org> http://hg.python.org/cpython/rev/bc4458493024 changeset: 82412:bc4458493024 parent: 82410:5599bbc275bc parent: 82411:619ed4ed7087 user: Ezio Melotti date: Wed Feb 27 10:09:46 2013 +0200 summary: #17304: merge with 3.3. 
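The fix above follows a common pattern for making parameterized tests discovery-friendly: the shared base class stops inheriting from `unittest.TestCase`, so the loader never tries to run the incomplete base, and each concrete class mixes `TestCase` back in. A minimal hedged sketch — the class and attribute names here are illustrative, not taken from `test_hash`:

```python
import unittest

class RoundTripTestsBase:
    # Shared test logic; NOT a TestCase, so unittest discovery
    # never collects this abstract base on its own.
    value = None  # concrete subclasses must provide this

    def test_roundtrip(self):
        self.assertEqual(eval(repr(self.value)), self.value)

class IntRoundTripTests(RoundTripTestsBase, unittest.TestCase):
    value = 42

class StrRoundTripTests(RoundTripTestsBase, unittest.TestCase):
    value = "abc"

# Only the two concrete classes are runnable test cases:
loader = unittest.TestLoader()
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(IntRoundTripTests),
    loader.loadTestsFromTestCase(StrRoundTripTests),
])
result = unittest.TestResult()
suite.run(result)
```

With the old `test_main()` runners removed, `unittest.main()` (or `python -m unittest discover`) finds exactly these concrete classes and skips the base.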
files: Lib/test/test_hash.py | 32 ++++++++++-------------------- Misc/NEWS | 3 ++ 2 files changed, 14 insertions(+), 21 deletions(-) diff --git a/Lib/test/test_hash.py b/Lib/test/test_hash.py --- a/Lib/test/test_hash.py +++ b/Lib/test/test_hash.py @@ -7,7 +7,6 @@ import os import sys import unittest -from test import support from test.script_helper import assert_python_ok from collections import Hashable @@ -133,7 +132,7 @@ for obj in self.hashes_to_check: self.assertEqual(hash(obj), _default_hash(obj)) -class HashRandomizationTests(unittest.TestCase): +class HashRandomizationTests: # Each subclass should define a field "repr_", containing the repr() of # an object to be tested @@ -190,19 +189,22 @@ h = -1024014457 self.assertEqual(self.get_hash(self.repr_, seed=42), h) -class StrHashRandomizationTests(StringlikeHashRandomizationTests): +class StrHashRandomizationTests(StringlikeHashRandomizationTests, + unittest.TestCase): repr_ = repr('abc') def test_empty_string(self): self.assertEqual(hash(""), 0) -class BytesHashRandomizationTests(StringlikeHashRandomizationTests): +class BytesHashRandomizationTests(StringlikeHashRandomizationTests, + unittest.TestCase): repr_ = repr(b'abc') def test_empty_string(self): self.assertEqual(hash(b""), 0) -class MemoryviewHashRandomizationTests(StringlikeHashRandomizationTests): +class MemoryviewHashRandomizationTests(StringlikeHashRandomizationTests, + unittest.TestCase): repr_ = "memoryview(b'abc')" def test_empty_string(self): @@ -212,27 +214,15 @@ def get_hash_command(self, repr_): return 'import datetime; print(hash(%s))' % repr_ -class DatetimeDateTests(DatetimeTests): +class DatetimeDateTests(DatetimeTests, unittest.TestCase): repr_ = repr(datetime.date(1066, 10, 14)) -class DatetimeDatetimeTests(DatetimeTests): +class DatetimeDatetimeTests(DatetimeTests, unittest.TestCase): repr_ = repr(datetime.datetime(1, 2, 3, 4, 5, 6, 7)) -class DatetimeTimeTests(DatetimeTests): +class DatetimeTimeTests(DatetimeTests, 
unittest.TestCase): repr_ = repr(datetime.time(0)) -def test_main(): - support.run_unittest(HashEqualityTestCase, - HashInheritanceTestCase, - HashBuiltinsTestCase, - StrHashRandomizationTests, - BytesHashRandomizationTests, - MemoryviewHashRandomizationTests, - DatetimeDateTests, - DatetimeDatetimeTests, - DatetimeTimeTests) - - if __name__ == "__main__": - test_main() + unittest.main() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -893,6 +893,9 @@ - Issue #16836: Enable IPv6 support even if IPv6 is disabled on the build host. +- Issue #17304: test_hash now works with unittest test discovery. + Patch by Zachary Ware. + - Issue #17303: test_future* now work with unittest test discovery. Patch by Zachary Ware. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 15:04:40 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 27 Feb 2013 15:04:40 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogIzE3Mjk2OiBiYWNr?= =?utf-8?q?port_fix_for_issue_1692335=2C_naive_exception_pickling=2E?= Message-ID: <3ZGJYN6yJjzP5Z@mail.python.org> http://hg.python.org/cpython/rev/2c9f7ed28384 changeset: 82413:2c9f7ed28384 branch: 3.2 parent: 82404:ae25a38e6c17 user: R David Murray date: Wed Feb 27 08:57:09 2013 -0500 summary: #17296: backport fix for issue 1692335, naive exception pickling. 
files: Lib/test/test_exceptions.py | 16 +++++++++++++++- Misc/NEWS | 3 +++ Objects/exceptions.c | 11 ++++++++++- 3 files changed, 28 insertions(+), 2 deletions(-) diff --git a/Lib/test/test_exceptions.py b/Lib/test/test_exceptions.py --- a/Lib/test/test_exceptions.py +++ b/Lib/test/test_exceptions.py @@ -10,6 +10,15 @@ from test.support import (TESTFN, captured_output, check_impl_detail, cpython_only, gc_collect, run_unittest, unlink) +class NaiveException(Exception): + def __init__(self, x): + self.x = x + +class SlottedNaiveException(Exception): + __slots__ = ('x',) + def __init__(self, x): + self.x = x + # XXX This is not really enough, each *operation* should be tested! class ExceptionTests(unittest.TestCase): @@ -272,6 +281,10 @@ {'args' : ('\u3042', 0, 1, 'ouch'), 'object' : '\u3042', 'reason' : 'ouch', 'start' : 0, 'end' : 1}), + (NaiveException, ('foo',), + {'args': ('foo',), 'x': 'foo'}), + (SlottedNaiveException, ('foo',), + {'args': ('foo',), 'x': 'foo'}), ] try: exceptionList.append( @@ -291,7 +304,8 @@ raise else: # Verify module name - self.assertEqual(type(e).__module__, 'builtins') + if not type(e).__name__.endswith('NaiveException'): + self.assertEqual(type(e).__module__, 'builtins') # Verify no ref leaks in Exc_str() s = str(e) for checkArgName in expected: diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -10,6 +10,9 @@ Core and Builtins ----------------- +- Issue #1692335: Move initial args assignment to + BaseException.__new__ to help pickling of naive subclasses. + - Issue #17275: Corrected class name in init error messages of the C version of BufferedWriter and BufferedRandom. 
diff --git a/Objects/exceptions.c b/Objects/exceptions.c --- a/Objects/exceptions.c +++ b/Objects/exceptions.c @@ -29,6 +29,12 @@ self->dict = NULL; self->traceback = self->cause = self->context = NULL; + if (args) { + self->args = args; + Py_INCREF(args); + return (PyObject *)self; + } + self->args = PyTuple_New(0); if (!self->args) { Py_DECREF(self); @@ -41,12 +47,15 @@ static int BaseException_init(PyBaseExceptionObject *self, PyObject *args, PyObject *kwds) { + PyObject *tmp; + if (!_PyArg_NoKeywords(Py_TYPE(self)->tp_name, kwds)) return -1; - Py_DECREF(self->args); + tmp = self->args; self->args = args; Py_INCREF(self->args); + Py_XDECREF(tmp); return 0; } -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 15:04:42 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 27 Feb 2013 15:04:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Null_merge_for_issue_1692335_backport=2E?= Message-ID: <3ZGJYQ2yt7zQ7J@mail.python.org> http://hg.python.org/cpython/rev/67c27421b00b changeset: 82414:67c27421b00b branch: 3.3 parent: 82411:619ed4ed7087 parent: 82413:2c9f7ed28384 user: R David Murray date: Wed Feb 27 09:02:49 2013 -0500 summary: Null merge for issue 1692335 backport. files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 15:04:43 2013 From: python-checkins at python.org (r.david.murray) Date: Wed, 27 Feb 2013 15:04:43 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Null_merge_for_issue_1692335_backport=2E?= Message-ID: <3ZGJYR5lWmzN68@mail.python.org> http://hg.python.org/cpython/rev/94f107752e83 changeset: 82415:94f107752e83 parent: 82412:bc4458493024 parent: 82414:67c27421b00b user: R David Murray date: Wed Feb 27 09:03:30 2013 -0500 summary: Null merge for issue 1692335 backport. 
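The `Objects/exceptions.c` change above moves the initial `args` assignment into `BaseException.__new__`, so that "naive" subclasses — ones that override `__init__` without calling `Exception.__init__` — still get `self.args` set and can be pickled. A small demonstration, which runs as-is on any interpreter carrying the fix:

```python
import pickle

class NaiveException(Exception):
    """A 'naive' subclass: overrides __init__ and never calls
    Exception.__init__, relying on __new__ to populate self.args."""
    def __init__(self, x):
        self.x = x

err = NaiveException("foo")
# Pickling round-trips because args was captured in __new__:
clone = pickle.loads(pickle.dumps(err))
```

Before the fix, `self.args` stayed empty for such a subclass, so unpickling called `NaiveException()` with no arguments and failed with a `TypeError` because `__init__` requires one.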
files: -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 18:06:33 2013 From: python-checkins at python.org (chris.jerdonek) Date: Wed, 27 Feb 2013 18:06:33 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Issue_=2317283=3A_Share_co?= =?utf-8?q?de_between_=5F=5Fmain=5F=5F=2Epy_and_regrtest=2Epy_in_Lib/test?= =?utf-8?q?=2E?= Message-ID: <3ZGNbF3B2rzNPy@mail.python.org> http://hg.python.org/cpython/rev/e0f3dcd30af8 changeset: 82416:e0f3dcd30af8 user: Chris Jerdonek date: Wed Feb 27 09:02:53 2013 -0800 summary: Issue #17283: Share code between __main__.py and regrtest.py in Lib/test. This commit also removes TESTCWD from regrtest.py's global namespace. files: Lib/test/__main__.py | 14 +---------- Lib/test/regrtest.py | 38 ++++++++++++++++--------------- Misc/NEWS | 3 ++ 3 files changed, 25 insertions(+), 30 deletions(-) diff --git a/Lib/test/__main__.py b/Lib/test/__main__.py --- a/Lib/test/__main__.py +++ b/Lib/test/__main__.py @@ -1,13 +1,3 @@ -from test import regrtest, support +from test import regrtest - -TEMPDIR, TESTCWD = regrtest._make_temp_dir_for_build(regrtest.TEMPDIR) -regrtest.TEMPDIR = TEMPDIR -regrtest.TESTCWD = TESTCWD - -# Run the tests in a context manager that temporary changes the CWD to a -# temporary and writable directory. If it's not possible to create or -# change the CWD, the original CWD will be used. The original CWD is -# available from support.SAVEDCWD. -with support.temp_cwd(TESTCWD, quiet=True): - regrtest.main() +regrtest.main_in_temp_cwd() diff --git a/Lib/test/regrtest.py b/Lib/test/regrtest.py --- a/Lib/test/regrtest.py +++ b/Lib/test/regrtest.py @@ -200,7 +200,14 @@ RESOURCE_NAMES = ('audio', 'curses', 'largefile', 'network', 'decimal', 'cpu', 'subprocess', 'urlfetch', 'gui') -TEMPDIR = os.path.abspath(tempfile.gettempdir()) +# When tests are run from the Python build directory, it is best practice +# to keep the test files in a subfolder. 
This eases the cleanup of leftover +# files using the "make distclean" command. +if sysconfig.is_python_build(): + TEMPDIR = os.path.join(sysconfig.get_config_var('srcdir'), 'build') +else: + TEMPDIR = tempfile.gettempdir() +TEMPDIR = os.path.abspath(TEMPDIR) class _ArgParser(argparse.ArgumentParser): @@ -1543,13 +1550,9 @@ initial_indent=blanks, subsequent_indent=blanks)) -def _make_temp_dir_for_build(TEMPDIR): - # When tests are run from the Python build directory, it is best practice - # to keep the test files in a subfolder. It eases the cleanup of leftover - # files using command "make distclean". +def main_in_temp_cwd(): + """Run main() in a temporary working directory.""" if sysconfig.is_python_build(): - TEMPDIR = os.path.join(sysconfig.get_config_var('srcdir'), 'build') - TEMPDIR = os.path.abspath(TEMPDIR) try: os.mkdir(TEMPDIR) except FileExistsError: @@ -1558,10 +1561,16 @@ # Define a writable temp dir that will be used as cwd while running # the tests. The name of the dir includes the pid to allow parallel # testing (see the -j option). - TESTCWD = 'test_python_{}'.format(os.getpid()) + test_cwd = 'test_python_{}'.format(os.getpid()) + test_cwd = os.path.join(TEMPDIR, test_cwd) - TESTCWD = os.path.join(TEMPDIR, TESTCWD) - return TEMPDIR, TESTCWD + # Run the tests in a context manager that temporarily changes the CWD to a + # temporary and writable directory. If it's not possible to create or + # change the CWD, the original CWD will be used. The original CWD is + # available from support.SAVEDCWD. + with support.temp_cwd(test_cwd, quiet=True): + main() + if __name__ == '__main__': # Remove regrtest.py's own directory from the module search path. Despite @@ -1585,11 +1594,4 @@ # sanity check assert __file__ == os.path.abspath(sys.argv[0]) - TEMPDIR, TESTCWD = _make_temp_dir_for_build(TEMPDIR) - - # Run the tests in a context manager that temporary changes the CWD to a - # temporary and writable directory. 
If it's not possible to create or - # change the CWD, the original CWD will be used. The original CWD is - # available from support.SAVEDCWD. - with support.temp_cwd(TESTCWD, quiet=True): - main() + main_in_temp_cwd() diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -878,6 +878,9 @@ Tests ----- +- Issue #17283: Share code between `__main__.py` and `regrtest.py` in + `Lib/test`. + - Issue #17249: convert a test in test_capi to use unittest and reap threads. - Issue #17107: Test client-side SNI support in urllib.request thanks to -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 19:05:37 2013 From: python-checkins at python.org (chris.jerdonek) Date: Wed, 27 Feb 2013 19:05:37 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMi43KTogSXNzdWUgIzE2NDA2?= =?utf-8?q?=3A_combine_the_doc_pages_for_uploading_and_registering_to_PyPI?= =?utf-8?q?=2E?= Message-ID: <3ZGPvP5lZJzPpv@mail.python.org> http://hg.python.org/cpython/rev/a9565750930e changeset: 82417:a9565750930e branch: 2.7 parent: 82403:eb45fd74db34 user: Chris Jerdonek date: Wed Feb 27 09:55:39 2013 -0800 summary: Issue #16406: combine the doc pages for uploading and registering to PyPI. files: Doc/distutils/index.rst | 1 - Doc/distutils/packageindex.rst | 123 +++++++++++++++++++- Doc/distutils/setupscript.rst | 5 +- Doc/distutils/uploading.rst | 79 +------------- Misc/NEWS | 2 + 5 files changed, 124 insertions(+), 86 deletions(-) diff --git a/Doc/distutils/index.rst b/Doc/distutils/index.rst --- a/Doc/distutils/index.rst +++ b/Doc/distutils/index.rst @@ -22,7 +22,6 @@ sourcedist.rst builtdist.rst packageindex.rst - uploading.rst examples.rst extending.rst commandref.rst diff --git a/Doc/distutils/packageindex.rst b/Doc/distutils/packageindex.rst --- a/Doc/distutils/packageindex.rst +++ b/Doc/distutils/packageindex.rst @@ -1,12 +1,33 @@ +.. index:: + single: Python Package Index (PyPI) + single: PyPI; (see Python Package Index (PyPI)) + .. 
_package-index: -********************************** -Registering with the Package Index -********************************** +******************************* +The Python Package Index (PyPI) +******************************* -The Python Package Index (PyPI) holds meta-data describing distributions -packaged with distutils. The distutils command :command:`register` is used to -submit your distribution's meta-data to the index. It is invoked as follows:: +The `Python Package Index (PyPI)`_ holds :ref:`meta-data ` +describing distributions packaged with distutils, as well as package data like +distribution files if the package author wishes. + +Distutils exposes two commands for submitting package data to PyPI: the +:ref:`register ` command for submitting meta-data to PyPI +and the :ref:`upload ` command for submitting distribution +files. Both commands read configuration data from a special file called the +:ref:`.pypirc file `. PyPI :ref:`displays a home page +` for each package created from the ``long_description`` +submitted by the :command:`register` command. + + +.. _package-register: + +Registering Packages +==================== + +The distutils command :command:`register` is used to submit your distribution's +meta-data to the index. It is invoked as follows:: python setup.py register @@ -48,6 +69,54 @@ versions to display and hide. +.. _package-upload: + +Uploading Packages +================== + +.. versionadded:: 2.5 + +The distutils command :command:`upload` pushes the distribution files to PyPI. + +The command is invoked immediately after building one or more distribution +files. For example, the command :: + + python setup.py sdist bdist_wininst upload + +will cause the source distribution and the Windows installer to be uploaded to +PyPI. 
Note that these will be uploaded even if they are built using an earlier +invocation of :file:`setup.py`, but that only distributions named on the command +line for the invocation including the :command:`upload` command are uploaded. + +The :command:`upload` command uses the username, password, and repository URL +from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this +file). If a :command:`register` command was previously called in the same command, +and if the password was entered in the prompt, :command:`upload` will reuse the +entered password. This is useful if you do not want to store a clear text +password in the :file:`$HOME/.pypirc` file. + +You can specify another PyPI server with the ``--repository=url`` option:: + + python setup.py sdist bdist_wininst upload -r http://example.com/pypi + +See section :ref:`pypirc` for more on defining several servers. + +You can use the ``--sign`` option to tell :command:`upload` to sign each +uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must +be available for execution on the system :envvar:`PATH`. You can also specify +which key to use for signing using the ``--identity=name`` option. + +Other :command:`upload` options include ``--repository=url`` or +``--repository=section`` where *url* is the url of the server and +*section* the name of the section in :file:`$HOME/.pypirc`, and +``--show-response`` (which displays the full response text from the PyPI +server for help in debugging upload problems). + + +.. index:: + single: .pypirc file + single: Python Package Index (PyPI); .pypirc file + .. _pypirc: The .pypirc file @@ -102,3 +171,45 @@ may also be used:: python setup.py register -r other + + +.. _package-display: + +PyPI package display +==================== + +The ``long_description`` field plays a special role at PyPI. It is used by +the server to display a home page for the registered package. 
+ +If you use the `reStructuredText `_ +syntax for this field, PyPI will parse it and display an HTML output for +the package home page. + +The ``long_description`` field can be attached to a text file located +in the package:: + + from distutils.core import setup + + with open('README.txt') as file: + long_description = file.read() + + setup(name='Distutils', + long_description=long_description) + +In that case, :file:`README.txt` is a regular reStructuredText text file located +in the root of the package besides :file:`setup.py`. + +To prevent registering broken reStructuredText content, you can use the +:program:`rst2html` program that is provided by the :mod:`docutils` package and +check the ``long_description`` from the command line:: + + $ python setup.py --long-description | rst2html.py > output.html + +:mod:`docutils` will display a warning if there's something wrong with your +syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` +to ``rst2html.py`` in the command above), being able to run the command above +without warnings does not guarantee that PyPI will convert the content +successfully. + + +.. _Python Package Index (PyPI): http://pypi.python.org/ diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -607,8 +607,9 @@ `_. (5) - The ``long_description`` field is used by PyPI when you are registering a - package, to build its home page. + The ``long_description`` field is used by PyPI when you are + :ref:`registering ` a package, to + :ref:`build its home page `. (6) The ``license`` field is a text indicating the license covering the diff --git a/Doc/distutils/uploading.rst b/Doc/distutils/uploading.rst --- a/Doc/distutils/uploading.rst +++ b/Doc/distutils/uploading.rst @@ -1,82 +1,7 @@ -.. 
_package-upload: +:orphan: *************************************** Uploading Packages to the Package Index *************************************** -.. versionadded:: 2.5 - -The Python Package Index (PyPI) not only stores the package info, but also the -package data if the author of the package wishes to. The distutils command -:command:`upload` pushes the distribution files to PyPI. - -The command is invoked immediately after building one or more distribution -files. For example, the command :: - - python setup.py sdist bdist_wininst upload - -will cause the source distribution and the Windows installer to be uploaded to -PyPI. Note that these will be uploaded even if they are built using an earlier -invocation of :file:`setup.py`, but that only distributions named on the command -line for the invocation including the :command:`upload` command are uploaded. - -The :command:`upload` command uses the username, password, and repository URL -from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this -file). If a :command:`register` command was previously called in the same command, -and if the password was entered in the prompt, :command:`upload` will reuse the -entered password. This is useful if you do not want to store a clear text -password in the :file:`$HOME/.pypirc` file. - -You can specify another PyPI server with the ``--repository=url`` option:: - - python setup.py sdist bdist_wininst upload -r http://example.com/pypi - -See section :ref:`pypirc` for more on defining several servers. - -You can use the ``--sign`` option to tell :command:`upload` to sign each -uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must -be available for execution on the system :envvar:`PATH`. You can also specify -which key to use for signing using the ``--identity=name`` option. 
- -Other :command:`upload` options include ``--repository=url`` or -``--repository=section`` where *url* is the url of the server and -*section* the name of the section in :file:`$HOME/.pypirc`, and -``--show-response`` (which displays the full response text from the PyPI -server for help in debugging upload problems). - -PyPI package display -==================== - -The ``long_description`` field plays a special role at PyPI. It is used by -the server to display a home page for the registered package. - -If you use the `reStructuredText `_ -syntax for this field, PyPI will parse it and display an HTML output for -the package home page. - -The ``long_description`` field can be attached to a text file located -in the package:: - - from distutils.core import setup - - with open('README.txt') as file: - long_description = file.read() - - setup(name='Distutils', - long_description=long_description) - -In that case, :file:`README.txt` is a regular reStructuredText text file located -in the root of the package besides :file:`setup.py`. - -To prevent registering broken reStructuredText content, you can use the -:program:`rst2html` program that is provided by the :mod:`docutils` package -and check the ``long_description`` from the command line:: - - $ python setup.py --long-description | rst2html.py > output.html - -:mod:`docutils` will display a warning if there's something wrong with your -syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` -to ``rst2html.py`` in the command above), being able to run the command above -without warnings does not guarantee that PyPI will convert the content -successfully. - +The contents of this page have moved to the section :ref:`package-index`. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -931,6 +931,8 @@ Documentation ------------- +- Issue #16406: combine the pages for uploading and registering to PyPI. + - Issue #16403: Document how distutils uses the maintainer field in PKG-INFO. 
Patch by Jyrki Pulliainen. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 19:05:39 2013 From: python-checkins at python.org (chris.jerdonek) Date: Wed, 27 Feb 2013 19:05:39 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAoMy4yKTogSXNzdWUgIzE2NDA2?= =?utf-8?q?=3A_Combine_the_doc_pages_for_uploading_and_registering_to_PyPI?= =?utf-8?q?=2E?= Message-ID: <3ZGPvR3NJ0zQ9b@mail.python.org> http://hg.python.org/cpython/rev/f57ddf3c3e5d changeset: 82418:f57ddf3c3e5d branch: 3.2 parent: 82413:2c9f7ed28384 user: Chris Jerdonek date: Wed Feb 27 10:00:20 2013 -0800 summary: Issue #16406: Combine the doc pages for uploading and registering to PyPI. files: Doc/distutils/index.rst | 1 - Doc/distutils/packageindex.rst | 121 +++++++++++++++++++- Doc/distutils/setupscript.rst | 5 +- Doc/distutils/uploading.rst | 77 +------------- Misc/NEWS | 2 + 5 files changed, 122 insertions(+), 84 deletions(-) diff --git a/Doc/distutils/index.rst b/Doc/distutils/index.rst --- a/Doc/distutils/index.rst +++ b/Doc/distutils/index.rst @@ -22,7 +22,6 @@ sourcedist.rst builtdist.rst packageindex.rst - uploading.rst examples.rst extending.rst commandref.rst diff --git a/Doc/distutils/packageindex.rst b/Doc/distutils/packageindex.rst --- a/Doc/distutils/packageindex.rst +++ b/Doc/distutils/packageindex.rst @@ -1,12 +1,33 @@ +.. index:: + single: Python Package Index (PyPI) + single: PyPI; (see Python Package Index (PyPI)) + .. _package-index: -********************************** -Registering with the Package Index -********************************** +******************************* +The Python Package Index (PyPI) +******************************* -The Python Package Index (PyPI) holds meta-data describing distributions -packaged with distutils. The distutils command :command:`register` is used to -submit your distribution's meta-data to the index. 
It is invoked as follows:: +The `Python Package Index (PyPI)`_ holds :ref:`meta-data ` +describing distributions packaged with distutils, as well as package data like +distribution files if the package author wishes. + +Distutils exposes two commands for submitting package data to PyPI: the +:ref:`register ` command for submitting meta-data to PyPI +and the :ref:`upload ` command for submitting distribution +files. Both commands read configuration data from a special file called the +:ref:`.pypirc file `. PyPI :ref:`displays a home page +` for each package created from the ``long_description`` +submitted by the :command:`register` command. + + +.. _package-register: + +Registering Packages +==================== + +The distutils command :command:`register` is used to submit your distribution's +meta-data to the index. It is invoked as follows:: python setup.py register @@ -48,6 +69,52 @@ versions to display and hide. +.. _package-upload: + +Uploading Packages +================== + +The distutils command :command:`upload` pushes the distribution files to PyPI. + +The command is invoked immediately after building one or more distribution +files. For example, the command :: + + python setup.py sdist bdist_wininst upload + +will cause the source distribution and the Windows installer to be uploaded to +PyPI. Note that these will be uploaded even if they are built using an earlier +invocation of :file:`setup.py`, but that only distributions named on the command +line for the invocation including the :command:`upload` command are uploaded. + +The :command:`upload` command uses the username, password, and repository URL +from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this +file). If a :command:`register` command was previously called in the same command, +and if the password was entered in the prompt, :command:`upload` will reuse the +entered password. 
This is useful if you do not want to store a clear text +password in the :file:`$HOME/.pypirc` file. + +You can specify another PyPI server with the ``--repository=url`` option:: + + python setup.py sdist bdist_wininst upload -r http://example.com/pypi + +See section :ref:`pypirc` for more on defining several servers. + +You can use the ``--sign`` option to tell :command:`upload` to sign each +uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must +be available for execution on the system :envvar:`PATH`. You can also specify +which key to use for signing using the ``--identity=name`` option. + +Other :command:`upload` options include ``--repository=url`` or +``--repository=section`` where *url* is the url of the server and +*section* the name of the section in :file:`$HOME/.pypirc`, and +``--show-response`` (which displays the full response text from the PyPI +server for help in debugging upload problems). + + +.. index:: + single: .pypirc file + single: Python Package Index (PyPI); .pypirc file + .. _pypirc: The .pypirc file @@ -102,3 +169,45 @@ may also be used:: python setup.py register -r other + + +.. _package-display: + +PyPI package display +==================== + +The ``long_description`` field plays a special role at PyPI. It is used by +the server to display a home page for the registered package. + +If you use the `reStructuredText `_ +syntax for this field, PyPI will parse it and display an HTML output for +the package home page. + +The ``long_description`` field can be attached to a text file located +in the package:: + + from distutils.core import setup + + with open('README.txt') as file: + long_description = file.read() + + setup(name='Distutils', + long_description=long_description) + +In that case, :file:`README.txt` is a regular reStructuredText text file located +in the root of the package besides :file:`setup.py`. 
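The :file:`.pypirc` file described above is plain INI syntax, so its documented layout (a ``[distutils]`` section whose ``index-servers`` option names per-server sections) can be illustrated with the standard library. The sketch below is *not* distutils' actual reader; the file content, usernames, and URL are made-up placeholders:

```python
# Simplified sketch of reading a .pypirc-style file (NOT the actual
# distutils implementation).  The inline file content is a placeholder.
import configparser

PYPIRC = """\
[distutils]
index-servers =
    pypi
    other

[pypi]
username: myusername
password: mypassword

[other]
repository: http://example.com/pypi
username: myusername
password: mypassword
"""

config = configparser.ConfigParser()
config.read_string(PYPIRC)

# index-servers is a whitespace-separated list of section names.
servers = config.get("distutils", "index-servers").split()
for name in servers:
    # Each listed server has its own section with credentials and,
    # optionally, a repository URL overriding the default.
    url = config.get(name, "repository", fallback="(default repository)")
    print(name, url)
```

This mirrors why ``setup.py register -r other`` can select a server by section name: each entry under ``index-servers`` maps to its own INI section.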
+ +To prevent registering broken reStructuredText content, you can use the +:program:`rst2html` program that is provided by the :mod:`docutils` package and +check the ``long_description`` from the command line:: + + $ python setup.py --long-description | rst2html.py > output.html + +:mod:`docutils` will display a warning if there's something wrong with your +syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` +to ``rst2html.py`` in the command above), being able to run the command above +without warnings does not guarantee that PyPI will convert the content +successfully. + + +.. _Python Package Index (PyPI): http://pypi.python.org/ diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -610,8 +610,9 @@ `_. (5) - The ``long_description`` field is used by PyPI when you are registering a - package, to build its home page. + The ``long_description`` field is used by PyPI when you are + :ref:`registering ` a package, to + :ref:`build its home page `. (6) The ``license`` field is a text indicating the license covering the diff --git a/Doc/distutils/uploading.rst b/Doc/distutils/uploading.rst --- a/Doc/distutils/uploading.rst +++ b/Doc/distutils/uploading.rst @@ -1,80 +1,7 @@ -.. _package-upload: +:orphan: *************************************** Uploading Packages to the Package Index *************************************** -The Python Package Index (PyPI) not only stores the package info, but also the -package data if the author of the package wishes to. The distutils command -:command:`upload` pushes the distribution files to PyPI. - -The command is invoked immediately after building one or more distribution -files. For example, the command :: - - python setup.py sdist bdist_wininst upload - -will cause the source distribution and the Windows installer to be uploaded to -PyPI. 
Note that these will be uploaded even if they are built using an earlier -invocation of :file:`setup.py`, but that only distributions named on the command -line for the invocation including the :command:`upload` command are uploaded. - -The :command:`upload` command uses the username, password, and repository URL -from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this -file). If a :command:`register` command was previously called in the same command, -and if the password was entered in the prompt, :command:`upload` will reuse the -entered password. This is useful if you do not want to store a clear text -password in the :file:`$HOME/.pypirc` file. - -You can specify another PyPI server with the ``--repository=url`` option:: - - python setup.py sdist bdist_wininst upload -r http://example.com/pypi - -See section :ref:`pypirc` for more on defining several servers. - -You can use the ``--sign`` option to tell :command:`upload` to sign each -uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must -be available for execution on the system :envvar:`PATH`. You can also specify -which key to use for signing using the ``--identity=name`` option. - -Other :command:`upload` options include ``--repository=url`` or -``--repository=section`` where *url* is the url of the server and -*section* the name of the section in :file:`$HOME/.pypirc`, and -``--show-response`` (which displays the full response text from the PyPI -server for help in debugging upload problems). - -PyPI package display -==================== - -The ``long_description`` field plays a special role at PyPI. It is used by -the server to display a home page for the registered package. - -If you use the `reStructuredText `_ -syntax for this field, PyPI will parse it and display an HTML output for -the package home page. 
- -The ``long_description`` field can be attached to a text file located -in the package:: - - from distutils.core import setup - - with open('README.txt') as file: - long_description = file.read() - - setup(name='Distutils', - long_description=long_description) - -In that case, :file:`README.txt` is a regular reStructuredText text file located -in the root of the package besides :file:`setup.py`. - -To prevent registering broken reStructuredText content, you can use the -:program:`rst2html` program that is provided by the :mod:`docutils` package and -check the ``long_description`` from the command line:: - - $ python setup.py --long-description | rst2html.py > output.html - -:mod:`docutils` will display a warning if there's something wrong with your -syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` -to ``rst2html.py`` in the command above), being able to run the command above -without warnings does not guarantee that PyPI will convert the content -successfully. - +The contents of this page have moved to the section :ref:`package-index`. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1086,6 +1086,8 @@ Documentation ------------- +- Issue #16406: Combine the pages for uploading and registering to PyPI. + - Issue #16403: Document how distutils uses the maintainer field in PKG-INFO. Patch by Jyrki Pulliainen. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 19:05:41 2013 From: python-checkins at python.org (chris.jerdonek) Date: Wed, 27 Feb 2013 19:05:41 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Issue_=2316406=3A_Combine_the_doc_pages_for_uploading_and_regi?= =?utf-8?q?stering_to_PyPI=2E?= Message-ID: <3ZGPvT0ND9zQ4W@mail.python.org> http://hg.python.org/cpython/rev/58a28aa70fec changeset: 82419:58a28aa70fec branch: 3.3 parent: 82414:67c27421b00b parent: 82418:f57ddf3c3e5d user: Chris Jerdonek date: Wed Feb 27 10:03:26 2013 -0800 summary: Issue #16406: Combine the doc pages for uploading and registering to PyPI. files: Doc/distutils/index.rst | 1 - Doc/distutils/packageindex.rst | 121 +++++++++++++++++++- Doc/distutils/setupscript.rst | 5 +- Doc/distutils/uploading.rst | 77 +------------- Misc/NEWS | 2 + 5 files changed, 122 insertions(+), 84 deletions(-) diff --git a/Doc/distutils/index.rst b/Doc/distutils/index.rst --- a/Doc/distutils/index.rst +++ b/Doc/distutils/index.rst @@ -22,7 +22,6 @@ sourcedist.rst builtdist.rst packageindex.rst - uploading.rst examples.rst extending.rst commandref.rst diff --git a/Doc/distutils/packageindex.rst b/Doc/distutils/packageindex.rst --- a/Doc/distutils/packageindex.rst +++ b/Doc/distutils/packageindex.rst @@ -1,12 +1,33 @@ +.. index:: + single: Python Package Index (PyPI) + single: PyPI; (see Python Package Index (PyPI)) + .. _package-index: -********************************** -Registering with the Package Index -********************************** +******************************* +The Python Package Index (PyPI) +******************************* -The Python Package Index (PyPI) holds meta-data describing distributions -packaged with distutils. The distutils command :command:`register` is used to -submit your distribution's meta-data to the index. 
It is invoked as follows:: +The `Python Package Index (PyPI)`_ holds :ref:`meta-data ` +describing distributions packaged with distutils, as well as package data like +distribution files if the package author wishes. + +Distutils exposes two commands for submitting package data to PyPI: the +:ref:`register ` command for submitting meta-data to PyPI +and the :ref:`upload ` command for submitting distribution +files. Both commands read configuration data from a special file called the +:ref:`.pypirc file `. PyPI :ref:`displays a home page +` for each package created from the ``long_description`` +submitted by the :command:`register` command. + + +.. _package-register: + +Registering Packages +==================== + +The distutils command :command:`register` is used to submit your distribution's +meta-data to the index. It is invoked as follows:: python setup.py register @@ -48,6 +69,52 @@ versions to display and hide. +.. _package-upload: + +Uploading Packages +================== + +The distutils command :command:`upload` pushes the distribution files to PyPI. + +The command is invoked immediately after building one or more distribution +files. For example, the command :: + + python setup.py sdist bdist_wininst upload + +will cause the source distribution and the Windows installer to be uploaded to +PyPI. Note that these will be uploaded even if they are built using an earlier +invocation of :file:`setup.py`, but that only distributions named on the command +line for the invocation including the :command:`upload` command are uploaded. + +The :command:`upload` command uses the username, password, and repository URL +from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this +file). If a :command:`register` command was previously called in the same command, +and if the password was entered in the prompt, :command:`upload` will reuse the +entered password. 
This is useful if you do not want to store a clear text +password in the :file:`$HOME/.pypirc` file. + +You can specify another PyPI server with the ``--repository=url`` option:: + + python setup.py sdist bdist_wininst upload -r http://example.com/pypi + +See section :ref:`pypirc` for more on defining several servers. + +You can use the ``--sign`` option to tell :command:`upload` to sign each +uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must +be available for execution on the system :envvar:`PATH`. You can also specify +which key to use for signing using the ``--identity=name`` option. + +Other :command:`upload` options include ``--repository=url`` or +``--repository=section`` where *url* is the url of the server and +*section* the name of the section in :file:`$HOME/.pypirc`, and +``--show-response`` (which displays the full response text from the PyPI +server for help in debugging upload problems). + + +.. index:: + single: .pypirc file + single: Python Package Index (PyPI); .pypirc file + .. _pypirc: The .pypirc file @@ -102,3 +169,45 @@ may also be used:: python setup.py register -r other + + +.. _package-display: + +PyPI package display +==================== + +The ``long_description`` field plays a special role at PyPI. It is used by +the server to display a home page for the registered package. + +If you use the `reStructuredText `_ +syntax for this field, PyPI will parse it and display an HTML output for +the package home page. + +The ``long_description`` field can be attached to a text file located +in the package:: + + from distutils.core import setup + + with open('README.txt') as file: + long_description = file.read() + + setup(name='Distutils', + long_description=long_description) + +In that case, :file:`README.txt` is a regular reStructuredText text file located +in the root of the package besides :file:`setup.py`. 
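The open/read pattern just shown assumes :file:`README.txt` is present next to :file:`setup.py`. A slightly more defensive variant, purely illustrative (the file name and fallback string are assumptions, not part of the documented pattern):

```python
# Illustrative variant of the documented README-reading pattern, with a
# fallback when the file is missing.  "README.txt" is an assumed name.
import io
import os

def read_long_description(path="README.txt"):
    """Return the README text, or a short placeholder if it is absent."""
    if not os.path.exists(path):
        return "Long description not available."
    with io.open(path, encoding="utf-8") as f:
        return f.read()

print(read_long_description("no-such-file.txt"))
```

The result would be passed to ``setup(long_description=...)`` exactly as in the example above.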
+ +To prevent registering broken reStructuredText content, you can use the +:program:`rst2html` program that is provided by the :mod:`docutils` package and +check the ``long_description`` from the command line:: + + $ python setup.py --long-description | rst2html.py > output.html + +:mod:`docutils` will display a warning if there's something wrong with your +syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` +to ``rst2html.py`` in the command above), being able to run the command above +without warnings does not guarantee that PyPI will convert the content +successfully. + + +.. _Python Package Index (PyPI): http://pypi.python.org/ diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -610,8 +610,9 @@ `_. (5) - The ``long_description`` field is used by PyPI when you are registering a - package, to build its home page. + The ``long_description`` field is used by PyPI when you are + :ref:`registering ` a package, to + :ref:`build its home page `. (6) The ``license`` field is a text indicating the license covering the diff --git a/Doc/distutils/uploading.rst b/Doc/distutils/uploading.rst --- a/Doc/distutils/uploading.rst +++ b/Doc/distutils/uploading.rst @@ -1,80 +1,7 @@ -.. _package-upload: +:orphan: *************************************** Uploading Packages to the Package Index *************************************** -The Python Package Index (PyPI) not only stores the package info, but also the -package data if the author of the package wishes to. The distutils command -:command:`upload` pushes the distribution files to PyPI. - -The command is invoked immediately after building one or more distribution -files. For example, the command :: - - python setup.py sdist bdist_wininst upload - -will cause the source distribution and the Windows installer to be uploaded to -PyPI. 
Note that these will be uploaded even if they are built using an earlier -invocation of :file:`setup.py`, but that only distributions named on the command -line for the invocation including the :command:`upload` command are uploaded. - -The :command:`upload` command uses the username, password, and repository URL -from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this -file). If a :command:`register` command was previously called in the same command, -and if the password was entered in the prompt, :command:`upload` will reuse the -entered password. This is useful if you do not want to store a clear text -password in the :file:`$HOME/.pypirc` file. - -You can specify another PyPI server with the ``--repository=url`` option:: - - python setup.py sdist bdist_wininst upload -r http://example.com/pypi - -See section :ref:`pypirc` for more on defining several servers. - -You can use the ``--sign`` option to tell :command:`upload` to sign each -uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must -be available for execution on the system :envvar:`PATH`. You can also specify -which key to use for signing using the ``--identity=name`` option. - -Other :command:`upload` options include ``--repository=url`` or -``--repository=section`` where *url* is the url of the server and -*section* the name of the section in :file:`$HOME/.pypirc`, and -``--show-response`` (which displays the full response text from the PyPI -server for help in debugging upload problems). - -PyPI package display -==================== - -The ``long_description`` field plays a special role at PyPI. It is used by -the server to display a home page for the registered package. - -If you use the `reStructuredText `_ -syntax for this field, PyPI will parse it and display an HTML output for -the package home page. 
- -The ``long_description`` field can be attached to a text file located -in the package:: - - from distutils.core import setup - - with open('README.txt') as file: - long_description = file.read() - - setup(name='Distutils', - long_description=long_description) - -In that case, :file:`README.txt` is a regular reStructuredText text file located -in the root of the package besides :file:`setup.py`. - -To prevent registering broken reStructuredText content, you can use the -:program:`rst2html` program that is provided by the :mod:`docutils` package and -check the ``long_description`` from the command line:: - - $ python setup.py --long-description | rst2html.py > output.html - -:mod:`docutils` will display a warning if there's something wrong with your -syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` -to ``rst2html.py`` in the command above), being able to run the command above -without warnings does not guarantee that PyPI will convert the content -successfully. - +The contents of this page have moved to the section :ref:`package-index`. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -774,6 +774,8 @@ Documentation ------------- +- Issue #16406: Combine the pages for uploading and registering to PyPI. + - Issue #16403: Document how distutils uses the maintainer field in PKG-INFO. Patch by Jyrki Pulliainen. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Wed Feb 27 19:05:42 2013 From: python-checkins at python.org (chris.jerdonek) Date: Wed, 27 Feb 2013 19:05:42 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Issue_=2316406=3A_Combine_the_doc_pages_for_uploading_an?= =?utf-8?q?d_registering_to_PyPI=2E?= Message-ID: <3ZGPvV4qsxzQGS@mail.python.org> http://hg.python.org/cpython/rev/44ebac378e51 changeset: 82420:44ebac378e51 parent: 82416:e0f3dcd30af8 parent: 82419:58a28aa70fec user: Chris Jerdonek date: Wed Feb 27 10:04:23 2013 -0800 summary: Issue #16406: Combine the doc pages for uploading and registering to PyPI. files: Doc/distutils/index.rst | 1 - Doc/distutils/packageindex.rst | 121 +++++++++++++++++++- Doc/distutils/setupscript.rst | 5 +- Doc/distutils/uploading.rst | 77 +------------- Misc/NEWS | 2 + 5 files changed, 122 insertions(+), 84 deletions(-) diff --git a/Doc/distutils/index.rst b/Doc/distutils/index.rst --- a/Doc/distutils/index.rst +++ b/Doc/distutils/index.rst @@ -22,7 +22,6 @@ sourcedist.rst builtdist.rst packageindex.rst - uploading.rst examples.rst extending.rst commandref.rst diff --git a/Doc/distutils/packageindex.rst b/Doc/distutils/packageindex.rst --- a/Doc/distutils/packageindex.rst +++ b/Doc/distutils/packageindex.rst @@ -1,12 +1,33 @@ +.. index:: + single: Python Package Index (PyPI) + single: PyPI; (see Python Package Index (PyPI)) + .. _package-index: -********************************** -Registering with the Package Index -********************************** +******************************* +The Python Package Index (PyPI) +******************************* -The Python Package Index (PyPI) holds meta-data describing distributions -packaged with distutils. The distutils command :command:`register` is used to -submit your distribution's meta-data to the index. 
It is invoked as follows:: +The `Python Package Index (PyPI)`_ holds :ref:`meta-data ` +describing distributions packaged with distutils, as well as package data like +distribution files if the package author wishes. + +Distutils exposes two commands for submitting package data to PyPI: the +:ref:`register ` command for submitting meta-data to PyPI +and the :ref:`upload ` command for submitting distribution +files. Both commands read configuration data from a special file called the +:ref:`.pypirc file `. PyPI :ref:`displays a home page +` for each package created from the ``long_description`` +submitted by the :command:`register` command. + + +.. _package-register: + +Registering Packages +==================== + +The distutils command :command:`register` is used to submit your distribution's +meta-data to the index. It is invoked as follows:: python setup.py register @@ -48,6 +69,52 @@ versions to display and hide. +.. _package-upload: + +Uploading Packages +================== + +The distutils command :command:`upload` pushes the distribution files to PyPI. + +The command is invoked immediately after building one or more distribution +files. For example, the command :: + + python setup.py sdist bdist_wininst upload + +will cause the source distribution and the Windows installer to be uploaded to +PyPI. Note that these will be uploaded even if they are built using an earlier +invocation of :file:`setup.py`, but that only distributions named on the command +line for the invocation including the :command:`upload` command are uploaded. + +The :command:`upload` command uses the username, password, and repository URL +from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this +file). If a :command:`register` command was previously called in the same command, +and if the password was entered in the prompt, :command:`upload` will reuse the +entered password. 
This is useful if you do not want to store a clear text +password in the :file:`$HOME/.pypirc` file. + +You can specify another PyPI server with the ``--repository=url`` option:: + + python setup.py sdist bdist_wininst upload -r http://example.com/pypi + +See section :ref:`pypirc` for more on defining several servers. + +You can use the ``--sign`` option to tell :command:`upload` to sign each +uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must +be available for execution on the system :envvar:`PATH`. You can also specify +which key to use for signing using the ``--identity=name`` option. + +Other :command:`upload` options include ``--repository=url`` or +``--repository=section`` where *url* is the url of the server and +*section* the name of the section in :file:`$HOME/.pypirc`, and +``--show-response`` (which displays the full response text from the PyPI +server for help in debugging upload problems). + + +.. index:: + single: .pypirc file + single: Python Package Index (PyPI); .pypirc file + .. _pypirc: The .pypirc file @@ -102,3 +169,45 @@ may also be used:: python setup.py register -r other + + +.. _package-display: + +PyPI package display +==================== + +The ``long_description`` field plays a special role at PyPI. It is used by +the server to display a home page for the registered package. + +If you use the `reStructuredText `_ +syntax for this field, PyPI will parse it and display an HTML output for +the package home page. + +The ``long_description`` field can be attached to a text file located +in the package:: + + from distutils.core import setup + + with open('README.txt') as file: + long_description = file.read() + + setup(name='Distutils', + long_description=long_description) + +In that case, :file:`README.txt` is a regular reStructuredText text file located +in the root of the package besides :file:`setup.py`. 
+ +To prevent registering broken reStructuredText content, you can use the +:program:`rst2html` program that is provided by the :mod:`docutils` package and +check the ``long_description`` from the command line:: + + $ python setup.py --long-description | rst2html.py > output.html + +:mod:`docutils` will display a warning if there's something wrong with your +syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` +to ``rst2html.py`` in the command above), being able to run the command above +without warnings does not guarantee that PyPI will convert the content +successfully. + + +.. _Python Package Index (PyPI): http://pypi.python.org/ diff --git a/Doc/distutils/setupscript.rst b/Doc/distutils/setupscript.rst --- a/Doc/distutils/setupscript.rst +++ b/Doc/distutils/setupscript.rst @@ -610,8 +610,9 @@ `_. (5) - The ``long_description`` field is used by PyPI when you are registering a - package, to build its home page. + The ``long_description`` field is used by PyPI when you are + :ref:`registering ` a package, to + :ref:`build its home page `. (6) The ``license`` field is a text indicating the license covering the diff --git a/Doc/distutils/uploading.rst b/Doc/distutils/uploading.rst --- a/Doc/distutils/uploading.rst +++ b/Doc/distutils/uploading.rst @@ -1,80 +1,7 @@ -.. _package-upload: +:orphan: *************************************** Uploading Packages to the Package Index *************************************** -The Python Package Index (PyPI) not only stores the package info, but also the -package data if the author of the package wishes to. The distutils command -:command:`upload` pushes the distribution files to PyPI. - -The command is invoked immediately after building one or more distribution -files. For example, the command :: - - python setup.py sdist bdist_wininst upload - -will cause the source distribution and the Windows installer to be uploaded to -PyPI. 
Note that these will be uploaded even if they are built using an earlier -invocation of :file:`setup.py`, but that only distributions named on the command -line for the invocation including the :command:`upload` command are uploaded. - -The :command:`upload` command uses the username, password, and repository URL -from the :file:`$HOME/.pypirc` file (see section :ref:`pypirc` for more on this -file). If a :command:`register` command was previously called in the same command, -and if the password was entered in the prompt, :command:`upload` will reuse the -entered password. This is useful if you do not want to store a clear text -password in the :file:`$HOME/.pypirc` file. - -You can specify another PyPI server with the ``--repository=url`` option:: - - python setup.py sdist bdist_wininst upload -r http://example.com/pypi - -See section :ref:`pypirc` for more on defining several servers. - -You can use the ``--sign`` option to tell :command:`upload` to sign each -uploaded file using GPG (GNU Privacy Guard). The :program:`gpg` program must -be available for execution on the system :envvar:`PATH`. You can also specify -which key to use for signing using the ``--identity=name`` option. - -Other :command:`upload` options include ``--repository=url`` or -``--repository=section`` where *url* is the url of the server and -*section* the name of the section in :file:`$HOME/.pypirc`, and -``--show-response`` (which displays the full response text from the PyPI -server for help in debugging upload problems). - -PyPI package display -==================== - -The ``long_description`` field plays a special role at PyPI. It is used by -the server to display a home page for the registered package. - -If you use the `reStructuredText `_ -syntax for this field, PyPI will parse it and display an HTML output for -the package home page. 
- -The ``long_description`` field can be attached to a text file located -in the package:: - - from distutils.core import setup - - with open('README.txt') as file: - long_description = file.read() - - setup(name='Distutils', - long_description=long_description) - -In that case, :file:`README.txt` is a regular reStructuredText text file located -in the root of the package besides :file:`setup.py`. - -To prevent registering broken reStructuredText content, you can use the -:program:`rst2html` program that is provided by the :mod:`docutils` package and -check the ``long_description`` from the command line:: - - $ python setup.py --long-description | rst2html.py > output.html - -:mod:`docutils` will display a warning if there's something wrong with your -syntax. Because PyPI applies additional checks (e.g. by passing ``--no-raw`` -to ``rst2html.py`` in the command above), being able to run the command above -without warnings does not guarantee that PyPI will convert the content -successfully. - +The contents of this page have moved to the section :ref:`package-index`. diff --git a/Misc/NEWS b/Misc/NEWS --- a/Misc/NEWS +++ b/Misc/NEWS @@ -1036,6 +1036,8 @@ Documentation ------------- +- Issue #16406: Combine the pages for uploading and registering to PyPI. + - Issue #16403: Document how distutils uses the maintainer field in PKG-INFO. Patch by Jyrki Pulliainen. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 00:34:06 2013 From: python-checkins at python.org (vinay.sajip) Date: Thu, 28 Feb 2013 00:34:06 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython=3A_Closes_=2317313=3A_Deleted?= =?utf-8?q?_test_file_created_by_test=5Flogging=2E?= Message-ID: <3ZGYBQ10tjzNbr@mail.python.org> http://hg.python.org/cpython/rev/b7f5bff33c22 changeset: 82421:b7f5bff33c22 user: Vinay Sajip date: Wed Feb 27 23:33:52 2013 +0000 summary: Closes #17313: Deleted test file created by test_logging. 
files: Lib/test/test_logging.py | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/Lib/test/test_logging.py b/Lib/test/test_logging.py --- a/Lib/test/test_logging.py +++ b/Lib/test/test_logging.py @@ -3406,6 +3406,7 @@ self.addCleanup(expected.close) self.assertEqual(handler.stream.mode, expected.stream.mode) self.assertEqual(handler.stream.name, expected.stream.name) + self.addCleanup(os.remove, 'test.log') def test_filemode(self): logging.basicConfig(filename='test.log', filemode='wb') @@ -3414,6 +3415,7 @@ expected = logging.FileHandler('test.log', 'wb') self.addCleanup(expected.close) self.assertEqual(handler.stream.mode, expected.stream.mode) + self.addCleanup(os.remove, 'test.log') def test_stream(self): stream = io.StringIO() -- Repository URL: http://hg.python.org/cpython From solipsis at pitrou.net Thu Feb 28 06:01:57 2013 From: solipsis at pitrou.net (solipsis at pitrou.net) Date: Thu, 28 Feb 2013 06:01:57 +0100 Subject: [Python-checkins] Daily reference leaks (b7f5bff33c22): sum=0 Message-ID: results for b7f5bff33c22 on branch "default" -------------------------------------------- Command line was: ['./python', '-m', 'test.regrtest', '-uall', '-R', '3:3:/home/antoine/cpython/refleaks/reflogNsK0_4', '-x'] From python-checkins at python.org Thu Feb 28 07:31:51 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 07:31:51 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Fix_markup_in_?= =?utf-8?q?unittest_docs=2E?= Message-ID: <3ZGkSR1h10zRdf@mail.python.org> http://hg.python.org/cpython/rev/f0a3ceef7a04 changeset: 82422:f0a3ceef7a04 branch: 2.7 parent: 82417:a9565750930e user: Ezio Melotti date: Thu Feb 28 08:28:11 2013 +0200 summary: Fix markup in unittest docs. 
files: Doc/library/unittest.rst | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -1868,10 +1868,10 @@ .. note:: - The default pattern is 'test*.py'. This matches all Python files - that start with 'test' but *won't* match any test directories. - - A pattern like 'test*' will match test packages as well as + The default pattern is ``'test*.py'``. This matches all Python files + that start with ``'test'`` but *won't* match any test directories. + + A pattern like ``'test*'`` will match test packages as well as modules. If the package :file:`__init__.py` defines ``load_tests`` then it will be -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 07:31:52 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 07:31:52 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Fix_markup_in_?= =?utf-8?q?unittest_docs=2E?= Message-ID: <3ZGkSS4cQLzRdq@mail.python.org> http://hg.python.org/cpython/rev/6c5e991aa95a changeset: 82423:6c5e991aa95a branch: 3.2 parent: 82418:f57ddf3c3e5d user: Ezio Melotti date: Thu Feb 28 08:28:11 2013 +0200 summary: Fix markup in unittest docs. files: Doc/library/unittest.rst | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -2017,10 +2017,10 @@ .. note:: - The default pattern is 'test*.py'. This matches all Python files - that start with 'test' but *won't* match any test directories. - - A pattern like 'test*' will match test packages as well as + The default pattern is ``'test*.py'``. This matches all Python files + that start with ``'test'`` but *won't* match any test directories. + + A pattern like ``'test*'`` will match test packages as well as modules. 
If the package :file:`__init__.py` defines ``load_tests`` then it will be -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 07:31:54 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 07:31:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge_markup_fixes_in_unittest_docs_from_3=2E2=2E?= Message-ID: <3ZGkSV07bLzRhW@mail.python.org> http://hg.python.org/cpython/rev/4831d6db2f5d changeset: 82424:4831d6db2f5d branch: 3.3 parent: 82419:58a28aa70fec parent: 82423:6c5e991aa95a user: Ezio Melotti date: Thu Feb 28 08:29:37 2013 +0200 summary: Merge markup fixes in unittest docs from 3.2. files: Doc/library/unittest.rst | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -1870,10 +1870,10 @@ .. note:: - The default pattern is 'test*.py'. This matches all Python files - that start with 'test' but *won't* match any test directories. - - A pattern like 'test*' will match test packages as well as + The default pattern is ``'test*.py'``. This matches all Python files + that start with ``'test'`` but *won't* match any test directories. + + A pattern like ``'test*'`` will match test packages as well as modules. 
If the package :file:`__init__.py` defines ``load_tests`` then it will be -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 07:31:55 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 07:31:55 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_markup_fixes_in_unittest_docs_from_3=2E3=2E?= Message-ID: <3ZGkSW2pvDzRhL@mail.python.org> http://hg.python.org/cpython/rev/688e721f79d4 changeset: 82425:688e721f79d4 parent: 82421:b7f5bff33c22 parent: 82424:4831d6db2f5d user: Ezio Melotti date: Thu Feb 28 08:31:32 2013 +0200 summary: Merge markup fixes in unittest docs from 3.3. files: Doc/library/unittest.rst | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -1874,10 +1874,10 @@ .. note:: - The default pattern is 'test*.py'. This matches all Python files - that start with 'test' but *won't* match any test directories. - - A pattern like 'test*' will match test packages as well as + The default pattern is ``'test*.py'``. This matches all Python files + that start with ``'test'`` but *won't* match any test directories. + + A pattern like ``'test*'`` will match test packages as well as modules. 
If the package :file:`__init__.py` defines ``load_tests`` then it will be -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 17:03:53 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 17:03:53 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_Add_a_link_to_?= =?utf-8?q?the_demo_dir=2E?= Message-ID: <3ZGz8T3FYlzRld@mail.python.org> http://hg.python.org/cpython/rev/0eb3949aa2b6 changeset: 82426:0eb3949aa2b6 branch: 2.7 parent: 82422:f0a3ceef7a04 user: Ezio Melotti date: Thu Feb 28 17:55:17 2013 +0200 summary: Add a link to the demo dir. files: Doc/library/curses.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/curses.rst b/Doc/library/curses.rst --- a/Doc/library/curses.rst +++ b/Doc/library/curses.rst @@ -48,7 +48,7 @@ Tutorial material on using curses with Python, by Andrew Kuchling and Eric Raymond. - The :file:`Demo/curses/` directory in the Python source distribution contains + The :source:`Demo/curses/` directory in the Python source distribution contains some example programs using the curses bindings provided by this module. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 17:03:54 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 17:03:54 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=283=2E2=29=3A_Add_a_link_to_?= =?utf-8?q?the_demo_dir=2E?= Message-ID: <3ZGz8V5y2PzSc6@mail.python.org> http://hg.python.org/cpython/rev/52b9d5e3f026 changeset: 82427:52b9d5e3f026 branch: 3.2 parent: 82423:6c5e991aa95a user: Ezio Melotti date: Thu Feb 28 18:02:28 2013 +0200 summary: Add a link to the demo dir. 
files: Doc/library/curses.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/curses.rst b/Doc/library/curses.rst --- a/Doc/library/curses.rst +++ b/Doc/library/curses.rst @@ -45,7 +45,7 @@ Tutorial material on using curses with Python, by Andrew Kuchling and Eric Raymond. - The :file:`Tools/demo/` directory in the Python source distribution contains + The :source:`Tools/demo/` directory in the Python source distribution contains some example programs using the curses bindings provided by this module. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 17:03:56 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 17:03:56 +0100 (CET) Subject: [Python-checkins] =?utf-8?b?Y3B5dGhvbiAobWVyZ2UgMy4yIC0+IDMuMyk6?= =?utf-8?q?_Merge_link_addition_from_3=2E2=2E?= Message-ID: <3ZGz8X1VM5zSc6@mail.python.org> http://hg.python.org/cpython/rev/66d0f6ef2a7f changeset: 82428:66d0f6ef2a7f branch: 3.3 parent: 82424:4831d6db2f5d parent: 82427:52b9d5e3f026 user: Ezio Melotti date: Thu Feb 28 18:03:16 2013 +0200 summary: Merge link addition from 3.2. files: Doc/library/curses.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/curses.rst b/Doc/library/curses.rst --- a/Doc/library/curses.rst +++ b/Doc/library/curses.rst @@ -45,7 +45,7 @@ Tutorial material on using curses with Python, by Andrew Kuchling and Eric Raymond. - The :file:`Tools/demo/` directory in the Python source distribution contains + The :source:`Tools/demo/` directory in the Python source distribution contains some example programs using the curses bindings provided by this module. 
-- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 17:03:57 2013 From: python-checkins at python.org (ezio.melotti) Date: Thu, 28 Feb 2013 17:03:57 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=28merge_3=2E3_-=3E_default?= =?utf-8?q?=29=3A_Merge_link_addition_from_3=2E3=2E?= Message-ID: <3ZGz8Y4YVBzSlH@mail.python.org> http://hg.python.org/cpython/rev/637c2cd716d1 changeset: 82429:637c2cd716d1 parent: 82425:688e721f79d4 parent: 82428:66d0f6ef2a7f user: Ezio Melotti date: Thu Feb 28 18:03:35 2013 +0200 summary: Merge link addition from 3.3. files: Doc/library/curses.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/curses.rst b/Doc/library/curses.rst --- a/Doc/library/curses.rst +++ b/Doc/library/curses.rst @@ -45,7 +45,7 @@ Tutorial material on using curses with Python, by Andrew Kuchling and Eric Raymond. - The :file:`Tools/demo/` directory in the Python source distribution contains + The :source:`Tools/demo/` directory in the Python source distribution contains some example programs using the curses bindings provided by this module. -- Repository URL: http://hg.python.org/cpython From python-checkins at python.org Thu Feb 28 20:11:20 2013 From: python-checkins at python.org (raymond.hettinger) Date: Thu, 28 Feb 2013 20:11:20 +0100 (CET) Subject: [Python-checkins] =?utf-8?q?cpython_=282=2E7=29=3A_The_example_re?= =?utf-8?q?gex_should_be_a_raw_string=2E?= Message-ID: <3ZH3Jm0phPzSvg@mail.python.org> http://hg.python.org/cpython/rev/6ae4938256c6 changeset: 82430:6ae4938256c6 branch: 2.7 parent: 82426:0eb3949aa2b6 user: Raymond Hettinger date: Thu Feb 28 11:11:11 2013 -0800 summary: The example regex should be a raw string. 
files: Doc/library/collections.rst | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Doc/library/collections.rst b/Doc/library/collections.rst --- a/Doc/library/collections.rst +++ b/Doc/library/collections.rst @@ -51,7 +51,7 @@ >>> # Find the ten most common words in Hamlet >>> import re - >>> words = re.findall('\w+', open('hamlet.txt').read().lower()) + >>> words = re.findall(r'\w+', open('hamlet.txt').read().lower()) >>> Counter(words).most_common(10) [('the', 1143), ('and', 966), ('to', 762), ('of', 669), ('i', 631), ('you', 554), ('a', 546), ('my', 514), ('hamlet', 471), ('in', 451)] -- Repository URL: http://hg.python.org/cpython
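The final commit above switches the ``Counter`` doc example from ``'\w+'`` to the raw string ``r'\w+'``. A minimal self-contained sketch of the corrected pattern (the inline sample text here is an assumption standing in for the ``hamlet.txt`` file used in the docs):

```python
import re
from collections import Counter

# Sample text stands in for the hamlet.txt file in the docs example.
text = "The quick brown fox jumps over the lazy dog the fox"

# r'\w+' is a raw string literal: the backslash reaches the re module
# intact instead of being subject to string-escape processing, which is
# why the docs example should be written this way.
words = re.findall(r'\w+', text.lower())
print(Counter(words).most_common(2))
```

With this sample text, ``most_common(2)`` reports ``the`` three times and ``fox`` twice, mirroring the "ten most common words in Hamlet" example from the patched documentation.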