From report at bugs.python.org Mon Mar 1 02:33:09 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Mon, 01 Mar 2021 07:33:09 +0000 Subject: [New-bugs-announce] [issue43349] [doc] incorrect tuning(7) manpage link Message-ID: <1614583989.29.0.630028464305.issue43349@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : In https://docs.python.org/3.10/library/resource.html#resource.RLIMIT_SWAP, tuning(7) points to https://manpages.debian.org/tuning(7); however, this is a FreeBSD-only (?) manual page, so the link is incorrect. I suggest linking to either: - https://docs.freebsd.org/en/books/handbook/config/ - https://www.freebsd.org/cgi/man.cgi?query=tuning&sektion=7&format=html ---------- assignee: docs at python components: Documentation messages: 387845 nosy: docs at python, erlendaasland priority: normal severity: normal status: open title: [doc] incorrect tuning(7) manpage link versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 1 04:52:49 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Mon, 01 Mar 2021 09:52:49 +0000 Subject: [New-bugs-announce] [issue43350] [sqlite3] Active statements are reset twice in _pysqlite_query_execute() Message-ID: <1614592369.49.0.483199992415.issue43350@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : In _pysqlite_query_execute(), if the cursor already has a statement object, it is reset twice before the cache is queried.
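For reference, the affected code path can be driven from Python simply by re-executing on the same cursor; this is only an illustrative sketch (the double reset itself happens inside the C module and is not observable from Python):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")

# The second and later execute() calls on the same cursor take the
# branch in _pysqlite_query_execute() where the cursor's previous
# statement is reset (currently twice) before the statement cache
# is consulted.
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SELECT x FROM t")
assert cur.fetchone() == (1,)
```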
---------- components: Library (Lib) messages: 387850 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: [sqlite3] Active statements are reset twice in _pysqlite_query_execute() type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 1 05:47:12 2021 From: report at bugs.python.org (Andrew V. Jones) Date: Mon, 01 Mar 2021 10:47:12 +0000 Subject: [New-bugs-announce] [issue43351] `RecursionError` during deallocation Message-ID: <1614595632.86.0.935465611369.issue43351@roundup.psfhosted.org> New submission from Andrew V. Jones : I am currently working with "porting" some code from Python 2.7.14 to Python 3.7.5, but the process running the Python code seems to terminate in the following way: ``` #0 0x00002aaaaef63337 in raise () from /lib64/libc.so.6 #1 0x00002aaaaef64a28 in abort () from /lib64/libc.so.6 #2 0x00002aaaae726e18 in fatal_error (prefix=0x0, msg=0x2aaaae8091f0 "Cannot recover from stack overflow.", status=-1) at Python/pylifecycle.c:2187 #3 0x00002aaaae727603 in Py_FatalError (msg=0x9bf0
) at Python/pylifecycle.c:2197 #4 0x00002aaaae6ede2b in _Py_CheckRecursiveCall (where=) at Python/ceval.c:489 #5 0x00002aaaae62b61d in _PyMethodDef_RawFastCallDict (method=0x2aaaaeae2740 , self=0x2aaabb1d4d70, args=0x0, nargs=0, kwargs=0x0) at Objects/call.c:464 #6 0x00002aaaae62b6a9 in _PyCFunction_FastCallDict (func=0x2aaabeaa5690, args=0x6, nargs=0, kwargs=0x0) at Objects/call.c:586 #7 0x00002aaaae62c56c in _PyObject_CallFunctionVa (callable=0x9bf0, format=, va=, is_size_t=) at Objects/call.c:935 #8 0x00002aaaae62cc80 in callmethod (is_size_t=, va=, format=, callable=) at Objects/call.c:1031 #9 _PyObject_CallMethodId (obj=, name=, format=0x0) at Objects/call.c:1100 #10 0x00002aaaae724c51 in flush_std_files () at Python/pylifecycle.c:1083 #11 0x00002aaaae72704f in fatal_error (prefix=0x0, msg=, status=-1) at Python/pylifecycle.c:2175 #12 0x00002aaaae727603 in Py_FatalError (msg=0x9bf0
) at Python/pylifecycle.c:2197 #13 0x00002aaaae6ede2b in _Py_CheckRecursiveCall (where=) at Python/ceval.c:489 #14 0x00002aaaae62ba3d in _PyObject_FastCallDict (callable=0x2aaabeab8790, args=, nargs=, kwargs=0x0) at Objects/call.c:120 #15 0x00002aaaae62c2f0 in object_vacall (callable=0x2aaabeab8790, vargs=0x7ffffff54d40) at Objects/call.c:1202 #16 0x00002aaaae62c3fd in PyObject_CallFunctionObjArgs (callable=0x9bf0) at Objects/call.c:1267 #17 0x00002aaaae6c1bf0 in PyObject_ClearWeakRefs (object=) at Objects/weakrefobject.c:872 #18 0x00002aaaae4b26f6 in instance_dealloc () from /home/LOCAL/avj/build/vc21__90601_pyedg_improvements/vc/lib64/libboost_python37.so.1.69.0 #19 0x00002aaaae67c3e0 in subtype_dealloc (self=0x2aaabeab9e40) at Objects/typeobject.c:1176 #20 0x00002aaaae4ba63f in life_support_call () from /home/LOCAL/avj/build/vc21__90601_pyedg_improvements/vc/lib64/libboost_python37.so.1.69.0 #21 0x00002aaaae62b9c4 in _PyObject_FastCallDict (callable=0x2aaabeab87b0, args=, nargs=, kwargs=0x0) at Objects/call.c:125 #22 0x00002aaaae62c2f0 in object_vacall (callable=0x2aaabeab87b0, vargs=0x7ffffff54fd0) at Objects/call.c:1202 #23 0x00002aaaae62c3fd in PyObject_CallFunctionObjArgs (callable=0x9bf0) at Objects/call.c:1267 #24 0x00002aaaae6c1bf0 in PyObject_ClearWeakRefs (object=) at Objects/weakrefobject.c:872 #25 0x00002aaaae4b26f6 in instance_dealloc () from /home/LOCAL/avj/build/vc21__90601_pyedg_improvements/vc/lib64/libboost_python37.so.1.69.0 #26 0x00002aaaae67c3e0 in subtype_dealloc (self=0x2aaabeab9e90) at Objects/typeobject.c:1176 #27 0x00002aaaae4ba63f in life_support_call () from /home/LOCAL/avj/build/vc21__90601_pyedg_improvements/vc/lib64/libboost_python37.so.1.69.0 #28 0x00002aaaae62b9c4 in _PyObject_FastCallDict (callable=0x2aaabeab87d0, args=, nargs=, kwargs=0x0) at Objects/call.c:125 #29 0x00002aaaae62c2f0 in object_vacall (callable=0x2aaabeab87d0, vargs=0x7ffffff55260) at Objects/call.c:1202 #30 0x00002aaaae62c3fd in PyObject_CallFunctionObjArgs 
(callable=0x9bf0) at Objects/call.c:1267 #31 0x00002aaaae6c1bf0 in PyObject_ClearWeakRefs (object=) at Objects/weakrefobject.c:872 #32 0x00002aaaae4b26f6 in instance_dealloc () from /home/LOCAL/avj/build/vc21__90601_pyedg_improvements/vc/lib64/libboost_python37.so.1.69.0 #33 0x00002aaaae67c3e0 in subtype_dealloc (self=0x2aaabeab9ee0) at Objects/typeobject.c:1176 #34 0x00002aaaae4ba63f in life_support_call () from /home/LOCAL/avj/build/vc21__90601_pyedg_improvements/vc/lib64/libboost_python37.so.1.69.0 ``` This is only the inner most 35 frames -- the actual back-trace is 7375 frames deep, and ends with: ``` #7358 0x00002aaaae4b26f6 in instance_dealloc () from /home/LOCAL/avj/build/vc21__90601_pyedg_improvements/vc/lib64/libboost_python37.so.1.69.0 #7359 0x00002aaaae67c3e0 in subtype_dealloc (self=0x2aaabeaefdf0) at Objects/typeobject.c:1176 #7360 0x00002aaaae6f2f46 in _PyEval_EvalFrameDefault (f=0x2aaabce48b30, throwflag=39920) at Python/ceval.c:1098 #7361 0x00002aaaae62a959 in function_code_fastcall (co=, args=0x7fffffffd088, nargs=1, globals=) at Objects/call.c:283 #7362 0x00002aaaae62ae44 in _PyFunction_FastCallDict (func=0x2aaabda07950, args=0x7fffffffd080, nargs=1, kwargs=0x0) at Objects/call.c:322 #7363 0x00002aaaae62bbea in _PyObject_Call_Prepend (callable=0x2aaabda07950, obj=0x2aaabea92590, args=0x2aaabb193050, kwargs=0x0) at Objects/call.c:908 #7364 0x00002aaaae62b9c4 in _PyObject_FastCallDict (callable=0x2aaabb253a50, args=, nargs=, kwargs=0x0) at Objects/call.c:125 #7365 0x00002aaaae62c677 in _PyObject_CallFunctionVa (callable=0x2aaabb253a50, format=, va=, is_size_t=) at Objects/call.c:956 #7366 0x00002aaaae62c93a in PyEval_CallFunction (callable=0x9bf0, format=0x9bf0
, format at entry=0x2aaaaaedba92 "()") at Objects/call.c:998 #7367 0x00002aaaaae6ae16 in boost::python::call (callable=) at /home/BUILD64/lib/boost-1.69.0-py37/include/boost/python/call.hpp:56 #7368 0x00002aaaaae6ae5a in boost::python::api::object_operators >::operator() (this=) at /home/BUILD64/lib/boost-1.69.0-py37/include/boost/python/object_core.hpp:440 #7369 0x00002aaaabc8b287 in PyEDGInterface::py_backend (this=) at /home/Users/avj/vector/source/vc21__90601_pyedg_improvements/lib/libcommoncpp/src/PyEDGInterface.cpp:192 #7370 0x00002aaaabc8c136 in PyEDGInterface::backend () at /home/Users/avj/vector/source/vc21__90601_pyedg_improvements/lib/libcommoncpp/inc/PyEDGInterface.h:38 #7371 0x00000000004195b5 in back_end () at /home/Users/avj/vector/source/vc21__90601_pyedg_improvements/progs/pyedg/main.cpp:77 #7372 0x000000000050ab21 in cfe_main (argc=argc at entry=9, argv=argv at entry=0x7fffffffd6c8) at /home/Users/avj/vector/source/vc21__90601_pyedg_improvements/progs/edg/src/cfe.cpp:141 #7373 0x000000000050abda in edg_main (argc=argc at entry=9, argv=argv at entry=0x7fffffffd6c8) at /home/Users/avj/vector/source/vc21__90601_pyedg_improvements/progs/edg/src/cfe.cpp:202 #7374 0x0000000000419752 in main (argc=9, argv=0x7fffffffd6c8) at /home/Users/avj/vector/source/vc21__90601_pyedg_improvements/progs/pyedg/main.cpp:43 ``` Where `progs/pyedg/main.cpp` is our `main` and uses an embedded Python interpreter (either 2.7.14 or 3.7.5). The application actually terminates with printed to stderr: ``` Exception ignored in: RecursionError: maximum recursion depth exceeded while calling a Python object Fatal Python error: Cannot recover from stack overflow. ``` The code that is running does not (itself) have any loops -- it simply walks a linked list (of length ~1400) returned via Boost Python. When moving to the next element of the list, the previous element should be "unreachable garbage" (indeed, inspecting gc.get_referrers/gc.get_referents gives 0). 
I've attached the whole back-trace to this issue, and it seems like, when recursing, the `current` argument to `PyObject_ClearWeakRefs` is different (i.e., it doesn't seem to be an infinite recursion, just a very _deep_ recursion when deallocating). Some other observations: 1) If I increase the recursion limit (using `sys.setrecursionlimit` with a "suitably large" value), then the process completes successfully 2) The value of `ulimit -s` makes no difference 3) If I run the same code, with the same Boost Python bindings, except targeting Python 2.7.14, the process completes successfully Right now, I am not able to provide a simple reproducer, but I am wondering if this is a bug I've hit in Python 3.7.5 (maybe it is fixed by https://bugs.python.org/issue38006, which seems very similar) or if this is my "user code" that is doing something weird. If this appears to be a new bug, I will do my utmost to create a reproducer for it, but if the cause is obvious without it, then that would be helpful (the reproducer is tied to proprietary code, so will be hard to extricate). One thing I will try is to update our version of Python to 3.9.2 and see if the issue is still there, after the fix for #38006. ---------- files: python_bt.txt.gz messages: 387852 nosy: andrewvaughanj priority: normal severity: normal status: open title: `RecursionError` during deallocation versions: Python 3.7 Added file: https://bugs.python.org/file49841/python_bt.txt.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 1 06:39:56 2021 From: report at bugs.python.org (Yves Duprat) Date: Mon, 01 Mar 2021 11:39:56 +0000 Subject: [New-bugs-announce] [issue43352] Add a Barrier object in asyncio lib Message-ID: <1614598796.36.0.0905111746151.issue43352@roundup.psfhosted.org> New submission from Yves Duprat : Add a Barrier synchronization primitive to asyncio, in order to be consistent with the one we have for threading.
The Barrier object will have a design similar to that of the threading lib. (Maybe we have to think about a backport?) Initial discussion started here: https://mail.python.org/archives/list/python-ideas at python.org/thread/IAFAH7PWMUDUTLXYLNSXES7VMDQ26A3W/ ---------- components: asyncio messages: 387855 nosy: asvetlov, yduprat, yselivanov priority: normal severity: normal status: open title: Add a Barrier object in asyncio lib type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 1 07:00:00 2021 From: report at bugs.python.org (Mariusz Felisiak) Date: Mon, 01 Mar 2021 12:00:00 +0000 Subject: [New-bugs-announce] [issue43353] Document that logging.getLevelName() can return a numeric value. Message-ID: <1614600000.66.0.180359096956.issue43353@roundup.psfhosted.org> New submission from Mariusz Felisiak : Can we document[1] that `logging.getLevelName()` returns a numeric value when the corresponding string is passed in (related to https://bugs.python.org/issue1008295)? I know that we have the "Changed in version 3.4" annotation, but I think it's worth mentioning in the main description. I hope it's not a duplicate; I tried to find a matching ticket. I can submit a PR if accepted. [1] https://docs.python.org/3.10/library/logging.html?highlight=getlevelname#logging.getLevelName ---------- components: Library (Lib) messages: 387856 nosy: felixxm, vinay.sajip priority: normal severity: normal status: open title: Document that logging.getLevelName() can return a numeric value.
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 1 08:42:31 2021 From: report at bugs.python.org (=?utf-8?q?J=C3=BCrgen_Gmach?=) Date: Mon, 01 Mar 2021 13:42:31 +0000 Subject: [New-bugs-announce] [issue43354] xmlrpc.client: Fault.faultCode erroneously documented to be a string, should be int Message-ID: <1614606151.94.0.783784524766.issue43354@roundup.psfhosted.org> New submission from Jürgen Gmach : I created an XMLRPC API (via Zope), which gets consumed by a C# application. C#'s XMLRPC lib expects an `int` for the `faultCode`, but I currently return a string, as this is the type currently documented in the official docs https://docs.python.org/3/library/xmlrpc.client.html#xmlrpc.client.Fault.faultCode This leads to a `TypeMismatch` error on the client's side. The documentation for `faultCode` is pretty much unchanged since 2007, when `xmlrpc.client.rst` was first created (at least at that place) by Georg Brandl. The docs are most probably older, but I do not know where they were managed before. I had a look at the CPython source code, and at least the tests all use ints for `faultCode` (both of them :-) ), e.g. https://github.com/python/cpython/blob/255f53bdb54a64b93035374ca4484ba0cc1b41e1/Lib/test/test_xmlrpc.py#L166 Having a look at the XMLRPC spec at http://xmlrpc.com/spec.md, it is clearly shown that `faultCode` has to be an int. Typeshed recently added type hints for the xmlrpc lib, and they used string for `faultCode` - but I guess this is just an aftereffect of the faulty documentation. https://github.com/python/typeshed/pull/3834 I suggest both updating CPython's documentation and creating a PR for typeshed in order to fix the type issue. If any core dev agrees on this, I'd like to prepare the pull requests. Thanks for taking your time to look into this!
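For reference, the int requirement can be checked against the stdlib's own marshaller; this is only a sketch of the round-trip (the fault code and message here are made up, not from the report):

```python
import xmlrpc.client

# Per the XML-RPC spec, faultCode is an int; the stdlib marshals it
# as <int> and unmarshals it unchanged, so an int survives the trip.
fault = xmlrpc.client.Fault(faultCode=26, faultString="invalid request")
payload = xmlrpc.client.dumps(fault)

# loads() re-raises a fault response as a Fault exception.
try:
    xmlrpc.client.loads(payload)
except xmlrpc.client.Fault as exc:
    roundtripped = exc

assert isinstance(roundtripped.faultCode, int)
assert roundtripped.faultCode == 26
```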
---------- assignee: docs at python components: Documentation messages: 387866 nosy: docs at python, jugmac00 priority: normal severity: normal status: open title: xmlrpc.client: Fault.faultCode erroneously documented to be a string, should be int versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 1 11:02:00 2021 From: report at bugs.python.org (Eric Engestrom) Date: Mon, 01 Mar 2021 16:02:00 +0000 Subject: [New-bugs-announce] [issue43355] __future__.annotations breaks inspect.signature() Message-ID: <1614614520.28.0.353733927276.issue43355@roundup.psfhosted.org> New submission from Eric Engestrom : We have a pytest that boils down to the following:

```
#from __future__ import annotations
from inspect import Parameter, signature


def foo(x: str) -> str:
    return x + x


def test_foo():
    expected = (
        Parameter("x", Parameter.POSITIONAL_OR_KEYWORD, annotation=str),
    )
    actual = tuple(signature(foo).parameters.values())
    assert expected == actual
```

(execute with `pip install pytest && pytest -vv test_foo.py`) I tried importing 3.10 annotations (so that we can get rid of quotes around the class containing `foo()`, which is omitted here because it isn't necessary to reproduce the bug), but doing so changes the output of `inspect.signature()` but not the output of `inspect.Parameter()`, causing a mismatch between the two that breaks the test. The above passes on 3.7.9, 3.8.7 & 3.9.1, and if I uncomment the first line, it fails on those same versions. As can be expected, the annotations import is a no-op on 3.10.0a5 and the test passes either way. I expect `inspect` might not have been correctly updated to support postponed annotations, but I haven't looked at the implementation (I'm not familiar with the CPython codebase at all) so it's just a guess.
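For reference, one workaround until inspect handles this natively is to resolve the string annotations with typing.get_type_hints(); a minimal sketch, using an explicit string annotation to simulate the effect of the future import:

```python
import typing
from inspect import signature

# An explicit string annotation behaves like an annotation under
# `from __future__ import annotations`: it stays a string.
def foo(x: "str") -> "str":
    return x + x

# inspect.signature() reports the raw string...
annotation = signature(foo).parameters["x"].annotation
assert annotation == "str"

# ...while typing.get_type_hints() evaluates it back to the class,
# which can then be compared against inspect.Parameter as before.
hints = typing.get_type_hints(foo)
assert hints["x"] is str
```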
---------- components: Library (Lib) messages: 387875 nosy: 1ace priority: normal severity: normal status: open title: __future__.annotations breaks inspect.signature() type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 1 12:55:28 2021 From: report at bugs.python.org (Antoine Pitrou) Date: Mon, 01 Mar 2021 17:55:28 +0000 Subject: [New-bugs-announce] [issue43356] PyErr_SetInterrupt should have an equivalent that takes a signal number Message-ID: <1614621328.26.0.486134499262.issue43356@roundup.psfhosted.org> New submission from Antoine Pitrou : PyErr_SetInterrupt() is useful if you want to simulate the effect of a SIGINT. It would be helpful to provide a similar primitive for other signal numbers, e.g. `PyErr_SetInterruptEx(int signum)`. ---------- components: C API messages: 387877 nosy: pitrou priority: normal severity: normal status: open title: PyErr_SetInterrupt should have an equivalent that takes a signal number type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 07:40:42 2021 From: report at bugs.python.org (John Belmonte) Date: Tue, 02 Mar 2021 12:40:42 +0000 Subject: [New-bugs-announce] [issue43370] thread_time not available on python.org OS X builds Message-ID: <1614688842.62.0.631384615998.issue43370@roundup.psfhosted.org> New submission from John Belmonte : time.thread_time is supposed to be available on OS X 10.12 and newer. Yet it's reported to raise ImportError on macOS Big Sur (11.2.1) on Python 3.9.2 (python.org download). (Reported by Quentin Pradet.) It is available in other OS X Python builds, such as those published by Homebrew. (I confirm it's available on macOS 10.13, Python 3.7.7.)
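For reference, code that needs to run on the affected builds can probe for the clock and fall back; a minimal sketch (the fallback to process_time() is an assumption on my part, not from the report):

```python
import time

# time.thread_time() may be missing on some builds (e.g. the
# python.org macOS installers described above); fall back to the
# always-available process-wide CPU clock.
if hasattr(time, "thread_time"):
    cpu_clock = time.thread_time
else:
    cpu_clock = time.process_time

start = cpu_clock()
sum(range(100_000))  # burn a little CPU time
elapsed = cpu_clock() - start
assert elapsed >= 0.0
```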
---------- components: Build messages: 387925 nosy: John Belmonte priority: normal severity: normal status: open title: thread_time not available on python.org OS X builds type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 15:42:17 2021 From: report at bugs.python.org (Maxime Belanger) Date: Tue, 02 Mar 2021 20:42:17 +0000 Subject: [New-bugs-announce] [issue43377] _PyErr_Display should be available in the CPython-specific API Message-ID: <1614717737.06.0.355391552087.issue43377@roundup.psfhosted.org> New submission from Maxime Belanger : We have found `_PyErr_Display` to be quite helpful in embedding situations, in particular as a way to capture errors to a custom buffer rather than to `stderr`. Otherwise, embedders often have to replicate logic in `PyErr_Print`, etc. Since the header restructuring in Python 3.8+, that function is a bit harder to call. It's exported, but is considered "internal" and thus requires defining `Py_BUILD_CORE`. I was wondering: why not expose it under "Include/cpython"? It seems like a generic-enough helper, similar to `_PyErr_WriteUnraisableMsg`. ---------- components: C API messages: 387965 nosy: Maxime Belanger priority: normal severity: normal status: open title: _PyErr_Display should be available in the CPython-specific API type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 14:49:51 2021 From: report at bugs.python.org (Brandt Bucher) Date: Tue, 02 Mar 2021 19:49:51 +0000 Subject: [New-bugs-announce] [issue43376] Add PyComplex_FromString Message-ID: <1614714591.46.0.224854150713.issue43376@roundup.psfhosted.org> New submission from Brandt Bucher : I recently came across a case where this functionality would be quite useful (parsing complex values from delimited text files). 
We have PyLong_FromString and PyFloat_FromString, but no PyComplex_FromString (I can't find a reason why it might have been deliberately omitted). I *think* the best current workaround is to use sscanf to parse out two floats, then feed that to PyComplex_FromDoubles, which is non-trivial. Do others support this addition? I imagine we would just use something similar to the _Py_string_to_number_with_underscores call at the end of complex_subtype_from_string in Objects/complexobject.c. ---------- components: C API keywords: easy (C) messages: 387958 nosy: brandtbucher priority: normal severity: normal status: open title: Add PyComplex_FromString type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 16:22:01 2021 From: report at bugs.python.org (Luciano Ramalho) Date: Tue, 02 Mar 2021 21:22:01 +0000 Subject: [New-bugs-announce] [issue43378] Pattern Matching section in tutorial refers to | as Message-ID: <1614720121.97.0.0865243171415.issue43378@roundup.psfhosted.org> Change by Luciano Ramalho : ---------- nosy: ramalho priority: normal severity: normal status: open title: Pattern Matching section in tutorial refers to | as _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 16:45:36 2021 From: report at bugs.python.org (Romain Vincent) Date: Tue, 02 Mar 2021 21:45:36 +0000 Subject: [New-bugs-announce] [issue43379] Pasting multiple lines in the REPL is broken since 3.9 Message-ID: <1614721536.12.0.538890122544.issue43379@roundup.psfhosted.org> New submission from Romain Vincent : DISCLAIMER: This is the first time I submit an issue here. In advance, my humble apologies if I missed something. Feel free to correct me :) -- I regularly test snippets of code by pasting them from a code editor to a shell REPL. 
It works perfectly well in Python 3.8 or 3.7 but not in Python 3.9. Demonstration: Try to copy and paste the following simple snippet:

---
def f():
    print("hello world")
---

The result in a Python 3.8 REPL (same with 3.7):

---
$ python3.8
Python 3.8.6 (default, Nov 20 2020, 18:29:40)
[Clang 12.0.0 (clang-1200.0.32.27)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> def f():
    print("hello world")

>>> f()
hello world
---

But with Python 3.9:

---
$ python3.9
Python 3.9.1 (default, Dec 10 2020, 10:36:35)
[Clang 12.0.0 (clang-1200.0.32.27)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> def f():
    print("hello world")
File "", line 1
^
SyntaxError: multiple statements found while compiling a single statement
---

This behavior happens with any snippet of code containing at least one indented line, whether indented by tabs or spaces, and whatever the number of spaces. Regards. ---------- components: IO messages: 387976 nosy: romainfv priority: normal severity: normal status: open title: Pasting multiple lines in the REPL is broken since 3.9 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 12:53:31 2021 From: report at bugs.python.org (Gaming With Skytorrush) Date: Tue, 02 Mar 2021 17:53:31 +0000 Subject: [New-bugs-announce] [issue43373] Tensorflow Message-ID: <1614707611.34.0.215212100924.issue43373@roundup.psfhosted.org> New submission from Gaming With Skytorrush : I was unable to install Tensorflow via pip in Python 3.9, but it's working fine with 3.8. ---------- components: Library (Lib) messages: 387944 nosy: clashwithchiefrpjyt priority: normal severity: normal status: open title: Tensorflow type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 13:10:10 2021 From: report
at bugs.python.org (Adrian) Date: Tue, 02 Mar 2021 18:10:10 +0000 Subject: [New-bugs-announce] [issue43374] Apple refuses apps written in Python Message-ID: <1614708610.18.0.213124723322.issue43374@roundup.psfhosted.org> New submission from Adrian : My company maintains several Python-related projects, one of them being an application published for many years in the Mac App Store. During the submission of the last update for the app, Apple refused to let us publish the software, with the following reason:

Guideline 2.5.1 - Performance

Your app links against the following non-public framework(s):

• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/_wadd_wchnstr
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/_wins_wch
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/_win_wch
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/_win_wchnstr
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/_win_wchnstr
• Contents/Frameworks/libpython3.9.dylib/___sprintf_chk
• Contents/Frameworks/libpython3.9.dylib/_posix_spawn
• Contents/Frameworks/libpython3.9.dylib/_posix_spawn_file_actions_addclose
• Contents/Frameworks/libpython3.9.dylib/_posix_spawn_file_actions_adddup2
• Contents/Frameworks/libpython3.9.dylib/_posix_spawn_file_actions_addopen
• Contents/Frameworks/libpython3.9.dylib/_posix_spawn_file_actions_destroy
• Contents/Frameworks/libpython3.9.dylib/_posix_spawn_file_actions_init
• Contents/Frameworks/libpython3.9.dylib/_posix_spawnattr_destroy
• Contents/Frameworks/libpython3.9.dylib/_posix_spawnattr_init
• Contents/Frameworks/libpython3.9.dylib/_posix_spawnattr_setflags
• Contents/Frameworks/libpython3.9.dylib/_posix_spawnattr_setpgroup
• Contents/Frameworks/libpython3.9.dylib/_posix_spawnattr_setsigdefault
• Contents/Frameworks/libpython3.9.dylib/_posix_spawnattr_setsigmask
• Contents/Frameworks/libpython3.9.dylib/_posix_spawnp
• Contents/Frameworks/libcrypto.1.1.dylib/___sprintf_chk
• Contents/Frameworks/libp11-kit.0.dylib/___sprintf_chk
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libpanelw.5.dylib/_SP
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/_SP
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/___sprintf_chk
• Contents/Frameworks/libmpfr.6.dylib/___sprintf_chk
• Contents/Frameworks/libgnutls.30.dylib/___sprintf_chk
• Contents/Frameworks/libgnutls.30.dylib/_p11_kit_space_strdup
• Contents/Frameworks/libgnutls.30.dylib/_p11_kit_space_strlen
• Contents/Frameworks/libidn2.0.dylib/_sprintf
• Contents/Frameworks/libx264.157.dylib/___sprintf_chk
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libcrypto.1.1.dylib/___sprintf_chk
• Contents/Frameworks/libssl.1.1.dylib/___sprintf_chk
• Contents/Frameworks/libxml2.2.dylib/___sprintf_chk
• Contents/Frameworks/Python.framework/Versions/3.9/Python/___sprintf_chk
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawn
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawn_file_actions_addclose
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawn_file_actions_adddup2
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawn_file_actions_addopen
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawn_file_actions_destroy
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawn_file_actions_init
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawnattr_destroy
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawnattr_init
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawnattr_setflags
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawnattr_setpgroup
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawnattr_setsigdefault
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawnattr_setsigmask
• Contents/Frameworks/Python.framework/Versions/3.9/Python/_posix_spawnp
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libssl.1.1.dylib/___sprintf_chk
• Contents/Frameworks/libhogweed.6.dylib/_nettle_buffer_space
• Contents/Frameworks/libicui18n.67.dylib/_sprintf
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libmenuw.5.dylib/_SP
• Contents/Frameworks/libavfilter.7.dylib/_avformat_match_stream_specifier
• Contents/Frameworks/libmpc.3.dylib/___sprintf_chk
• Contents/Frameworks/libunistring.2.dylib/___sprintf_chk
• Contents/Frameworks/libunistring.2.dylib/_sprintf
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libpanelw.5.dylib/__nc_panelhook
• Contents/Frameworks/Python.framework/Versions/3.9/lib/libformw.5.dylib/__nc_wcrtomb

Next Steps

The use of non-public APIs is not permitted on the App Store as it can lead to a poor user experience should these APIs change. ---------- components: C API, Interpreter Core, macOS messages: 387947 nosy: adigeo, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Apple refuses apps written in Python type: security versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 13:10:58 2021 From: report at bugs.python.org (Igor Mandrichenko) Date: Tue, 02 Mar 2021 18:10:58 +0000 Subject: [New-bugs-announce] [issue43375] memory leak in threading ? Message-ID: <1614708658.89.0.572605723578.issue43375@roundup.psfhosted.org> New submission from Igor Mandrichenko : There is an apparent memory leak in threading. It looks like memory grows when I create, run and destroy threads. The memory keeps adding at the rate of about 100 bytes per thread. I am attaching the code, which works for Linux.
The getMemory() function is Linux-specific; it gets the current process memory utilization. ---------- components: Extension Modules files: memleak.py messages: 387948 nosy: igorvm priority: normal severity: normal status: open title: memory leak in threading ? type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49846/memleak.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 09:31:34 2021 From: report at bugs.python.org (Dmitriy Mironiyk) Date: Tue, 02 Mar 2021 14:31:34 +0000 Subject: [New-bugs-announce] [issue43371] Mock.assert_has_calls works strange Message-ID: <1614695494.31.0.773020245058.issue43371@roundup.psfhosted.org> New submission from Dmitriy Mironiyk : I think that the behavior of Mock.assert_has_calls is misleading.
- When I call Mock.assert_has_calls with any_order=False, it checks that the expected calls are the same calls as performed on the mock, and raises an error if the mock has some calls other than expected.
- When I call Mock.assert_has_calls with any_order=True, it checks that the mock has the expected calls and does not raise an error when the mock has calls other than expected.
I suppose there should be two separate methods:
- Mock.assert_has_only_calls, which always raises an error when the mock has calls other than expected (regardless of any_order).
- Mock.assert_has_calls, which raises an error only when some of the expected calls are missing, or when the order of calls is wrong with any_order=False.
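For reference, the actual semantics can be demonstrated with a minimal sketch: with any_order=False the expected calls must appear as a contiguous subsequence of mock_calls, and extra surrounding calls are tolerated in both modes.

```python
from unittest.mock import Mock, call

m = Mock()
m(1)
m(2)
m(3)

# any_order=False: expected calls must form a contiguous subsequence
# of mock_calls; the extra call(3) does not make this fail.
m.assert_has_calls([call(1), call(2)])

# any_order=True: each expected call merely has to be present
# somewhere, again ignoring extra calls.
m.assert_has_calls([call(3), call(1)], any_order=True)

# Neither mode checks that *only* the expected calls happened; for
# that, compare mock_calls directly.
assert m.mock_calls == [call(1), call(2), call(3)]
```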
---------- components: Tests files: test_mock_asser_has_calls.py messages: 387932 nosy: dmitriy.mironiyk priority: normal severity: normal status: open title: Mock.assert_has_calls works strange type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file49845/test_mock_asser_has_calls.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 09:40:34 2021 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Tue, 02 Mar 2021 14:40:34 +0000 Subject: [New-bugs-announce] [issue43372] ctypes: test_frozentable fails when make regen-frozen Message-ID: <1614696034.15.0.923342699637.issue43372@roundup.psfhosted.org> New submission from Miro Hrončok : The following test failure happens on Python 3.10.0a6+ when we make regen-frozen with the same Python version we test:

======================================================================
FAIL: test_frozentable (ctypes.test.test_values.PythonValuesTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/churchyard/Dokumenty/RedHat/cpython/Lib/ctypes/test/test_values.py", line 87, in test_frozentable
    self.assertEqual(items, expected, "PyImport_FrozenModules example "
AssertionError: Lists differ: [('__hello__', 129), ('__phello__', -129), ('__phello__.spam', 129)] != [('__hello__', 125), ('__phello__', -125), ('__phello__.spam', 125)]

First differing element 0:
('__hello__', 129)
('__hello__', 125)

- [('__hello__', 129), ('__phello__', -129), ('__phello__.spam', 129)]
? ^ ^ ^
+ [('__hello__', 125), ('__phello__', -125), ('__phello__.spam', 125)]
? ^ ^ ^
: PyImport_FrozenModules example in Doc/library/ctypes.rst may be out of date

----------------------------------------------------------------------
Ran 494 tests in 0.466s

FAILED (failures=1, skipped=87)

Reproducer:
1. Build Python from source: $ ./configure && make -j...
2.
Run ctypes tests: $ ./python -m ctypes.test
3. Regenerate frozen: $ PYTHON_FOR_REGEN=./python make regen-frozen
4. Build Python from source again: $ ./configure && make -j...
5. Run ctypes tests: $ ./python -m ctypes.test

Actual result: Tests in (2) pass, tests in (5) fail. The difference after (3) is:

diff --git a/Python/frozen_hello.h b/Python/frozen_hello.h
index 9c566cc81e..d58b726aa8 100644
--- a/Python/frozen_hello.h
+++ b/Python/frozen_hello.h
@@ -9,5 +9,5 @@ static unsigned char M___hello__[] = {
 100,218,5,112,114,105,110,116,169,0,114,2,0,
 0,0,114,2,0,0,0,218,4,110,111,110,101,
 218,8,60,109,111,100,117,108,101,62,1,0,0,
- 0,115,2,0,0,0,4,1,
+ 0,115,6,0,0,0,4,0,12,1,255,128,
 };

Expected results: Tests pass, no diff. ---------- components: Tests, ctypes messages: 387933 nosy: hrnciar, hroncok priority: normal severity: normal status: open title: ctypes: test_frozentable fails when make regen-frozen type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 19:36:29 2021 From: report at bugs.python.org (jennydaman) Date: Wed, 03 Mar 2021 00:36:29 +0000 Subject: [New-bugs-announce] [issue43380] Assigning function parameter to class attribute by the same name Message-ID: <1614731789.13.0.0370276321832.issue43380@roundup.psfhosted.org> New submission from jennydaman : # Example Consider these three examples, which are theoretically identical:

```
a = 4
class A:
    a = a
print(A.a)

def createB(b):
    class B:
        z = b
    print(B.z)
createB(5)

def createD(d):
    class D:
        d = d
    print(D.d)
createD(6)
```

## Expected Output

```
4
5
6
```

## Actual Output

```
4
5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in createD
  File "<stdin>", line 3, in D
NameError: name 'd' is not defined
```

---------- components: Interpreter Core messages: 387987 nosy: jennydaman priority: normal severity: normal status: open title: Assigning function parameter to class
attribute by the same name type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 19:52:30 2021 From: report at bugs.python.org (Neil Schemenauer) Date: Wed, 03 Mar 2021 00:52:30 +0000 Subject: [New-bugs-announce] [issue43381] add small test for frozen module line number table Message-ID: <1614732750.02.0.00856334728328.issue43381@roundup.psfhosted.org> New submission from Neil Schemenauer : In bug #43372, we didn't notice that the code for the __hello__ module was not re-generated. Things seemed to be okay, but the line number table was corrupted. It seems a good idea to add a small test to ensure that doesn't happen again. I marked the test as CPython implementation specific. ---------- components: Tests messages: 387989 nosy: nascheme priority: low severity: normal stage: patch review status: open title: add small test for frozen module line number table type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 23:29:04 2021 From: report at bugs.python.org (Gregory P. Smith) Date: Wed, 03 Mar 2021 04:29:04 +0000 Subject: [New-bugs-announce] [issue43382] github CI blocked by the Ubuntu CI with an SSL error Message-ID: <1614745744.43.0.675662116863.issue43382@roundup.psfhosted.org> New submission from Gregory P.
Smith : https://github.com/python/cpython/pull/20442/checks?check_run_id=2018900756

ssl.SSLError: [SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:1122)
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122)
ssl.SSLError: [SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:1122)
ssl.SSLError: [SSL] internal error (_ssl.c:1122)
ssl.SSLError: [SSL] called a function you should not call (_ssl.c:1122)
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1122)
ssl.SSLError: [SSL: UNEXPECTED_MESSAGE] unexpected message (_ssl.c:1122)
ssl.SSLError: [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:1122)
[SSL: NO_PROTOCOLS_AVAILABLE] no protocols available (_ssl.c:1122)
... and so on ...

This CI failure is preventing any PR from being merged. Do we have a bad test certificate in the tree we need to update? Or did some Ubuntu CI infrastructure just fall over and start rejecting a certificate it used to accept? I believe this started around ~20210302T1500 UTC.
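As a quick triage aid (not part of the original report), a certificate's notAfter timestamp can be compared against the current time with the stdlib helper ssl.cert_time_to_seconds(); the date string below is made up to illustrate the format:

```python
import ssl
import time

# Hypothetical notAfter value, in the string format used in certificates.
not_after = "Mar 2 15:00:00 2021 GMT"
expired = ssl.cert_time_to_seconds(not_after) < time.time()
print(expired)  # True -- a 2021 date is in the past
```

If a test certificate checked into the tree has expired, this kind of check pinpoints it quickly.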
---------- assignee: christian.heimes components: Build, SSL, Tests messages: 387998 nosy: christian.heimes, gregory.p.smith, lukasz.langa, ned.deily priority: release blocker severity: normal status: open title: github CI blocked by the Ubuntu CI with an SSL error type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 2 23:48:00 2021 From: report at bugs.python.org (Konrad Schwarz) Date: Wed, 03 Mar 2021 04:48:00 +0000 Subject: [New-bugs-announce] [issue43383] imprecise handling of weakref callbacks Message-ID: <1614746880.88.0.974985202377.issue43383@roundup.psfhosted.org> New submission from Konrad Schwarz : I am seeing the following non-deterministic behavior: My code processes DeviceTree, a tree-based specification format for hardware descriptions, that includes cross-references ("phandles"). For all intents and purposes, this format is similar to XML; phandles are analog to ID/IDREFS. To prevent reference cycles and avoid the need for garbage collection, my code uses weakref.proxy for parent pointers and weakref.ref for cross-references. My goal is to provide a "projection" operation on a DeviceTree: creating derived DeviceTrees that model subsets of the hardware (this is to partition the hardware into multiple independent sub-machines). The projection is specified by newly introduced nodes and attributes (aka properties) in the tree; phandles are used to indicate which part belongs to which partition. Python weak references provide a callback interface to indicate the demise of their referents and my code uses that to prune the tree: e.g., if a node modeling a partition is deleted, nodes that reference that node (i.e., indicate they belong to that partition) are deleted in the corresponding weakref callback. 
So technically, the code implicitly uses the interpreter's list of weak referrers (__weakref__) to find and execute code on them when the referent's state changes. This works exactly as envisioned when single-stepping in PDB. When running at full speed, however, I see that weak reference callbacks are being triggered after the corresponding weak reference has been deleted with del (the weak reference is a value of a Python dict holding a node's attributes.) I suspect that this is because of some batching or deferred processing in the Python interpreter. Ultimately, this is a violation of the semantics and must be classified as a bug. However, in my case, it would suffice to have a "memory barrier" type of operation that flushes the queue of deferred deletions before continuing. Something like that must exist, because single stepping in PDB is successful. Initial tests of calling the garbage collector to this end were inconclusive, unfortunately. ---------- components: Interpreter Core messages: 387999 nosy: konrad.schwarz priority: normal severity: normal status: open title: imprecise handling of weakref callbacks type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 02:08:09 2021 From: report at bugs.python.org (Neil Schemenauer) Date: Wed, 03 Mar 2021 07:08:09 +0000 Subject: [New-bugs-announce] [issue43384] Include regen-stdlib-module-names in regen-all Message-ID: <1614755289.13.0.914626827133.issue43384@roundup.psfhosted.org> New submission from Neil Schemenauer : While I was fixing the regen-frozen issue, I noticed it seems unnecessary to have regen-stdlib-module-names separate from regen-all. Maybe Victor knows why it needs to be separate. If it doesn't need to be separate, the CI scripts can be slightly simplified.
---------- components: Build messages: 388003 nosy: nascheme priority: normal severity: normal stage: patch review status: open title: Include regen-stdlib-module-names in regen-all type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 05:09:09 2021 From: report at bugs.python.org (=?utf-8?q?Micha=C5=82_Kozik?=) Date: Wed, 03 Mar 2021 10:09:09 +0000 Subject: [New-bugs-announce] [issue43385] heapq fails to sort tuples by datetime correctly Message-ID: <1614766149.48.0.609887863234.issue43385@roundup.psfhosted.org> New submission from Michał Kozik : Tuples (datetime, description) are all sorted by the date except one entry (2021-03-09), which is out of order:

Expected order:       Actual order:
2021-03-04 Event E    2021-03-04 Event E
2021-03-07 Event B    2021-03-07 Event B
2021-03-08 Event C    2021-03-08 Event C
2021-03-09 Event A    2021-03-11 Event D
2021-03-11 Event D    2021-03-09 Event A

In REPL it can be replicated by pasting the following code:

import heapq
from datetime import datetime

event_a = (datetime.strptime('2021-03-09', '%Y-%m-%d'), "Event A")
event_b = (datetime.strptime('2021-03-07', '%Y-%m-%d'), "Event B")
event_c = (datetime.strptime('2021-03-08', '%Y-%m-%d'), "Event C")
event_d = (datetime.strptime('2021-03-11', '%Y-%m-%d'), "Event D")
event_e = (datetime.strptime('2021-03-04', '%Y-%m-%d'), "Event E")

events = []
heapq.heappush(events, event_a)
heapq.heappush(events, event_b)
heapq.heappush(events, event_c)
heapq.heappush(events, event_d)
heapq.heappush(events, event_e)

expected_list = [event_e, event_b, event_c, event_a, event_d]
assert events == expected_list

---------- components: Library (Lib) files: test_heapq.py messages: 388012 nosy: mike.koikos priority: normal severity: normal status: open title: heapq fails to sort tuples by datetime correctly type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49848/test_heapq.py
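For reference (this is not in the original report), the list backing a heap only satisfies the heap invariant — each parent sorts before its children — so inspecting the raw list is not expected to show fully sorted order; sorted order comes from popping. A minimal sketch with made-up entries:

```python
import heapq

events = []
for event in [(3, "Event A"), (1, "Event B"), (2, "Event C")]:
    heapq.heappush(events, event)

# The backing list is a valid heap but not necessarily sorted;
# repeated heappop() calls yield the entries in ascending order.
ordered = [heapq.heappop(events) for _ in range(len(events))]
print(ordered)  # [(1, 'Event B'), (2, 'Event C'), (3, 'Event A')]
```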
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 06:34:00 2021 From: report at bugs.python.org (=?utf-8?b?TWljaGHFgiBHw7Nybnk=?=) Date: Wed, 03 Mar 2021 11:34:00 +0000 Subject: [New-bugs-announce] [issue43386] test_ctypes hangs inside Portage build env since 'subprocess: Use vfork() instead of fork() [...]' Message-ID: <1614771240.05.0.187208683071.issue43386@roundup.psfhosted.org> New submission from Michał Górny : So I've finally found time to bisect this. Long story short, test_ctypes started hanging on Gentoo package builds since 3.10.0a2. Previously, the test took less than a second. Now, it just keeps running for minutes until I kill it. The weird thing is that I can't reproduce it when running it manually. I've tried hard to rebuild a Portage-like environment to make it hang, to no avail. I've finally gotten around to bisecting it, and established that the problem is caused by the following change:

```
commit 976da903a746a5455998e9ca45fbc4d3ad3479d8
Author: Alexey Izbyshev
Date:   2020-10-24 02:47:01 +0200

    bpo-35823: subprocess: Use vfork() instead of fork() on Linux when safe (GH-11671)
    [...]
```

After running the test with a timeout, I get the following backtrace:

```
test_issue_8959_a (ctypes.test.test_callbacks.SampleCallbacksTestCase) ... Timeout (0:00:30)!
Thread 0x00007f72f2507740 (most recent call first):
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/subprocess.py", line 1773 in _execute_child
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/subprocess.py", line 962 in __init__
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/ctypes/util.py", line 289 in _findSoname_ldconfig
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/ctypes/util.py", line 329 in find_library
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/ctypes/test/test_callbacks.py", line 183 in test_issue_8959_a
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/case.py", line 549 in _callTestMethod
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/case.py", line 592 in run
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/case.py", line 652 in __call__
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 122 in run
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 84 in __call__
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 122 in run
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 84 in __call__
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 122 in run
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 84 in __call__
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 122 in run
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 84 in __call__
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 122 in run
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/suite.py", line 84 in __call__
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/unittest/runner.py", line 176 in run
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/support/__init__.py", line 959 in _run_suite
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/support/__init__.py", line 1082 in run_unittest
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/runtest.py", line 211 in _test_module
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/runtest.py", line 236 in _runtest_inner2
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/runtest.py", line 272 in _runtest_inner
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/runtest.py", line 155 in _runtest
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/runtest.py", line 195 in runtest
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/main.py", line 319 in rerun_failed_tests
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/main.py", line 696 in _main
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/main.py", line 639 in main
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/libregrtest/main.py", line 717 in main
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/test/__main__.py", line 2 in <module>
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/runpy.py", line 87 in _run_code
  File "/var/tmp/portage/dev-lang/python-3.10.0_alpha6/work/Python-3.10.0a6/Lib/runpy.py", line 197 in _run_module_as_main
make: *** [Makefile:1204: test] Error 1
```

I'd appreciate any help in debugging this further. ---------- components: Tests messages: 388014 nosy: izbyshev, mgorny priority: normal severity: normal status: open title: test_ctypes hangs inside Portage build env since 'subprocess: Use vfork() instead of fork() [...]' versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 08:34:22 2021 From: report at bugs.python.org (digitaldragon) Date: Wed, 03 Mar 2021 13:34:22 +0000 Subject: [New-bugs-announce] [issue43387] Enable pydoc to run as background job Message-ID: <1614778462.01.0.596151563335.issue43387@roundup.psfhosted.org> New submission from digitaldragon : The pydoc tool can serve a website for browsing the docs interactively with the -p or -b option. While serving, it enters a simple command line interface:

Server commands: [b]rowser, [q]uit
server>

Because it is reading from stdin, it is not possible to straightforwardly run the process in the background of a Linux shell. Normally, this is achieved by appending a & to the command, starting it in the background, or by using the shell's job control features to put it in the background after starting normally: usually Ctrl-Z and issuing the command 'bg'. In both cases, any attempt to read from stdin causes a SIGTTIN signal, suspending the process if not caught. The webserver then cannot process any requests. I reproduced the behavior in Python versions 3.5, 3.7, 3.8 and 3.9. In 2.7, no interactive interface is present.
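The SIGTTIN suspension described above can be avoided by ignoring the signal; a minimal sketch (POSIX-only, so this will not run on Windows):

```python
import signal

# Sketch: ignore SIGTTIN so that a stdin read attempted from a
# background job fails instead of suspending the whole process.
signal.signal(signal.SIGTTIN, signal.SIG_IGN)
print(signal.getsignal(signal.SIGTTIN) is signal.SIG_IGN)  # True
```

With the handler set to SIG_IGN, the read simply errors out and the server loop can keep handling HTTP requests.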
Possible fixes:
- remove the interactive command line altogether (it does not offer more functionality than the -b flag, plus the shell's handling of Ctrl-C, which sends a SIGINT anyway)
- catch SIGTTIN (handles a subsequent sending-to-background)
- detect if started in background (https://stackoverflow.com/a/24862213, can't handle subsequent sending-to-background)

---------- components: Library (Lib) messages: 388017 nosy: digitaldragon priority: normal severity: normal status: open title: Enable pydoc to run as background job type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 11:06:53 2021 From: report at bugs.python.org (=?utf-8?q?Lorenz_H=C3=BCdepohl?=) Date: Wed, 03 Mar 2021 16:06:53 +0000 Subject: [New-bugs-announce] [issue43388] shutil._fastcopy_sendfile() makes wrong (?) assumption about sendfile() return value Message-ID: <1614787613.45.0.094604330239.issue43388@roundup.psfhosted.org> New submission from Lorenz Hüdepohl : Since version 3.8, the shutil.copyfile() function tries to make use of os.sendfile(). Currently, this is done in a loop that aborts as soon as sendfile() returns 0, as it is assumed that this signifies the end of the file. The problem: the return value of sendfile() is simply the number of bytes that could be successfully copied in that one syscall, with no guarantee that a return value of 0 only happens when there is truly no data left to copy. Some other urgent task could have interrupted the kernel before any data was copied. At least, I could not find documentation to the contrary.
(Note: This might or might not be actual behavior of current Linux kernels today, but the spec of sendfile() certainly allows this) In any case, in that same routine the size of the source file is anyway requested in an os.fstat() call, one could thus restructure the loop like this, for example:

    filesize = os.fstat(infd).st_size
    offset = 0
    while offset < filesize:
        sent = os.sendfile(outfd, infd, offset, blocksize)
        offset += sent

(Error handling etc. left out for clarity, just to point out the new structure) This would also optimize the case of an empty input file, in that case the loop is never entered and no os.sendfile() call is issued, at all. In the normal case, it would also save the unnecessary last os.sendfile() call, when 'offset' has already grown to 'filesize'. (This was the actual reason for me to look into this in the first place, a filesystem bug where sendfile() called with an offset set to the file's size returns "EAGAIN" in certain cases. But this is another topic entirely and has nothing to do with Python, of course.) Note that in Modules/posixmodule.c os_sendfile_impl() there is also another loop around the individual actual sendfile() system call, but a return value of 0 there would also exit that loop and be passed up:

    do {
        Py_BEGIN_ALLOW_THREADS
    #ifdef __APPLE__
        ret = sendfile(in_fd, out_fd, offset, &sbytes, &sf, flags);
    #else
        ret = sendfile(in_fd, out_fd, offset, count, &sf, &sbytes, flags);
    #endif
        Py_END_ALLOW_THREADS
    } while (ret < 0 && errno == EINTR && !(async_err = PyErr_CheckSignals()));

Kind regards, Lorenz ---------- components: Library (Lib) messages: 388027 nosy: lhuedepohl priority: normal severity: normal status: open title: shutil._fastcopy_sendfile() makes wrong (?)
assumption about sendfile() return value versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 11:50:58 2021 From: report at bugs.python.org (nmatravolgyi) Date: Wed, 03 Mar 2021 16:50:58 +0000 Subject: [New-bugs-announce] [issue43389] Cancellation ignored by asyncio.wait_for can hang application Message-ID: <1614790258.6.0.416802160939.issue43389@roundup.psfhosted.org> New submission from nmatravolgyi : I have found myself debugging a *very* unintuitive behavior of asyncio.wait_for that I'd consider a bug/deficiency. The problem, very simply put: wait_for will return the wrapped future's result even when it is being cancelled, ignoring the cancellation as if it had never happened. This will make parallel execution-waits hang forever if some simple conditions are met. From the perspective of this snippet, every task must exit, so it just needs to wait. I know cancellation *can* be ignored, but it is discouraged by the documentation for this reason exactly.

    tasks = [...]
    for t in tasks:
        t.cancel()
    results = await asyncio.gather(*tasks, return_exceptions=True)

I already know that this behavior has been chosen because otherwise the returned value would be lost. But for many applications, losing an explicit cancellation error/event is just as bad. The reason why ignoring the cancellation is critical is that the cancelling (arbiter) task cannot reliably work around it. In most cases having repeated cancellations in a polling wait can solve this, but it is ugly and does not work if the original wait_for construct is in a loop that will always ignore the cancellation. The most sensible solution would be to allow the user to handle both the return value and the cancellation if they do happen at once. This could be done by subclassing CancelledError as CancelledWithResultError and raising that instead.
If the user code does not handle that exception specifically then the user "chose" to ignore the result. Even if this is not intuitive, it would give the user the control over what really is happening. Right now, the user cannot prefer to handle the cancellation or both. Lastly, I may have overlooked something trivial to make this work well. Right now I'm considering replacing all of the asyncio.wait_for constructs with asyncio.wait constructs. I can fully control all tasks and cancellations with that. I've made a simple demonstration of my problem, maybe someone can shed some light onto it. ---------- components: asyncio files: aio_wait_for_me.py messages: 388032 nosy: asvetlov, nmatravolgyi, yselivanov priority: normal severity: normal status: open title: Cancellation ignored by asyncio.wait_for can hang application type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49849/aio_wait_for_me.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 12:45:04 2021 From: report at bugs.python.org (Gregory P. Smith) Date: Wed, 03 Mar 2021 17:45:04 +0000 Subject: [New-bugs-announce] [issue43390] Set the SA_ONSTACK in PyOS_setsig to play well with other VMs like Golang Message-ID: <1614793504.75.0.801298006043.issue43390@roundup.psfhosted.org> New submission from Gregory P. Smith : PyOS_setsig currently sets the struct sigaction context.sa_flags = 0 before calling sigaction. Other virtual machines such as Golang depend on signals using SA_ONSTACK such that signal handlers use a specially allocated stack that runtime sets up for reliability reasons as they use tiny stacks on normal threads. SA_ONSTACK is a no-op flag in the typical case where no sigaltstack() call has been made to setup an alternate signal handling stack. 
(as in 99.99% of all CPython applications) When a C/C++ extension module is linked with cgo to call into a Golang library, doing this increases reliability. As much as I try to dissuade anyone from creating and relying on hidden-complexity multi-VM hybrids in a single process like this, some people do, and this one-line change helps. Golang references: https://golang.org/pkg/os/signal/#hdr-Go_programs_that_use_cgo_or_SWIG and https://go-review.googlesource.com/c/go/+/298269/ (which clarifies that SA_RESTART is no longer a requirement. Good. Because Python won't get along well with that one.) ---------- assignee: gregory.p.smith components: Interpreter Core messages: 388036 nosy: gregory.p.smith priority: normal severity: normal status: open title: Set the SA_ONSTACK in PyOS_setsig to play well with other VMs like Golang versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 14:59:50 2021 From: report at bugs.python.org (Jake Gustafson) Date: Wed, 03 Mar 2021 19:59:50 +0000 Subject: [New-bugs-announce] [issue43391] The comments have invalid license information (broken Python 2.4 URL for Python 3) Message-ID: <1614801590.99.0.0824232667755.issue43391@roundup.psfhosted.org> New submission from Jake Gustafson : Steps to reproduce the issue: - Run Python 3.7.3 (or later, possibly) with the following code:

    import subprocess
    import inspect
    with open("subprocess-py3.py", 'w') as outs:
        outs.write(inspect.getsource(subprocess).replace("\\n", "\n"))

The resulting ./subprocess-py3.py contains the source code for subprocess, including:

    # Copyright (c) 2003-2005 by Peter Astrand
    #
    # Licensed to PSF under a Contributor Agreement.
    # See http://www.python.org/2.4/license for licensing details.
However, the URL is broken, and whatever code Peter Astrand developed may be long gone--I'll leave it up to the devs to determine the correct license, but the link is broken and the code is significantly different even from Python 2.4's. ---------- assignee: docs at python components: Documentation messages: 388049 nosy: docs at python, poikilos priority: normal severity: normal status: open title: The comments have invalid license information (broken Python 2.4 URL for Python 3) versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 15:48:50 2021 From: report at bugs.python.org (=?utf-8?b?R2VybcOhbiBNw6luZGV6IEJyYXZv?=) Date: Wed, 03 Mar 2021 20:48:50 +0000 Subject: [New-bugs-announce] [issue43392] Optimize repeated calls to `__import__()` Message-ID: <1614804530.16.0.0156925953016.issue43392@roundup.psfhosted.org> New submission from Germán Méndez Bravo : A call to `importlib.__import__()` normally locks the import for the module being worked on; this, however, has a performance impact for modules that are already imported and fully initialized. An example of this are inline `__import__()` calls in a hot path that is called repeatedly during the life of the process. Proposal: a two-step check in `importlib._bootstrap._find_and_load()` to avoid locking when the module has already been imported and is ready.
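The proposed two-step check could look roughly like the following sketch (cached_import is a made-up name; the real change would live inside importlib._bootstrap._find_and_load(), which also has to verify that the module is fully initialized, not just present):

```python
import sys

def cached_import(name):
    # Fast path sketch: a module already present in sys.modules can be
    # returned without taking the per-module import lock.
    module = sys.modules.get(name)
    if module is not None:
        return module
    # Slow path: fall back to the normal, lock-protected import.
    return __import__(name)

import json
print(cached_import("json") is json)  # True
```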
---------- components: Library (Lib) messages: 388057 nosy: Kronuz priority: normal severity: normal status: open title: Optimize repeated calls to `__import__()` versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 17:17:03 2021 From: report at bugs.python.org (Raymond Hettinger) Date: Wed, 03 Mar 2021 22:17:03 +0000 Subject: [New-bugs-announce] [issue43393] Older Python builds are missing a required file on Big Sur Message-ID: <1614809823.25.0.747126603146.issue43393@roundup.psfhosted.org> New submission from Raymond Hettinger : When I upgraded to Big Sur, all my older builds stopped working. Do we have any known workarounds, or can we publish updated builds that work? Like a lot of consultants, I still have to help clients maintain older code. Having a standard working build is essential.

    $ python3.5
    dyld: Library not loaded: /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
      Referenced from: /Library/Frameworks/Python.framework/Versions/3.5/Resources/Python.app/Contents/MacOS/Python
      Reason: image not found
    Abort trap: 6

---------- components: macOS messages: 388062 nosy: ned.deily, rhettinger, ronaldoussoren priority: normal severity: normal status: open title: Older Python builds are missing a required file on Big Sur type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 3 17:44:24 2021 From: report at bugs.python.org (Brandt Bucher) Date: Wed, 03 Mar 2021 22:44:24 +0000 Subject: [New-bugs-announce] [issue43394] Compiler warnings on master (-Wstrict-prototypes) Message-ID: <1614811464.76.0.205706240961.issue43394@roundup.psfhosted.org> New submission from Brandt Bucher : We're getting "function declaration isn't a prototype [-Wstrict-prototypes]" warnings in Modules/_zoneinfo.c and
Modules/_xxtestfuzz/fuzzer.c. I'll have a patch up momentarily. ---------- assignee: brandtbucher components: Build messages: 388064 nosy: brandtbucher priority: normal severity: normal status: open title: Compiler warnings on master (-Wstrict-prototypes) type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 01:31:25 2021 From: report at bugs.python.org (Eric L.) Date: Thu, 04 Mar 2021 06:31:25 +0000 Subject: [New-bugs-announce] [issue43395] os.path states that bytes can't represent all MBCS paths under Windows Message-ID: <1614839485.4.0.948731177165.issue43395@roundup.psfhosted.org> New submission from Eric L. : The os.path documentation at https://docs.python.org/3/library/os.path.html states that: > Vice versa, using bytes objects cannot represent all file names on Windows (in the standard mbcs encoding), hence Windows applications should use string objects to access all files. This doesn't sound right and is at least misleading because anything can be represented as bytes, as everything (in a computer) is bytes at the end of the day, unless mbcs is really using something like half-bytes, which I couldn't find any sign of (skimming through the documentation, Microsoft seems to interpret it as DBCS, one or two bytes). I could imagine that the meaning is that some bytes combinations can't be used as path under Windows, but I just don't know, and that wouldn't be a valid reason to not use bytes under Windows (IMHO). 
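Related to the bytes-vs-str discussion above: the stdlib's supported way to move between the two representations is os.fsencode()/os.fsdecode(), which round-trip a path using the filesystem encoding and error handler (the file name below is invented for illustration):

```python
import os

# Round-trip a made-up non-ASCII file name between str and bytes
# using the filesystem encoding (sys.getfilesystemencoding()).
name = "straße.txt"
as_bytes = os.fsencode(name)
print(os.fsdecode(as_bytes) == name)  # True
```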
---------- assignee: docs at python components: Documentation messages: 388077 nosy: docs at python, ericzolf priority: normal severity: normal status: open title: os.path states that bytes can't represent all MBCS paths under Windows type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 01:44:55 2021 From: report at bugs.python.org (Tore Anderson) Date: Thu, 04 Mar 2021 06:44:55 +0000 Subject: [New-bugs-announce] [issue43396] Non-existent method sqlite3.Connection.fetchone() used in docs Message-ID: <1614840295.99.0.480319283179.issue43396@roundup.psfhosted.org> New submission from Tore Anderson : In https://docs.python.org/3/library/sqlite3.html, the following example code is found:

> # Do this instead
> t = ('RHAT',)
> c.execute('SELECT * FROM stocks WHERE symbol=?', t)
> print(c.fetchone())

However this fails as follows:

> Traceback (most recent call last):
>   File "./test.py", line 8, in <module>
>     print(c.fetchone())
> AttributeError: 'sqlite3.Connection' object has no attribute 'fetchone'

I believe the correct code should have been (at least it works for me):

> # Do this instead
> t = ('RHAT',)
> cursor = c.execute('SELECT * FROM stocks WHERE symbol=?', t)
> print(cursor.fetchone())

Tore ---------- assignee: docs at python components: Documentation messages: 388078 nosy: docs at python, toreanderson priority: normal severity: normal status: open title: Non-existent method sqlite3.Connection.fetchone() used in docs versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 04:42:06 2021 From: report at bugs.python.org (=?utf-8?b?0KHQtdGA0LPQtdC5INCc?=) Date: Thu, 04 Mar 2021 09:42:06 +0000 Subject: [New-bugs-announce] [issue43397] Incorrect conversion path case with german character Message-ID:
<1614850926.28.0.842904177967.issue43397@roundup.psfhosted.org> New submission from Сергей М : I try to normalize case for a path with German characters:

```
>os.path.normcase(r'c:\asd\ASDẞ')
'c:\\asd\\asdß'
```

But in OS Windows r'c:\asd\ASDẞ' and r'c:\asd\asdß' are different paths. ---------- components: Windows messages: 388079 nosy: paul.moore, steve.dower, tim.golden, voramva, zach.ware priority: normal severity: normal status: open title: Incorrect conversion path case with german character type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 05:54:37 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Thu, 04 Mar 2021 10:54:37 +0000 Subject: [New-bugs-announce] [issue43398] [sqlite3] sqlite3.connect() segfaults if given a faulty Connection factory Message-ID: <1614855277.19.0.209861526791.issue43398@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : If the connection factory __init__ method fails, we hit a seg. fault when pysqlite_do_all_statements() is called to clean up the defective connection: PyList_Size received a NULL pointer. Suggested fix: Split pysqlite_do_all_statements() in two: one function for resetting cursors, and one for resetting/finalising statements. In each function, check if the respective lists are NULL pointers before iterating. See attached proposed patch.
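For contrast, a factory whose __init__ chains to the base class initializes the C-level statement/cursor lists and works normally. A hedged sketch (LoggingConnection is an illustrative name, not from the report):

```python
import sqlite3

class LoggingConnection(sqlite3.Connection):
    """Illustrative factory: it must call the base-class __init__."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)  # omit this and the C-level
        self.queries = []                  # internals are never set up

con = sqlite3.connect(":memory:", factory=LoggingConnection)
row = con.execute("SELECT 1 + 1").fetchone()
print(row)  # (2,)
con.close()
```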
Test:

    def test_invalid_connection_factory(self):
        class DefectFactory(sqlite.Connection):
            def __init__(self, *args, **kwargs):
                return None
        self.con = sqlite.connect(":memory:", factory=DefectFactory)

---------- components: Library (Lib) files: patch.diff keywords: patch messages: 388082 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: [sqlite3] sqlite3.connect() segfaults if given a faulty Connection factory type: crash versions: Python 3.10 Added file: https://bugs.python.org/file49850/patch.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 10:59:21 2021 From: report at bugs.python.org (Alex) Date: Thu, 04 Mar 2021 15:59:21 +0000 Subject: [New-bugs-announce] [issue43399] xml.etree.ElementTree.extend does not work with iterators when using the Python implementation Message-ID: <1614873561.76.0.875762243403.issue43399@roundup.psfhosted.org> New submission from Alex : This issue is only visible when the C accelerator of ElementTree is *not* used. It is the counterpart of the following issue on PyPy3: https://foss.heptapod.net/pypy/pypy/-/issues/3181

>>> from xml.etree.ElementTree import Element
>>> r = Element("root")
>>> r.extend((Element(str(i)) for i in range(3)))
>>> print(list(r))
[]

When using the C accelerator, the list is not empty, as expected. In the Python code, a check on the input empties the input iterator. The fix is trivial (one-line change), so if you are interested I could open a PR, which would be my first, so a good occasion to go through the devguide ;) I understand that since Python 3.3 the C accelerator is used by default, so I would agree that this is not really a bug, and I can just fix it on PyPy side.
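The underlying pitfall generalizes beyond ElementTree: validating a generator by iterating over it consumes it, so a later extend sees nothing. A minimal sketch of the pattern (plain lists and strings stand in for Element, so this is not the actual ElementTree code):

```python
def extend_buggy(target, elements):
    for e in elements:            # "type check" pass exhausts the generator
        assert isinstance(e, str)
    target.extend(elements)       # nothing left to append

def extend_fixed(target, elements):
    elements = list(elements)     # materialize once, then check and append
    for e in elements:
        assert isinstance(e, str)
    target.extend(elements)

buggy, fixed = [], []
extend_buggy(buggy, (str(i) for i in range(3)))
extend_fixed(fixed, (str(i) for i in range(3)))
print(buggy, fixed)               # [] ['0', '1', '2']
```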
---------- components: XML messages: 388099 nosy: alexprengere priority: normal severity: normal status: open title: xml.etree.ElementTree.extend does not work with iterators when using the Python implementation type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 11:43:58 2021 From: report at bugs.python.org (Eddie Peters) Date: Thu, 04 Mar 2021 16:43:58 +0000 Subject: [New-bugs-announce] [issue43400] Remove "Mock is very easy to use" from unittest.mock documentation Message-ID: <1614876238.1.0.922616136062.issue43400@roundup.psfhosted.org> New submission from Eddie Peters : The unittest.mock library is very useful and very powerful, but it is not "very easy to use." Docs are useful and important, or we wouldn't be here in a documentation issue. I have watched several of the most experienced Python programmers I know struggle with various aspects of mock, including basic usage. I have sat with frustrated developers who have used mocking utilities in other languages but had little Python experience, and they were surprised by some mock behaviors and just couldn't get things "right" until they were helped by someone with all the tiny little healed-over cuts from lots of mock usage. Again, mock is great, but maybe if I have these opinions, I should contribute to making mock more intuitive. That's true. For now, though, the documentation contains this little line in the opening paragraphs that is unnecessary and can only make new mock users feel bad about having trouble: "Mock is very easy to use and is designed for use with unittest." I propose we remove the opinion "Mock is very easy to use" and change this line to "Mock is designed for use with unittest." The rest of the paragraph flows just fine without this: "Mock is very easy to use and is designed for use with unittest. 
Mock is based on the ‘action -> assertion’ pattern instead of ‘record -> replay’ used by many mocking frameworks." ---------- assignee: docs at python components: Documentation messages: 388102 nosy: docs at python, eppeters priority: normal severity: normal status: open title: Remove "Mock is very easy to use" from unittest.mock documentation type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 13:03:22 2021 From: report at bugs.python.org (Numerlor) Date: Thu, 04 Mar 2021 18:03:22 +0000 Subject: [New-bugs-announce] [issue43401] dbm module doc page redirects to itself Message-ID: <1614881002.06.0.413665275054.issue43401@roundup.psfhosted.org> New submission from Numerlor : In [32]: requests.get("https://docs.python.org/3/library/dbm.html", allow_redirects=False) Out[32]: In [33]: _.headers["Location"] Out[33]: 'https://docs.python.org/3/library/dbm.html#module-dbm.ndbm' ---------- assignee: docs at python components: Documentation messages: 388113 nosy: Numerlor, docs at python priority: normal severity: normal status: open title: dbm module doc page redirects to itself _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 13:14:22 2021 From: report at bugs.python.org (Hugo Nobrega) Date: Thu, 04 Mar 2021 18:14:22 +0000 Subject: [New-bugs-announce] [issue43402] IDLE shell adds newline after print even when `end=''` is specificied Message-ID: <1614881662.35.0.881061659082.issue43402@roundup.psfhosted.org> New submission from Hugo Nobrega : When evaluating a call to the `print` function with argument `end=''` in the IDLE shell, a newline is unexpectedly added at the end, before the next shell prompt. The expected behavior is to have the shell prompt next to the last printed line.
The expected behavior is seen when evaluating the same expression in an interactive Python shell from a terminal (`python -i`). Example:

IDLE shell (not expected):

>>> print('a',end='')
a
>>>

Interactive Python shell (expected):

>>> print('a',end='')
a>>>

I could not find any settings in IDLE that might be governing this behavior, nor any other issues mentioning this same thing. Tested on Python 3.9.1 on Manjaro Linux. ---------- assignee: terry.reedy components: IDLE messages: 388115 nosy: hugonobrega, terry.reedy priority: normal severity: normal status: open title: IDLE shell adds newline after print even when `end=''` is specificied type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 13:43:56 2021 From: report at bugs.python.org (Gregory P. Smith) Date: Thu, 04 Mar 2021 18:43:56 +0000 Subject: [New-bugs-announce] [issue43403] Misleading statement about bytes not being able to represent windows filenames in documentation Message-ID: <1614883436.88.0.196877066023.issue43403@roundup.psfhosted.org> New submission from Gregory P. Smith : As noted in the comment on https://github.com/rdiff-backup/rdiff-backup/issues/540#issuecomment-789485896 The Python documentation in https://docs.python.org/3/library/os.path.html makes an odd claim that bytes cannot represent all file names on Windows. That doesn't make sense. bytes can by definition represent everything. """Vice versa, using bytes objects cannot represent all file names on Windows (in the standard mbcs encoding), hence Windows applications should use string objects to access all files.""" Could we get this clarified and corrected to cover what any actual technical limitation is? Every OS is going to reject some bytes objects as a pathname for containing invalid byte sequences for their filesystem (ex: I doubt any OS allows null b'\0' characters).
But let's not claim that bytes cannot represent everything on a filesystem with an encoding. ---------- assignee: docs at python components: Documentation messages: 388122 nosy: docs at python, gregory.p.smith, steve.dower priority: normal severity: normal stage: needs patch status: open title: Misleading statement about bytes not being able to represent windows filenames in documentation versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 13:53:25 2021 From: report at bugs.python.org (Sam Bull) Date: Thu, 04 Mar 2021 18:53:25 +0000 Subject: [New-bugs-announce] [issue43404] No SSL certificates when using the Mac installer Message-ID: <1614884005.35.0.786100424357.issue43404@roundup.psfhosted.org> New submission from Sam Bull : After installing the latest version of Python on Mac OS X using the installer downloaded from python.org (https://www.python.org/ftp/python/3.9.2/python-3.9.2-macosx10.9.pkg), the installed version of Python is unable to find the system certificates. Using the old version of Python located at /usr/local/Cellar/python/3.7.5/bin/python3, I get: >>> ssl.create_default_context().cert_store_stats() {'x509': 168, 'crl': 0, 'x509_ca': 168} But, with the new version located at /Library/Frameworks/Python.framework/Versions/3.9/bin/python3, I get: >>> ssl.create_default_context().cert_store_stats() {'x509': 0, 'crl': 0, 'x509_ca': 0} Looking around on the internet, this seems to be a pretty common issue on Mac, but is often getting misdiagnosed as an actual problem with the server's certificate. Because of that, nobody seems to have proposed any methods to fix it.
Examples: https://github.com/aio-libs/aiohttp/issues/5375 https://stackoverflow.com/questions/65039677/unable-to-get-local-issuer-certificate-mac-os#comment115039330_65040851 ---------- assignee: christian.heimes components: SSL messages: 388123 nosy: christian.heimes, dreamsorcerer priority: normal severity: normal status: open title: No SSL certificates when using the Mac installer type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 15:29:15 2021 From: report at bugs.python.org (Zackery Spytz) Date: Thu, 04 Mar 2021 20:29:15 +0000 Subject: [New-bugs-announce] [issue43405] DeprecationWarnings in test_unicode Message-ID: <1614889755.46.0.756872715083.issue43405@roundup.psfhosted.org> New submission from Zackery Spytz : ./python -m test test_unicode 0:00:00 load avg: 0.33 Run tests sequentially 0:00:00 load avg: 0.33 [1/1] test_unicode /home/lubuntu2/cpython/Lib/test/test_unicode.py:2941: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. self.assertEqual(unicode_encodedecimal('123'), /home/lubuntu2/cpython/Lib/test/test_unicode.py:2943: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. self.assertEqual(unicode_encodedecimal('\u0663.\u0661\u0664'), /home/lubuntu2/cpython/Lib/test/test_unicode.py:2945: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. self.assertEqual(unicode_encodedecimal("\N{EM SPACE}3.14\N{EN SPACE}"), /home/lubuntu2/cpython/Lib/unittest/case.py:201: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. callable_obj(*args, **kwargs) /home/lubuntu2/cpython/Lib/test/test_unicode.py:2958: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. self.assertEqual(transform_decimal('123'), /home/lubuntu2/cpython/Lib/test/test_unicode.py:2960: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. 
self.assertEqual(transform_decimal('\u0663.\u0661\u0664'), /home/lubuntu2/cpython/Lib/test/test_unicode.py:2962: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. self.assertEqual(transform_decimal("\N{EM SPACE}3.14\N{EN SPACE}"), /home/lubuntu2/cpython/Lib/test/test_unicode.py:2964: DeprecationWarning: getargs: The 'u' format is deprecated. Use 'U' instead. self.assertEqual(transform_decimal('123\u20ac'), == Tests result: SUCCESS == 1 test OK. Total duration: 12.8 sec Tests result: SUCCESS ---------- components: Tests messages: 388129 nosy: ZackerySpytz priority: normal severity: normal status: open title: DeprecationWarnings in test_unicode versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 16:31:08 2021 From: report at bugs.python.org (Antoine Pitrou) Date: Thu, 04 Mar 2021 21:31:08 +0000 Subject: [New-bugs-announce] [issue43406] Possible race condition between signal catching and signal.signal Message-ID: <1614893468.93.0.265748275948.issue43406@roundup.psfhosted.org> New submission from Antoine Pitrou : We can receive signals (at the C level, in trip_signal() in signalmodule.c) while signal.signal is being called to modify the corresponding handler. Later when PyErr_CheckSignals() is called to handle the given signal, the handler may be a non-callable object and will raise a cryptic asynchronous exception. 
---------- components: Interpreter Core, Library (Lib) messages: 388131 nosy: pitrou priority: normal severity: normal stage: needs patch status: open title: Possible race condition between signal catching and signal.signal type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 4 17:21:10 2021 From: report at bugs.python.org (Alex Willmer) Date: Thu, 04 Mar 2021 22:21:10 +0000 Subject: [New-bugs-announce] [issue43407] time.monotonic(): Docs imply comparing call N and call N+M is invalid for M>1 Message-ID: <1614896470.36.0.766208602401.issue43407@roundup.psfhosted.org> New submission from Alex Willmer : I believe the documentation for time.monotonic() and time.perf_counter() could be misleading. Taken literally they could imply that given delta = 0.1 a = time.monotonic() b = time.monotonic() c = time.monotonic() the comparisons `b - a < delta`, and `c - b < delta` are valid; but `c - a < delta` is not valid. I believe that `c - a < delta` is a valid comparison, and that what the documentation means to say is "only the difference between the results of *subsequent* calls is valid." The exact wording (present since the functions were added in https://hg.python.org/cpython/rev/376ce937823c) > The reference point of the returned value is undefined, so that only > the difference between the results of consecutive calls is valid. If there is agreement I'll submit a PR. 
---------- assignee: docs at python components: Documentation messages: 388133 nosy: Alex.Willmer, docs at python priority: normal severity: normal status: open title: time.monotonic(): Docs imply comparing call N and call N+M is invalid for M>1 versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 09:09:39 2021 From: report at bugs.python.org (Qihao Chen) Date: Fri, 05 Mar 2021 14:09:39 +0000 Subject: [New-bugs-announce] [issue43408] about the method: title() Message-ID: <1614953379.79.0.488838369469.issue43408@roundup.psfhosted.org> New submission from Qihao Chen : Hello, I'm a student from China, and I think there is a bug in Python3.7. When I use the method "title()", I think the method should not make "'s" upper. I would appreciate it if you could consider my advice. ---------- files: title().png messages: 388153 nosy: chinkikoo227 priority: normal severity: normal status: open title: about the method: title() type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file49852/title().png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 11:16:07 2021 From: report at bugs.python.org (Zhang Boyang) Date: Fri, 05 Mar 2021 16:16:07 +0000 Subject: [New-bugs-announce] [issue43409] [Win] Call subprocess.Popen() twice makes Thread.join() interruptible and deadlock Message-ID: <1614960967.41.0.0636989902685.issue43409@roundup.psfhosted.org> New submission from Zhang Boyang : Please run joinbug.py and press Ctrl-C twice. ============= The output ============= 1st join PLEASE PRESS CTRL-C TWICE, IGNORE THE 'Press any key to continue' Press any key to continue . . . Press any key to continue . . . 
thread exit
Traceback (most recent call last):
  File "D:\xxx\joinbug.py", line 21, in <module>
    t.join() # subprocess.Popen() makes join() can be interrupted
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\threading.py", line 1033, in join
    self._wait_for_tstate_lock()
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\threading.py", line 1049, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt
2nd join (stuck here)
============= Expected behaviour ============= either join uninterruptible or the 2nd join doesn't deadlock. ---------- components: Windows files: joinbug.py messages: 388156 nosy: paul.moore, steve.dower, tim.golden, zach.ware, zby1234 priority: normal severity: normal status: open title: [Win] Call subprocess.Popen() twice makes Thread.join() interruptible and deadlock type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49853/joinbug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 12:22:35 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 05 Mar 2021 17:22:35 +0000 Subject: [New-bugs-announce] [issue43410] Parser does not handle correctly some errors when parsin from stdin Message-ID: <1614964955.39.0.913568985121.issue43410@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : The parser crashes when there are some syntax errors from stdin: > echo "(1+34" | ./python.exe - [1] 54046 done echo "(1+34" | 54047 segmentation fault ./python.exe - ---------- components: Interpreter Core messages: 388157 nosy: lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Parser does not handle correctly some errors when parsin from stdin versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 14:25:19 2021 From: report at bugs.python.org (Martin Cooper)
Date: Fri, 05 Mar 2021 19:25:19 +0000 Subject: [New-bugs-announce] [issue43411] wm_manage fails with ttk.Frame Message-ID: <1614972319.75.0.638524894464.issue43411@roundup.psfhosted.org> New submission from Martin Cooper : Attempting to use a ttk.Frame with wm_manage() causes a TclError: _tkinter.TclError: window ".!frame" is not manageable: must be a frame, labelframe or toplevel The (Tcl) documentation for wm manage states "Only frame, labelframe and toplevel widgets can be used with this command." One might reasonably expect a ttk.Frame to appropriately fall under this requirement, especially since the name 'frame' is used for them, but it does not. One must use a tk.Frame instead to make this work. At the very least, this needs to be documented. Looking at the error message and seeing it complain that a 'frame' is not one of 'frame', 'labelframe' or 'toplevel' is extremely confusing. There is nothing to lead to the conclusion that a ttk Frame is not a 'frame'. Better than documenting it, of course, would be to make wm_manage actually work properly with a ttk.Frame, as developers would expect. ---------- components: Tkinter messages: 388161 nosy: mfncooper priority: normal severity: normal status: open title: wm_manage fails with ttk.Frame type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 17:12:41 2021 From: report at bugs.python.org (Peter Eisentraut) Date: Fri, 05 Mar 2021 22:12:41 +0000 Subject: [New-bugs-announce] [issue43412] object.h -Wcast-qual warning Message-ID: <1614982361.61.0.858230879301.issue43412@roundup.psfhosted.org> New submission from Peter Eisentraut : object.h contains an inline function that causes a -Wcast-qual warning from gcc. Since this file ends up visible in third-party code that includes Python.h, this makes it impossible to use -Wcast-qual in such code. 
The problem is the change c5cb077ab3c88394b7ac8ed4e746bd31b53e39f1, which replaced ob->ob_type by Py_TYPE(ob), which seems reasonable by itself, but Py_TYPE casts away the const, so it creates this problem. This is a regression in Python 3.10. ---------- components: Interpreter Core messages: 388167 nosy: petere priority: normal severity: normal status: open title: object.h -Wcast-qual warning versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 20:37:11 2021 From: report at bugs.python.org (Jason R. Coombs) Date: Sat, 06 Mar 2021 01:37:11 +0000 Subject: [New-bugs-announce] [issue43413] tuple subclasses allow kwargs Message-ID: <1614994631.23.0.248202692536.issue43413@roundup.psfhosted.org> New submission from Jason R. Coombs : While troubleshooting a strange problem (https://github.com/jaraco/keyring/issues/498) where a program worked on Python 3.7+ but failed on Python 3.6, I discovered a somewhat unintuitive behavior. On Python 3.7+, keyword arguments to tuple subclasses are allowed and ignored: >>> class Foo(tuple): pass >>> tuple(name='xyz') TypeError: tuple() takes no keyword arguments >>> Foo(name='xyz') () But on Python 3.6, the keyword parameter causes an error: $ python3.6 -c "type('Foo', (tuple,), {})(name='xyz')" Traceback (most recent call last): File "<string>", line 1, in <module> TypeError: 'name' is an invalid keyword argument for this function I checked out the What's new in Python 3.7 and I see this notice: Functions bool(), float(), list() and tuple() no longer take keyword arguments. The first argument of int() can now be passed only as positional argument.
Hmm, but in my experience, tuple on Python 3.6 doesn't take keyword arguments either: importlib_metadata main $ python3.6 -c "tuple(name='xyz')" Traceback (most recent call last): File "<string>", line 1, in <module> TypeError: 'name' is an invalid keyword argument for this function So that change may be related, but I'm not sure where or how. The main place my expectation is violated is in the subclass. Why should a subclass of tuple allow keyword arguments when the parent class does not? I'd expect that the subclass should reject keyword arguments as well. Less importantly, the What's New doc implies that keyword arguments were accepted in Python 3.6; why aren't they? ---------- components: Interpreter Core keywords: 3.7regression messages: 388183 nosy: jaraco priority: normal severity: normal status: open title: tuple subclasses allow kwargs versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 20:58:09 2021 From: report at bugs.python.org (Eryk Sun) Date: Sat, 06 Mar 2021 01:58:09 +0000 Subject: [New-bugs-announce] [issue43414] os.get_terminal_size() should use file descriptors in Windows Message-ID: <1614995889.38.0.729191122852.issue43414@roundup.psfhosted.org> New submission from Eryk Sun : Currently os.get_terminal_size() is hard coded to use the process standard handles in Windows. The function, however, is intended for arbitrary file descriptors, so should not be limited as follows: >>> f = open('conout$', 'w') >>> os.get_terminal_size(f.fileno()) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: bad file descriptor This is a simple fix. Rewrite it to use _get_osfhandle().
For example:

#ifdef TERMSIZE_USE_CONIO
    {
        HANDLE handle;
        CONSOLE_SCREEN_BUFFER_INFO csbi;

        _Py_BEGIN_SUPPRESS_IPH
        handle = (HANDLE)_get_osfhandle(fd);
        _Py_END_SUPPRESS_IPH
        if (handle == INVALID_HANDLE_VALUE)
            return PyErr_SetFromErrno(PyExc_OSError);

        if (!GetConsoleScreenBufferInfo(handle, &csbi))
            return PyErr_SetFromWindowsErr(0);

        columns = csbi.srWindow.Right - csbi.srWindow.Left + 1;
        lines = csbi.srWindow.Bottom - csbi.srWindow.Top + 1;
    }
#endif /* TERMSIZE_USE_CONIO */

---------- components: Extension Modules, Windows keywords: easy (C) messages: 388187 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: os.get_terminal_size() should use file descriptors in Windows type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 5 22:29:32 2021 From: report at bugs.python.org (Andrei Kulakov) Date: Sat, 06 Mar 2021 03:29:32 +0000 Subject: [New-bugs-announce] [issue43415] Typo Message-ID: <1615001372.2.0.966613332718.issue43415@roundup.psfhosted.org> New submission from Andrei Kulakov : Typo (explicit|ly) in library/dataclasses.rst:139: 139: If :meth:`__hash__` is not explicit defined, or if it is set to ``None``, ---------- assignee: docs at python components: Documentation messages: 388193 nosy: andrei.avk, docs at python priority: normal severity: normal status: open title: Typo versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 03:44:25 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 06 Mar 2021 08:44:25 +0000 Subject: [New-bugs-announce] [issue43416] Add README files in Include/cpython and Include/internal Message-ID: <1615020265.58.0.71326223935.issue43416@roundup.psfhosted.org> New submission from Serhiy Storchaka : I always
hesitate in what subdirectory of Include/ to add a declaration of new private API. What is more "private"? It would be nice to add README files in these directories which describe what API should be declared here and what is the difference between Include/cpython and Include/internal. ---------- assignee: docs at python components: Documentation, Interpreter Core messages: 388196 nosy: docs at python, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Add README files in Include/cpython and Include/internal type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 06:55:57 2021 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 06 Mar 2021 11:55:57 +0000 Subject: [New-bugs-announce] [issue43417] ast.unparse: Simplify buffering logic Message-ID: <1615031757.71.0.0216210718913.issue43417@roundup.psfhosted.org> New submission from Batuhan Taskaya : Currently, buffer is just an instance-level list that is used in various places to avoid directly writing stuff into the real source buffer, though the design is pretty complicated and hard to use. There are various use cases (like omitting the empty space when unparsing argument-less lambdas, e.g. lambda : 2 + 2) where we could have used this buffer system if it offered a stackable version (like multiple levels of buffers).
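A stackable buffer of this sort can be sketched with a small context manager. The names below (Writer, buffered) are illustrative only, not the actual ast._Unparser API:

```python
from contextlib import contextmanager

class Writer:
    def __init__(self):
        self._sink = []               # the "real" source buffer
        self._stack = [self._sink]    # writes go to the innermost buffer

    def write(self, text):
        self._stack[-1].append(text)

    @contextmanager
    def buffered(self):
        buf = []
        self._stack.append(buf)       # redirect writes into scratch space
        try:
            yield buf
        finally:
            self._stack.pop()         # restore the outer buffer

w = Writer()
w.write("head ")
with w.buffered() as buf:
    w.write("a")
    w.write("b")
w.write("".join(buf).upper())         # post-process captured text, then emit
print("".join(w._sink))               # head AB
```

Because the buffers form a stack, such contexts nest naturally, which is what a "multiple levels of buffers" design needs.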
What I think is we should probably do this with a proper context manager that captures all writes into a list, which we would return after the context is closed:

    with self.buffered() as buffer:
        self._write_fstring_inner(node)
    return self._write_str_avoiding_backslashes("".join(buffer))

---------- assignee: BTaskaya components: Library (Lib) messages: 388197 nosy: BTaskaya priority: normal severity: normal status: open title: ast.unparse: Simplify buffering logic versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 08:33:26 2021 From: report at bugs.python.org (Hugo Chia) Date: Sat, 06 Mar 2021 13:33:26 +0000 Subject: [New-bugs-announce] [issue43418] FTPLib module crashes when server returns byte message instead of string Message-ID: <1615037606.06.0.879407636626.issue43418@roundup.psfhosted.org> New submission from Hugo Chia : https://github.com/cowrie/cowrie/issues/1394 https://github.com/cowrie/cowrie/pull/1396 Above are some of the links mentioning the issue with the FTPLib module. It happens when the FTP server returns a byte message instead of a string. Ftplib expects a string and does not account for receiving a byte message. ---------- components: Library (Lib) messages: 388198 nosy: hugochiaxyz8 priority: normal severity: normal status: open title: FTPLib module crashes when server returns byte message instead of string type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 09:09:53 2021 From: report at bugs.python.org (Lanfon) Date: Sat, 06 Mar 2021 14:09:53 +0000 Subject: [New-bugs-announce] [issue43419] contextvars does not work properly in asyncio REPL.
Message-ID: <1615039793.26.0.615166791283.issue43419@roundup.psfhosted.org> New submission from Lanfon : Demonstration (via python -m asyncio):

asyncio REPL 3.9.0 (default, Oct 18 2020, 00:21:26) [Clang 11.0.0 (clang-1100.0.33.16)] on darwin
Use "await" directly instead of "asyncio.run()".
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> from contextvars import ContextVar
>>> ctx = ContextVar('ctx')
>>> ctx.set(1)
<Token var=<ContextVar name='ctx' at 0x...> at 0x1021bf800>
>>> ctx.get()
Traceback (most recent call last):
  File "/Users/lanfon/.pyenv/versions/3.9.0/lib/python3.9/concurrent/futures/_base.py", line 440, in result
    return self.__get_result()
  File "/Users/lanfon/.pyenv/versions/3.9.0/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/Users/lanfon/.pyenv/versions/3.9.0/lib/python3.9/asyncio/__main__.py", line 34, in callback
    coro = func()
  File "<console>", line 1, in <module>
LookupError: <ContextVar name='ctx' at 0x...>
>>> exit()

The same problem also occurs inside functions when the context variable is referenced in the global scope. ---------- components: asyncio messages: 388199 nosy: asvetlov, lanfon72, yselivanov priority: normal severity: normal status: open title: contextvars does not work properly in asyncio REPL. type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 09:18:17 2021 From: report at bugs.python.org (Sergey B Kirpichev) Date: Sat, 06 Mar 2021 14:18:17 +0000 Subject: [New-bugs-announce] [issue43420] Optimize rational arithmetics Message-ID: <1615040297.73.0.541090358152.issue43420@roundup.psfhosted.org> New submission from Sergey B Kirpichev : fractions.py uses naive algorithms for doing arithmetics. It may be worth implementing less trivial versions for addition/subtraction and multiplication (e.g.
the Henrici algorithm and so on), described here: https://www.eecis.udel.edu/~saunders/courses/822/98f/collins-notes/rnarith.ps as e.g. gmplib does: https://gmplib.org/repo/gmp/file/tip/mpq/aors.c Some projects (e.g. SymPy here: https://github.com/sympy/sympy/pull/12656) reinvent the stdlib's Fraction just to add such simple improvements. With big denominators (~10**6) this really does matter; my local benchmarks suggest an order of magnitude difference for summation of several such numbers. ---------- components: Library (Lib) messages: 388200 nosy: Sergey.Kirpichev priority: normal severity: normal status: open title: Optimize rational arithmetics type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 10:04:20 2021 From: report at bugs.python.org (Eryk Sun) Date: Sat, 06 Mar 2021 15:04:20 +0000 Subject: [New-bugs-announce] [issue43421] os.device_encoding(fd) should support any console fd in Windows Message-ID: <1615043060.58.0.743954048014.issue43421@roundup.psfhosted.org> New submission from Eryk Sun : In Windows, os.device_encoding() is hard coded to map file descriptor 0 and descriptors 1 and 2 respectively to the console's input and output code page if isatty(fd) is true. But isatty() is true for any character device, such as "NUL". Also any fd might be a console, by way of dup(), dup2() -- or open() with the device names "CON", "CONIN$", and "CONOUT$". The correct device encoding of a console file is needed for use with os.read() and os.write(). It's also necessary for io.TextIOWrapper() if PYTHONLEGACYWINDOWSSTDIO is set. _Py_device_encoding() in Python/fileutils.c should use _get_osfhandle() to get the OS handle, and, if it's a character-device file, determine the code page to return, if any, depending on whether it's an input or output console file.
For example:

PyObject *
_Py_device_encoding(int fd)
{
#if defined(MS_WINDOWS)
    HANDLE handle;
    DWORD temp;
    UINT cp = 0;

    _Py_BEGIN_SUPPRESS_IPH
    handle = (HANDLE)_get_osfhandle(fd);
    _Py_END_SUPPRESS_IPH
    if (handle == INVALID_HANDLE_VALUE ||
        GetFileType(handle) != FILE_TYPE_CHAR)
        Py_RETURN_NONE;

    Py_BEGIN_ALLOW_THREADS
    /* GetConsoleMode requires a console handle. */
    if (!GetConsoleMode(handle, &temp)) {
        /* Assume access denied implies output. */
        if (GetLastError() == ERROR_ACCESS_DENIED)
            cp = GetConsoleOutputCP();
    } else {
        if (GetNumberOfConsoleInputEvents(handle, &temp)) {
            cp = GetConsoleCP();
        } else {
            cp = GetConsoleOutputCP();
        }
    }
    Py_END_ALLOW_THREADS

    if (cp == CP_UTF8) {
        return PyUnicode_FromString("UTF-8");
    } else if (cp != 0) {
        return PyUnicode_FromFormat("cp%u", (unsigned int)cp);
    } else {
        Py_RETURN_NONE;
    }
#else
    if (isatty(fd)) {
        return _Py_GetLocaleEncodingObject();
    } else {
        Py_RETURN_NONE;
    }
#endif /* defined(MS_WINDOWS) */
}

---------- components: Extension Modules, IO, Interpreter Core, Windows messages: 388201 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.device_encoding(fd) should support any console fd in Windows type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 10:18:26 2021 From: report at bugs.python.org (Antoine Pitrou) Date: Sat, 06 Mar 2021 15:18:26 +0000 Subject: [New-bugs-announce] [issue43422] Revert _decimal C API changes Message-ID: <1615043906.44.0.566874098139.issue43422@roundup.psfhosted.org> New submission from Antoine Pitrou : Stefan Krah (who doesn't have access rights here, and is the author of the C _decimal module) asked me to transmit this request: """ The capsule API does not meet my testing standards, since I've focused on the upstream mpdecimal in the last couple of months.
Additionally, I'd like to refine the API, perhaps together with the Arrow community. """ The relevant diff is here: https://github.com/python/cpython/compare/master...skrah:revert_decimal_capsule_api I can turn it into a PR but first I'd like to gather reactions here. ---------- components: Extension Modules messages: 388205 nosy: facundobatista, mark.dickinson, pitrou, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Revert _decimal C API changes type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 13:52:42 2021 From: report at bugs.python.org (Chris Griffith) Date: Sat, 06 Mar 2021 18:52:42 +0000 Subject: [New-bugs-announce] [issue43423] Subprocess IndexError possible in _communicate Message-ID: <1615056762.46.0.090803448155.issue43423@roundup.psfhosted.org> New submission from Chris Griffith : It is possible to run into an IndexError in the subprocess module's _communicate function.

```
    return run(
  File "subprocess.py", line 491, in run
  File "subprocess.py", line 1024, in communicate
  File "subprocess.py", line 1418, in _communicate
IndexError: list index out of range
```

The lines in question are:

```
if stdout is not None:
    stdout = stdout[0]
if stderr is not None:
    stderr = stderr[0]
```

I believe this is due to the fact that there is no safety checking to make sure that self._stdout_buff and self._stderr_buff have any content in them after being set to empty lists. The fix I suggest is to change the checks from `if stdout is not None` to simply `if stdout` to make sure it is a populated list. Example fixed code:

```
if stdout:
    stdout = stdout[0]
if stderr:
    stderr = stderr[0]
```

If a more stringent check is required, we could expand that out to check type and length, such as `isinstance(stdout, list) and len(stdout) > 0:`, but that is more than necessary currently.
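The distinction the suggested fix relies on can be sketched outside of subprocess (the variable name below is a hypothetical stand-in, not the real internals):

```python
# Hypothetical stand-in for subprocess's internal buffer after a run where the
# reader thread appended nothing: the buffer is an empty list, not None.
stdout_buff = []

# The `is not None` check lets the empty list through, so `stdout_buff[0]`
# would raise IndexError; the plain truthiness check skips it instead.
ok_with_none_check = stdout_buff is not None  # True, even though empty
ok_with_truthiness = bool(stdout_buff)        # False, so indexing is skipped

assert ok_with_none_check and not ok_with_truthiness
```

This is why `if stdout:` avoids the crash: an empty list is falsy, while `is not None` only filters out the "output was not captured" case.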
---------- components: Library (Lib) messages: 388211 nosy: cdgriffith priority: normal severity: normal status: open title: Subprocess IndexError possible in _communicate type: crash versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 6 20:38:28 2021 From: report at bugs.python.org (Ilya Grigoriev) Date: Sun, 07 Mar 2021 01:38:28 +0000 Subject: [New-bugs-announce] [issue43424] Document the `controller.name` field in `webbrowser` module Message-ID: <1615081108.59.0.182832926789.issue43424@roundup.psfhosted.org> New submission from Ilya Grigoriev : The object `webbrowser.get()` returns has, and had for a long time, a useful but undocumented field `name`. I wonder if it would be OK to document it as something like `a system-dependent name for the browser`. This would go here: https://docs.python.org/3/library/webbrowser.html#browser-controller-objects The reason I'd like this is so that I can write code like the following:

```python
# In Crostini Chrome OS Linux, the default browser is set to an
# intermediary called `garcon-url-handler`.
# It opens URLs in Chrome running outside the linux VM. This
# browser does not have access to the Linux filesystem. Some references:
# https://chromium.googlesource.com/chromiumos/platform2/+/master/vm_tools/garcon/#opening-urls
# https://source.chromium.org/search?q=garcon-url-handler
if "garcon-url-handler" in webbrowser.get().name:
    webbrowser.open("http://external-url.com/docs.html")
else:
    webbrowser.open("file:///usr/share/doc/docs.html")
```

This would work correctly, even if the user has installed a browser native to the Linux VM and put it into their `BROWSER` environment variable. I don't know a better way to achieve the same effect.
Some references to where the `name` field was introduced: https://bugs.python.org/issue754022 https://github.com/python/cpython/commit/e8f244305ef4f257f6999b69601f4316b31faa5e ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 388218 nosy: docs at python, ilyagr priority: normal severity: normal status: open title: Document the `controller.name` field in `webbrowser` module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 7 00:07:09 2021 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sun, 07 Mar 2021 05:07:09 +0000 Subject: [New-bugs-announce] [issue43425] test_peg_generator.test_c_parser emits DeprecationWarning due to distutils Message-ID: <1615093629.58.0.305422387924.issue43425@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : distutils was deprecated for removal in Python 3.10. It is used in test_peg_generator.test_c_parser which emits a deprecation warning. It also seems to be used in test_support for missing_compiler_executable that will emit a deprecation warning.

./python -Wall -m test test_peg_generator test_c_parser
0:00:00 load avg: 0.02 Run tests sequentially
0:00:00 load avg: 0.02 [1/2] test_peg_generator
/root/cpython/Lib/test/test_peg_generator/test_c_parser.py:4: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
  from distutils.tests.support import TempdirManager

Other places found by grep:

rg 'from distutils' | rg -v 'Lib/distutils|rst'
Modules/_decimal/tests/formathelper.py:from distutils.spawn import find_executable
Doc/includes/setup.py:from distutils.core import setup, Extension
Doc/includes/test.py:from distutils.util import get_platform
setup.py:from distutils import log
setup.py:from distutils.command.build_ext import build_ext
setup.py:from distutils.command.build_scripts import build_scripts
setup.py:from distutils.command.install import install
setup.py:from distutils.command.install_lib import install_lib
setup.py:from distutils.core import Extension, setup
setup.py:from distutils.errors import CCompilerError, DistutilsError
setup.py:from distutils.spawn import find_executable
Lib/_osx_support.py: from distutils import log
Lib/_osx_support.py: Currently called from distutils.sysconfig
Lib/test/support/__init__.py: from distutils import ccompiler, sysconfig, spawn, errors
Lib/test/test_importlib/test_windows.py:from distutils.util import get_platform
Lib/test/test_peg_generator/test_c_parser.py:from distutils.tests.support import TempdirManager
Tools/peg_generator/pegen/build.py: from distutils.core import Distribution, Extension
Tools/peg_generator/pegen/build.py: from distutils.command.clean import clean # type: ignore
Tools/peg_generator/pegen/build.py: from distutils.command.build_ext import build_ext # type: ignore
Tools/peg_generator/pegen/build.py: from distutils.tests.support import fixup_build_ext # type: ignore
Tools/test2to3/setup.py:from distutils.core import setup
Tools/test2to3/setup.py: from distutils.command.build_py import build_py_2to3 as build_py
Tools/test2to3/setup.py: from distutils.command.build_py import build_py
Tools/test2to3/setup.py: from distutils.command.build_scripts import build_scripts_2to3 as build_scripts
Tools/test2to3/setup.py: from distutils.command.build_scripts import build_scripts
Tools/test2to3/test/runtests.py: from distutils.util import copydir_run_2to3
Misc/HISTORY:- Issue #5394: removed > 2.3 syntax from distutils.msvc9compiler.

---------- components: Tests messages: 388222 nosy: pablogsal, xtreak priority: normal severity: normal status: open title: test_peg_generator.test_c_parser emits DeprecationWarning due to distutils type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 7 00:11:06 2021 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sun, 07 Mar 2021 05:11:06 +0000 Subject: [New-bugs-announce] [issue43426] test_importlib.test_windows emits deprecation warning over usage of distutils Message-ID: <1615093866.16.0.242132753657.issue43426@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : test_windows uses distutils which emits a deprecation warning due to distutils being deprecated. sysconfig.get_platform and distutils.util.get_host_platform seem to be identical though distutils.util.get_platform has an extra if clause for nt systems. This is related to https://bugs.python.org/issue41282

./python -Wall -m test test_importlib.test_windows
0:00:00 load avg: 0.00 Run tests sequentially
0:00:00 load avg: 0.00 [1/1] test_importlib.test_windows
/root/cpython/Lib/test/test_importlib/test_windows.py:10: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
  from distutils.util import get_platform
test_importlib.test_windows skipped -- No module named 'winreg'
test_importlib.test_windows skipped
== Tests result: SUCCESS ==
1 test skipped: test_importlib.test_windows
Total duration: 56 ms
Tests result: SUCCESS

---------- components: Tests, Windows messages: 388223 nosy: paul.moore, steve.dower, tim.golden, xtreak, zach.ware priority: normal severity: normal status: open title: test_importlib.test_windows emits deprecation warning over usage of distutils type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 7 10:42:35 2021 From: report at bugs.python.org (Marcos M) Date: Sun, 07 Mar 2021 15:42:35 +0000 Subject: [New-bugs-announce] [issue43427] Possible error on the descriptor howto guide Message-ID: <1615131755.48.0.550524446603.issue43427@roundup.psfhosted.org> New submission from Marcos M : > To recap, functions have a __get__() method so that they can be converted to a method when accessed as attributes. The non-data descriptor transforms an obj.f(*args) call into f(obj, *args). Calling cls.f(*args) becomes f(*args). I THINK it should say cls.f(*args) becomes f(cls, *args) as stated in the table that follows that paragraph.
Message-ID: <1615157667.11.0.27235417477.issue43428@roundup.psfhosted.org> New submission from Jason R. Coombs : importlib_metadata has added a few important [changes](https://importlib-metadata.readthedocs.io/en/latest/history.html#v3-7-0) since the last sync in issue42382 (importlib_metadata 3.3):

- Performance enhancements to distribution discovery.
- `entry_points` only returns unique distributions.
- Introduces new ``EntryPoints`` object for containing a set of entry points with convenience methods for selecting entry points by group or name.
- Added packages_distributions function to return a mapping of packages to the distributions that provide them.

---------- assignee: jaraco components: Library (Lib) messages: 388250 nosy: jaraco priority: normal severity: normal status: open title: Sync importlib_metadata enhancements through 3.7. versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 7 18:29:06 2021 From: report at bugs.python.org (Zackery Spytz) Date: Sun, 07 Mar 2021 23:29:06 +0000 Subject: [New-bugs-announce] [issue43429] mmap.size() raises OSError on Unix for anonymous memory Message-ID: <1615159746.88.0.364980606308.issue43429@roundup.psfhosted.org> New submission from Zackery Spytz : For anonymous memory, mmap.size() works without issue on Windows, but it raises OSError on Unix. ---------- components: Extension Modules messages: 388252 nosy: ZackerySpytz priority: normal severity: normal status: open title: mmap.size() raises OSError on Unix for anonymous memory type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 7 19:58:27 2021 From: report at bugs.python.org (Suhail S.)
Date: Mon, 08 Mar 2021 00:58:27 +0000 Subject: [New-bugs-announce] [issue43430] Exception raised when attempting to create Enum via functional API Message-ID: <1615165107.75.0.80264384575.issue43430@roundup.psfhosted.org> New submission from Suhail S. : It is possible to create custom Enum classes with a metaclass that is a subtype of EnumMeta. It is also possible to inherit from such an enumeration to create another enumeration. However, attempting to do so via the functional API raises an exception. See the attached file, which highlights a minimal failing test case. ---------- components: Library (Lib) files: test.py messages: 388255 nosy: suhailsingh247 priority: normal severity: normal status: open title: Exception raised when attempting to create Enum via functional API type: crash versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49855/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 00:41:46 2021 From: report at bugs.python.org (Jordan Macdonald) Date: Mon, 08 Mar 2021 05:41:46 +0000 Subject: [New-bugs-announce] [issue43431] Subprocess timeout causes output to be returned as bytes in text mode Message-ID: <1615182106.34.0.569482281722.issue43431@roundup.psfhosted.org> New submission from Jordan Macdonald : Passing the argument `text=True` to `subprocess.run()` is supposed to mean that any captured output of the called process is automatically decoded and returned to the user as text instead of bytes. However, if you give a timeout and that timeout expires, the raised `subprocess.TimeoutExpired` exception will have the captured output as bytes even if text mode is enabled.
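A self-contained reproducer sketch of the described behavior (the child command and timeout values here are illustrative, not the attached test file):

```python
import subprocess
import sys

# Child prints 'Start', then sleeps past our timeout before printing 'Done'.
code = "import time; print('Start', flush=True); time.sleep(5); print('Done')"

try:
    subprocess.run([sys.executable, "-c", code],
                   capture_output=True, text=True, timeout=1)
except subprocess.TimeoutExpired as exc:
    # Despite text=True, exc.stdout holds the partial output as bytes on the
    # affected versions.
    print(type(exc.stdout))
```

On an affected interpreter this prints the bytes type, while a run that completes normally yields str output.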
Test output:

bash-5.0$ python3 test_subprocess.py
Version and interpreter information: namespace(_multiarch='x86_64-linux-gnu', cache_tag='cpython-37', hexversion=50792432, name='cpython', version=sys.version_info(major=3, minor=7, micro=7, releaselevel='final', serial=0))
Completed STDOUT Type: <class 'str'>
Completed STDOUT Content: 'Start\nDone\n'
Timeout STDOUT Type: <class 'bytes'>
Timeout STDOUT Content: b'Start\n'

---------- components: Library (Lib) files: test_subprocess.py messages: 388257 nosy: macdjord priority: normal severity: normal status: open title: Subprocess timeout causes output to be returned as bytes in text mode type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file49856/test_subprocess.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 07:05:40 2021 From: report at bugs.python.org (parsa mpsh) Date: Mon, 08 Mar 2021 12:05:40 +0000 Subject: [New-bugs-announce] [issue43432] Add function `clear` to the `os` module Message-ID: <1615205140.87.0.702709667317.issue43432@roundup.psfhosted.org> New submission from parsa mpsh : I want to add a new function named `clear` to the os module. This function runs the OS's `clear` command, except that it first checks whether the OS is Windows; if so, it runs `cls` instead. I'm working on my patch.
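A rough sketch of what the proposed function might look like (the shell-command dispatch is the proposal's idea; the exact implementation below is only an assumption, not a submitted patch):

```python
import os
import sys

def clear():
    # Hypothetical implementation of the proposed os.clear(): dispatch to the
    # platform's own screen-clearing command ('cls' on Windows, 'clear'
    # elsewhere) and return the command's exit status.
    command = "cls" if sys.platform == "win32" else "clear"
    return os.system(command)
```

An alternative design would write the ANSI escape sequence `\x1b[2J` directly instead of spawning a shell, which avoids a subprocess but depends on terminal support.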
---------- components: Library (Lib) messages: 388262 nosy: parsampsh priority: normal severity: normal status: open title: Add function `clear` to the `os` module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 07:29:16 2021 From: report at bugs.python.org (OndrejPtak) Date: Mon, 08 Mar 2021 12:29:16 +0000 Subject: [New-bugs-announce] [issue43433] xmlrpc.client ignores query in URI ("?action=xmlrpc2") from python-3.9 Message-ID: <1615206556.6.0.195752820504.issue43433@roundup.psfhosted.org> New submission from OndrejPtak : xmlrpc.client proxy behaviour changed and broke tools depending on URI containing query part. Last working version: https://github.com/python/cpython/blob/3.8/Lib/xmlrpc/client.py#L1417 Changed behaviour here: https://github.com/python/cpython/blob/3.9/Lib/xmlrpc/client.py#L1424 Is this change intended? If so, what is recommended solution for xmlrpc client communicating with URI: http://example.com/path?var=foo ? ---------- messages: 388263 nosy: OndrejPtak priority: normal severity: normal status: open title: xmlrpc.client ignores query in URI ("?action=xmlrpc2") from python-3.9 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 07:53:59 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Mon, 08 Mar 2021 12:53:59 +0000 Subject: [New-bugs-announce] [issue43434] sqlite3.Connection(...) bypasses 'sqlite3.connect' audit hooks Message-ID: <1615208039.92.0.863639207352.issue43434@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : The module level connect method is guarded by PySys_Audit(), but sqlite3.Connection.__init__() is not. It is possible to bypass the module level connect() method simply by creating a new sqlite3.Connection object directly. 
Easily fixed by either moving the PySys_Audit() check to pysqlite_connection_init(), or by adding an extra check in pysqlite_connection_init().

>>> import sqlite3, sys
>>> def hook(s, e):
...     if s == 'sqlite3.connect':
...         raise PermissionError
...
>>> sys.addaudithook(hook)
>>> sqlite3.connect(':memory:')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in hook
PermissionError
>>> sqlite3.Connection(':memory:')
<sqlite3.Connection object at 0x...>

---------- components: Library (Lib) files: audit.py messages: 388264 nosy: berker.peksag, erlendaasland, steve.dower priority: normal severity: normal status: open title: sqlite3.Connection(...) bypasses 'sqlite3.connect' audit hooks type: security versions: Python 3.10, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49857/audit.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 10:31:33 2021 From: report at bugs.python.org (David Wood) Date: Mon, 08 Mar 2021 15:31:33 +0000 Subject: [New-bugs-announce] [issue43435] Py_BuildValue("y#".... returns incomplete result Message-ID: <1615217493.05.0.795009945001.issue43435@roundup.psfhosted.org> New submission from David Wood : I have a C function to encrypt values which returns an array of bytes. The function returns proper values outside of Python. When used as a Python function, the result is incomplete usually 10-20% of the time. If I add a sleep(1) call before returning from the function, my success rate goes to 100%. While this works, it is unacceptable as it will create enormous latency in my application.
static PyObject *method_encrypt(PyObject *self, PyObject *args)
{
    char *keyval, *str = NULL, output[512];
    Py_ssize_t count = 0;
    PyObject *retval;

    if (!PyArg_ParseTuple(args, "ss", &str, &keyval)) {
        return NULL;
    }
    encryptBlowfishCfb(str, &count, output, keyval);
    retval = Py_BuildValue("y#", output, count);
    //sleep(1);
    return retval;
}

---------- components: C API messages: 388268 nosy: dwoodjunkmail priority: normal severity: normal status: open title: Py_BuildValue("y#".... returns incomplete result type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 12:12:28 2021 From: report at bugs.python.org (Dan Snider) Date: Mon, 08 Mar 2021 17:12:28 +0000 Subject: [New-bugs-announce] [issue43436] bounded _lru_cache_wrapprer behaves as if typed=True when it wasn't Message-ID: <1615223548.2.0.417347822108.issue43436@roundup.psfhosted.org> New submission from Dan Snider : Isn't the point of setting typed=True to make it so that e.g. True doesn't register as a hit when there is already a cache entry for 1.0? Assuming that is the case, although this report specifically targets 3.8 I found no indication that what I believe is the cause of this has been fixed in the interim.
def test():
    from functools import lru_cache

    class No1:
        __eq__ = 0 .__eq__
        __hash__ = 0 .__hash__

    class No2:
        __eq__ = (0,).__contains__
        def __hash__(self, /):
            return hash(0)

    @lru_cache(256, typed=False)
    def test(v):
        return [v]
    test(No1()), test(No1()), test(0.0), test(0)
    print(test.cache_info())

    @lru_cache(256, typed=False)
    def test(v):
        return [v]
    test(No2()), test(No2()), test(0.0), test(0)
    print(test.cache_info())

CacheInfo(hits=0, misses=4, maxsize=256, currsize=4)
CacheInfo(hits=2, misses=2, maxsize=256, currsize=2)

---------- messages: 388271 nosy: bup priority: normal severity: normal status: open title: bounded _lru_cache_wrapprer behaves as if typed=True when it wasn't versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 13:17:00 2021 From: report at bugs.python.org (Jeff Moguillansky) Date: Mon, 08 Mar 2021 18:17:00 +0000 Subject: [New-bugs-announce] [issue43437] venv activate bash script has wrong line endings on windows Message-ID: <1615227420.16.0.158256767394.issue43437@roundup.psfhosted.org> New submission from Jeff Moguillansky : When running python.exe -m venv on Windows, it creates several activate scripts. The activate bash script has the wrong line endings (it should be unix-style, not windows-style).
Bash scripts should always end with unix-style line endings. ---------- components: Windows messages: 388276 nosy: jmoguill2, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: venv activate bash script has wrong line endings on windows type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 15:26:14 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 08 Mar 2021 20:26:14 +0000 Subject: [New-bugs-announce] [issue43438] [doc] sys.addaudithook() documentation should be more explicit on its limitations Message-ID: <1615235174.36.0.713670774369.issue43438@roundup.psfhosted.org> New submission from STINNER Victor : Recently, the PEP 578 audit hooks were used to build a Capture The Flag (CTF) security challenge, AntCTF x D^3CTF: https://d3ctf.io/ Multiple issues have been reported to the Python Security Response Team (PSRT) from this challenge. It seems like there was a misunderstanding on the intent of the PEP 578. Building a sandbox using audit hooks is *explicitly* excluded from the PEP 578 design: https://www.python.org/dev/peps/pep-0578/#why-not-a-sandbox See also the PEP 551 for more details. The problem is that these two PEPs are not well summarized in the Python documentation, especially in the sys.addaudithook() documentation: https://docs.python.org/dev/library/sys.html#sys.addaudithook The documentation should better describe the limitations of audit hooks, and may also point to these two PEPs for more information (PEP 578 is already mentioned). The bare minimum would be to explicitly say that it should not be used to build a sandbox. By design, audit events are a whack-a-mole game. Rather than starting from a short "allow list", the mechanism is based on a "deny list", so it cannot be safe or complete by design.
Every "forgotten" audit event can be "abused" to take control of the application. And that's perfectly *fine*. It should just be documented. ---------- assignee: docs at python components: Documentation messages: 388299 nosy: christian.heimes, docs at python, steve.dower, vstinner priority: normal severity: normal status: open title: [doc] sys.addaudithook() documentation should be more explicit on its limitations versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 15:32:28 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 08 Mar 2021 20:32:28 +0000 Subject: [New-bugs-announce] [issue43439] [security] Add audit events on GC functions giving access to all Python objects Message-ID: <1615235548.58.0.659993767767.issue43439@roundup.psfhosted.org> New submission from STINNER Victor : It is currently possible to discover the internal list of audit hooks using gc module functions, like gc.get_objects(), and so remove an audit hook, whereas that is supposed to not be possible. The PEP 578 states: "Hooks cannot be removed or replaced." Rather than attempting to fix this specific vulnerability, I suggest adding new audit events on the following gc functions:

* gc.get_objects()
* gc.get_referrers()
* gc.get_referents()

These functions are "dangerous" since they can expose Python objects in an inconsistent state. In the past, we had multiple bugs related to "internal" tuples which were not fully initialized (but already tracked by the GC). See bpo-15108 for an example. Note: if someone wants to address the ability to remove an audit hook, the internal list can be modified to not be a Python object.
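A short illustration of why these functions are considered dangerous: gc.get_objects() can reach objects that no name in the current scope refers to directly (a toy class here, not the actual audit-hook list):

```python
import gc

class _Private:
    """Toy stand-in for an object a module tries to keep internal."""
    def __init__(self):
        self.secret = "hook list"

def _make():
    # The instance is only reachable through the returned container;
    # no global name points at the instance itself.
    return [_Private()]

_holder = _make()

# gc.get_objects() still finds the instance, so "internal" does not
# mean "inaccessible" as long as the object is tracked by the GC.
found = [o for o in gc.get_objects() if isinstance(o, _Private)]
assert found and found[0].secret == "hook list"
```

This is why auditing gc.get_objects() (and the referrers/referents functions) matters: they give a hook-free path to otherwise-hidden state.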
---------- components: Library (Lib) messages: 388300 nosy: christian.heimes, pablogsal, steve.dower, vstinner priority: normal severity: normal status: open title: [security] Add audit events on GC functions giving access to all Python objects versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 19:24:08 2021 From: report at bugs.python.org (Steve Dower) Date: Tue, 09 Mar 2021 00:24:08 +0000 Subject: [New-bugs-announce] [issue43440] Enable rtree support in SQLite Message-ID: <1615249448.96.0.887210184332.issue43440@roundup.psfhosted.org> New submission from Steve Dower : I heard [1] that rtree support would be very useful in SQLite. We should see whether enabling it is safe/easy/cheap and then do it. 1: https://twitter.com/Jamieallencook/status/1368221499794976775?s=20 ---------- components: Build, Library (Lib), Windows messages: 388320 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Enable rtree support in SQLite type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 8 22:54:44 2021 From: report at bugs.python.org (junyixie) Date: Tue, 09 Mar 2021 03:54:44 +0000 Subject: [New-bugs-announce] [issue43441] mutilcorevm: global variable next_version_tag cause method cache bug Message-ID: <1615262084.5.0.815014213556.issue43441@roundup.psfhosted.org> New submission from junyixie :

type->tp_version_tag = next_version_tag++;

When sub-interpreters run in parallel, next_version_tag++ is thread-unsafe and may give different types the same tp_version_tag.
This causes a method cache bug in _PyType_Lookup:

#define MCACHE_HASH_METHOD(type, name) \
    MCACHE_HASH((type)->tp_version_tag, \
                ((PyASCIIObject *)(name))->hash)

    if (MCACHE_CACHEABLE_NAME(name) &&
        _PyType_HasFeature(type, Py_TPFLAGS_VALID_VERSION_TAG)) {
        /* fast path */
        unsigned int h = MCACHE_HASH_METHOD(type, name);
        struct type_cache *cache = get_type_cache();
        struct type_cache_entry *entry = &cache->hashtable[h];
        if (entry->version == type->tp_version_tag && entry->name == name) {
#if MCACHE_STATS
            cache->hits++;
#endif
            return entry->value;
        }
    }

static int
assign_version_tag(struct type_cache *cache, PyTypeObject *type)
{
    ...
    type->tp_version_tag = next_version_tag++;
    ...
}

---------- components: Interpreter Core messages: 388327 nosy: JunyiXie priority: normal severity: normal status: open title: mutilcorevm: global variable next_version_tag cause method cache bug versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 01:39:09 2021 From: report at bugs.python.org (junyixie) Date: Tue, 09 Mar 2021 06:39:09 +0000 Subject: [New-bugs-announce] [issue43442] multicorevm: guarantee type multi sub interpreters safe Message-ID: <1615271949.39.0.761711981619.issue43442@roundup.psfhosted.org> New submission from junyixie : In the multi-core CPython project, types are not safe when multiple sub-interpreters are used: type objects are shared by interpreters, but isolating types may cause Python ABI/API changes (Python 4.0?). Temporary solutions: 1. add a type lock to guarantee that type objects are safe across sub-interpreters; 2. for objects such as PyCMethod objects and descriptors in a type, set their refcount to INT_MAX so that these objects are never released. This does not cause memory leaks, since only one copy of the type exists in memory.
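The unsynchronized increment described in issue43441, and the kind of lock proposed as a temporary solution, can be illustrated in pure Python with a toy counter (this is a sketch, not the real C-level next_version_tag):

```python
import threading

counter = 0              # toy stand-in for next_version_tag
lock = threading.Lock()  # toy stand-in for the proposed type lock

def bump(n):
    # In C, a bare `counter++` from several threads can lose updates and hand
    # out duplicate values; holding a lock around the increment prevents that.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 4 * 10_000  # with the lock, every increment is kept
```

With the lock removed, a C (or free-threaded) version of this loop could end with a smaller total, which is exactly how two types could receive the same version tag.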
---------- messages: 388333 nosy: JunyiXie priority: normal severity: normal status: open title: multicorevm: guarantee type multi sub interpreters safe _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 03:46:35 2021 From: report at bugs.python.org (Dominik Vilsmeier) Date: Tue, 09 Mar 2021 08:46:35 +0000 Subject: [New-bugs-announce] [issue43443] Should shelve support dict union? Message-ID: <1615279595.44.0.954464448046.issue43443@roundup.psfhosted.org> New submission from Dominik Vilsmeier : The docs of shelve mention that > Shelf objects support all methods supported by dictionaries. This eases the transition from dictionary based scripts to those requiring persistent storage. However the `|=` operator is not implemented, preventing a seamless transition from `dict` to `shelve`. So should this be implemented for `Shelf` as well? `|` on the other hand doesn't make much sense. Otherwise the docs could be updated. ---------- components: Library (Lib) messages: 388335 nosy: Dominik V. priority: normal severity: normal status: open title: Should shelve support dict union? type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 05:34:19 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Tue, 09 Mar 2021 10:34:19 +0000 Subject: [New-bugs-announce] [issue43444] [sqlite3] Move MODULE_NAME def from setup.py to module.h Message-ID: <1615286059.68.0.212415918598.issue43444@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : Berker, can we please move the MODULE_NAME define from setup.py to Modules/_sqlite/module.h? I'm tired of all the undeclared identifier warnings. No other module defines their MODULE_NAME in setup.py. 
---------- components: Library (Lib) messages: 388344 nosy: berker.peksag, erlendaasland priority: normal severity: normal status: open title: [sqlite3] Move MODULE_NAME def from setup.py to module.h type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 05:35:11 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 09 Mar 2021 10:35:11 +0000 Subject: [New-bugs-announce] [issue43445] Add frozen modules to sys.stdlib_module_names Message-ID: <1615286111.17.0.443084957575.issue43445@roundup.psfhosted.org> New submission from STINNER Victor : The sys.stdlib_module_names documentation says: "All module kinds are listed: pure Python, built-in, frozen and extension modules. Test modules are excluded." https://docs.python.org/dev/library/sys.html#sys.stdlib_module_names But I just noticed that frozen modules are not listed! The attached PR fixes this issue. ---------- components: Library (Lib) messages: 388345 nosy: vstinner priority: normal severity: normal status: open title: Add frozen modules to sys.stdlib_module_names versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 06:32:55 2021 From: report at bugs.python.org (Sergio Livi) Date: Tue, 09 Mar 2021 11:32:55 +0000 Subject: [New-bugs-announce] [issue43446] Wrong character in footnote Message-ID: <1615289575.44.0.162763918069.issue43446@roundup.psfhosted.org> New submission from Sergio Livi : Hello, There seems to be a display error in the sqlite documentation: https://docs.python.org/3.9/library/sqlite3.html#f1 The footnote says "To get loadable extension support, you must pass –enable-loadable-sqlite-extensions to configure." when actually the configure argument is --enable-loadable-sqlite-extensions. The double dash was substituted with a long dash (–), which breaks copy/paste for people.
Here's an example: https://github.com/pyenv/pyenv/issues/1702. ---------- assignee: docs at python components: Documentation messages: 388353 nosy: docs at python, serl2 priority: normal severity: normal status: open title: Wrong character in footnote type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 06:34:24 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 09 Mar 2021 11:34:24 +0000 Subject: [New-bugs-announce] [issue43447] Generate vectorcall code to parse arguments using Argument Clinic Message-ID: <1615289664.02.0.198288195908.issue43447@roundup.psfhosted.org> New submission from STINNER Victor : To optimize the creation of objects, a lot of "tp_new" methods are defined twice: once in the legacy way (tp_new slot), once with the new VECTORCALL calling convention (tp_vectorcall slot). My concern is that the VECTORCALL implementation copy/pastes most of the code just to parse arguments, whereas the type-specific code is just a few lines. Example with the float type constructor:

--------------
static PyObject *
float_new(PyTypeObject *type, PyObject *args, PyObject *kwargs)
{
    PyObject *return_value = NULL;
    PyObject *x = NULL;

    if ((type == &PyFloat_Type) &&
        !_PyArg_NoKeywords("float", kwargs)) {
        goto exit;
    }
    if (!_PyArg_CheckPositional("float", PyTuple_GET_SIZE(args), 0, 1)) {
        goto exit;
    }
    if (PyTuple_GET_SIZE(args) < 1) {
        goto skip_optional;
    }
    x = PyTuple_GET_ITEM(args, 0);
skip_optional:
    return_value = float_new_impl(type, x);

exit:
    return return_value;
}

/*[clinic input]
@classmethod
float.__new__ as float_new

    x: object(c_default="NULL") = 0
    /

Convert a string or number to a floating point number, if possible.
[clinic start generated code]*/

static PyObject *
float_new_impl(PyTypeObject *type, PyObject *x)
/*[clinic end generated code: output=ccf1e8dc460ba6ba input=f43661b7de03e9d8]*/
{
    if (type != &PyFloat_Type) {
        if (x == NULL) {
            x = _PyLong_GetZero();
        }
        return float_subtype_new(type, x); /* Wimp out */
    }

    if (x == NULL) {
        return PyFloat_FromDouble(0.0);
    }
    /* If it's a string, but not a string subclass, use
       PyFloat_FromString. */
    if (PyUnicode_CheckExact(x))
        return PyFloat_FromString(x);
    return PyNumber_Float(x);
}

static PyObject *
float_vectorcall(PyObject *type, PyObject * const*args,
                 size_t nargsf, PyObject *kwnames)
{
    if (!_PyArg_NoKwnames("float", kwnames)) {
        return NULL;
    }

    Py_ssize_t nargs = PyVectorcall_NARGS(nargsf);
    if (!_PyArg_CheckPositional("float", nargs, 0, 1)) {
        return NULL;
    }

    PyObject *x = nargs >= 1 ? args[0] : NULL;
    return float_new_impl((PyTypeObject *)type, x);
}
--------------

Here the float_new() function (tp_new slot) is implemented with Argument Clinic: the float_new() C code is generated from the [clinic input] DSL: good! My concern is that the float_vectorcall() code is hand-written; it's boring to write and boring to maintain. Would it be possible to add a new [clinic input] DSL for vectorcall? I expect something like this:

--------------
static PyObject *
float_vectorcall(PyObject *type, PyObject * const*args,
                 size_t nargsf, PyObject *kwnames)
{
    if (!_PyArg_NoKwnames("float", kwnames)) {
        return NULL;
    }

    Py_ssize_t nargs = PyVectorcall_NARGS(nargsf);
    if (!_PyArg_CheckPositional("float", nargs, 0, 1)) {
        return NULL;
    }

    PyObject *x = nargs >= 1 ? args[0] : NULL;
    return float_vectorcall_impl(type, x);
}

static PyObject *
float_vectorcall_impl(PyObject *type, PyObject *x)
{
    return float_new_impl((PyTypeObject *)type, x);
}
--------------

where the float_vectorcall() C code would be generated, and float_vectorcall_impl() would be the only part written manually. float_vectorcall_impl() gets a clean API and its body is way simpler to write and to maintain!
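The wrapper really is mechanical, which is the whole argument for generating it. A toy sketch of the idea in Python (this is not Argument Clinic's real DSL or output; the template and function names are illustrative): the entire vectorcall shell can be emitted from just a name and an argument-count range.

```python
# Toy generator for the boilerplate part of a vectorcall wrapper
# (illustrative only; Argument Clinic's real templates differ).
TEMPLATE = """\
static PyObject *
{name}_vectorcall(PyObject *type, PyObject * const*args,
                  size_t nargsf, PyObject *kwnames)
{{
    if (!_PyArg_NoKwnames("{name}", kwnames)) {{
        return NULL;
    }}
    Py_ssize_t nargs = PyVectorcall_NARGS(nargsf);
    if (!_PyArg_CheckPositional("{name}", nargs, {min_args}, {max_args})) {{
        return NULL;
    }}
    return {name}_vectorcall_impl(type, args, nargs);
}}
"""

def make_vectorcall(name, min_args, max_args):
    """Render the wrapper for one constructor from a tiny spec."""
    return TEMPLATE.format(name=name, min_args=min_args, max_args=max_args)

print(make_vectorcall("float", 0, 1))
```

Nothing in the emitted text depends on the type beyond the spec, which is exactly the property that makes a `[clinic input]` extension for vectorcall plausible.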
---------- components: C API messages: 388354 nosy: corona10, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Generate vectorcall code to parse arguments using Argument Clinic versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 10:42:58 2021 From: report at bugs.python.org (Keepun) Date: Tue, 09 Mar 2021 15:42:58 +0000 Subject: [New-bugs-announce] [issue43448] exec() ignores scope. Message-ID: <1615304578.57.0.704046890189.issue43448@roundup.psfhosted.org> New submission from Keepun : exec() ignores scope. Code:
--------------------------
class ExecTest:
    def public(self):
        h = None
        exec("h='It is public'")
        print(h)
        self._private()

    def _private(self):
        h = None
        exec("h='It is private'", globals(), locals())
        print(h)

h = None
exec("h='It is global'")
print(h)

e = ExecTest()
e.public()

Result
--------------------------
It is global
None
None
--------------------------
Python 3.7.10 (default, Feb 26 2021, 13:06:18) [MSC v.1916 64 bit (AMD64)] and Python 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0] ---------- components: Interpreter Core messages: 388366 nosy: Keepun priority: normal severity: normal status: open title: exec() ignores scope. type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 13:15:25 2021 From: report at bugs.python.org (Jamie Kirkpatrick) Date: Tue, 09 Mar 2021 18:15:25 +0000 Subject: [New-bugs-announce] [issue43449] multiprocessing.Pool - crash in subprocess causes deadlock in parent Message-ID: <1615313725.52.0.420809061492.issue43449@roundup.psfhosted.org> New submission from Jamie Kirkpatrick : When using multiprocessing.Pool.apply[_async] a crash in the subprocess that is assigned the work item results in a deadlock in the parent process.
The parent process remains blissfully unaware of the crash in the subprocess and waits for a result forever. The parent process treats this as normal since the thread running _maintain_pool handles dead processes and repopulates the pool with a replacement subprocess. See the test-case attached. It's not clear how this case should be handled, but it can be very hard to trace issues in an application where this condition arises, since all information about the crashing subprocess is lost (even with debug logging for the multiprocessing module enabled). ---------- components: Library (Lib) files: test.py messages: 388371 nosy: jkp priority: normal severity: normal status: open title: multiprocessing.Pool - crash in subprocess causes deadlock in parent type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49860/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 14:02:51 2021 From: report at bugs.python.org (=?utf-8?q?M=C3=A1rcio_Mocellin?=) Date: Tue, 09 Mar 2021 19:02:51 +0000 Subject: [New-bugs-announce] [issue43450] List amnesia Message-ID: <1615316571.19.0.916785177436.issue43450@roundup.psfhosted.org> New submission from Márcio Mocellin : In Python 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux, when I materialize the list, it is shown once and then appears to be deleted. Shouldn't the list persist in memory? Is this a bug, or is it really like that?
```python
>>> vet_neg
['EH01', 'EH02', 'EH03']
Categories (3, object): ['EH01', 'EH02', 'EH03']
>>> cenarios
[0, 1]
>>> vet_neg_cenarios = itertools.product(vet_neg, cenarios, cenarios)
>>> vet_neg_cenarios
>>> list(vet_neg_cenarios)
[('EH01', 0, 0), ('EH01', 0, 1), ('EH01', 1, 0), ('EH01', 1, 1), ('EH02', 0, 0), ('EH02', 0, 1), ('EH02', 1, 0), ('EH02', 1, 1), ('EH03', 0, 0), ('EH03', 0, 1), ('EH03', 1, 0), ('EH03', 1, 1)]
>>> list(vet_neg_cenarios)
[]
>>>
```

---------- messages: 388372 nosy: marciomocellin priority: normal severity: normal status: open title: List amnesia versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 14:12:29 2021 From: report at bugs.python.org (David Wilson) Date: Tue, 09 Mar 2021 19:12:29 +0000 Subject: [New-bugs-announce] [issue43451] pydoc terminal suboptimal rendering of complex annotations Message-ID: <1615317149.57.0.837523610181.issue43451@roundup.psfhosted.org> New submission from David Wilson : When viewing docs for classes that use annotations, pydoc's rendering of argument lists is regularly truncated at the terminal edge (if using `less -S`), or wrapped in a manner where quickly scanning the output is next to impossible. My 'classic' example is 'pydoc aiohttp.client.ClientSession', where the __init__ argument list wraps to 24 lines. The pull request attached to this issue works around the problem by formatting arguments one-per-line if the sum of the arguments would exceed a hard-coded width of 150 characters. It is more of an RFC than a suggested fix. It produces acceptable formatting, but I'm not sure where the correct fix should be made -- in the inspect module, or somehow in the pydoc module, or even what the correct fix should be.
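The one-per-line idea can be sketched with inspect alone. This is a hypothetical formatter, not pydoc's or the pull request's actual code; `fun` and `format_signature` are made-up names for illustration:

```python
import inspect
from typing import Optional, Union

def fun(a: Optional[int] = None, b: Union[int, str] = 0, c: str = "x"):
    pass

def format_signature(func, width=150):
    """Render a signature on one line, or one parameter per line if too wide."""
    sig = inspect.signature(func)
    one_line = str(sig)
    if len(one_line) <= width:
        return one_line
    params = ",\n    ".join(str(p) for p in sig.parameters.values())
    return "(\n    " + params + "\n)"

# Force the one-per-line layout by pretending the terminal is narrow.
print(format_signature(fun, width=10))
```

The open question from the report remains where such logic belongs: in inspect (so every consumer benefits) or in pydoc (so only help output changes).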
I will attach a before/after screenshot to the pull request. ---------- components: Library (Lib) messages: 388373 nosy: dw priority: normal severity: normal status: open title: pydoc terminal suboptimal rendering of complex annotations versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 14:21:17 2021 From: report at bugs.python.org (Dino Viehland) Date: Tue, 09 Mar 2021 19:21:17 +0000 Subject: [New-bugs-announce] [issue43452] Microoptimize PyType_Lookup for cache hits Message-ID: <1615317677.24.0.232039554227.issue43452@roundup.psfhosted.org> New submission from Dino Viehland : The common case going through _PyType_Lookup is to have a cache hit. There are some small tweaks which can make this a little cheaper: 1) the name field's identity is used for a cache hit, and the name is kept alive by the cache. So there's no need to read the hash code of the name - instead its address can be used as the hash. 2) There's no need to check if the name is cacheable on the lookup either; it probably is, and if it is, it'll be in the cache. 3) If we clear the version tag when invalidating a type, then we don't actually need to check for a valid version tag bit.
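Tweak (1) can be modelled in pure Python. This is a toy cache, not the real C structures; the point is that because the cache itself keeps `name` alive, the object's address can serve as its hash, and a hit is just a version compare plus an identity compare:

```python
# Toy model of the version-tagged method cache (illustrative names only).
class TypeCache:
    SIZE = 64

    def __init__(self):
        self.table = [None] * self.SIZE
        self.hits = 0

    def lookup(self, tp, name, version):
        h = (id(name) ^ version) % self.SIZE   # address-as-hash: no hash read
        entry = self.table[h]
        if entry is not None and entry[0] == version and entry[1] is name:
            self.hits += 1                     # fast path: two comparisons
            return entry[2]
        value = tp.__dict__.get(name)          # slow path: real lookup
        self.table[h] = (version, name, value)
        return value

class Spam:
    def method(self):
        return "spam"

cache = TypeCache()
name = "method"          # must be the same object on repeated lookups
cache.lookup(Spam, name, 7)   # miss, fills the cache
cache.lookup(Spam, name, 7)   # hit
```

Clearing the version on invalidation (tweak 3) works in the same model: a cleared tag can never equal a stored `entry[0]`, so no separate validity bit is needed.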
---------- components: Interpreter Core messages: 388377 nosy: dino.viehland priority: normal severity: normal status: open title: Microoptimize PyType_Lookup for cache hits versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 16:16:26 2021 From: report at bugs.python.org (Henry Schreiner) Date: Tue, 09 Mar 2021 21:16:26 +0000 Subject: [New-bugs-announce] [issue43453] docs: runtime_checkable example refers to changed behavior in 3.10 Message-ID: <1615324586.52.0.680787948676.issue43453@roundup.psfhosted.org> New submission from Henry Schreiner : The documentation here: https://docs.python.org/3/library/typing.html#typing.runtime_checkable refers to "For example, builtins.complex implements __float__(), therefore it passes an issubclass() check against SupportsFloat. However, the complex.__float__ method exists only to raise a TypeError with a more informative message.". However, that's not true in Python 3.10 anymore, those methods were thankfully removed. See https://docs.python.org/3.10/whatsnew/3.10.html#removed or https://bugs.python.org/issue41974 for the removal. This documentation should either say "before Python 3.10, ...", or pick some other example that still is valid. Happy to make the change if I know what direction this should go. 
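The removal is easy to verify at runtime. The check below is version-dependent by design, since it encodes exactly the 3.10 change the report describes:

```python
import sys
from typing import SupportsFloat

# float implements __float__, so it always satisfies the protocol.
assert isinstance(1.0, SupportsFloat)

# complex only passed the check because complex.__float__ existed (solely to
# raise a nicer TypeError); that method was removed in Python 3.10, so the
# runtime_checkable protocol check fails there.
complex_passes = isinstance(1j, SupportsFloat)
print(complex_passes)
```

On 3.9 and earlier this prints True; on 3.10 and later it prints False, which is why the docs example needs either a version qualifier or a different illustration.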
---------- assignee: docs at python components: Documentation messages: 388387 nosy: Henry Schreiner, docs at python priority: normal severity: normal status: open title: docs: runtime_checkable example refers to changed behavior in 3.10 type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 16:54:25 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Tue, 09 Mar 2021 21:54:25 +0000 Subject: [New-bugs-announce] [issue43454] [sqlite3] Add support for R*Tree callbacks Message-ID: <1615326865.81.0.691983788342.issue43454@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : Ref. bpo-43440 Now that both Windows and macOS builds compile SQLite with R*Tree support, we should consider adding support for R*Tree callbacks. SQLite has two APIs: - sqlite3_rtree_query_callback() for SQLite 3.8.5 and newer. - sqlite3_rtree_geometry_callback() for SQLite pre 3.8.5. I suggest using the new API only, because it is more flexible, and it is also the one recommended by SQLite. See https://sqlite.org/rtree.html Python API: sqlite3.Connection.create_rtree_query_function() Too long function name? As for the callback spec, I'm not sure what's the most pythonic approach?
callback(coords, *params, **query_info):
    coords        # array of coordinates of the node or entry to check
    *params       # parameters passed to the SQL function
    **query_info  # the rest of the relevant sqlite3_rtree_query_info members
    return (visibility, score)

---------- components: Library (Lib) messages: 388391 nosy: berker.peksag, erlendaasland priority: normal severity: normal status: open title: [sqlite3] Add support for R*Tree callbacks type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 18:05:56 2021 From: report at bugs.python.org (Eryk Sun) Date: Tue, 09 Mar 2021 23:05:56 +0000 Subject: [New-bugs-announce] [issue43455] pathlib mistakenly assumes os.getcwd() is a resolved path in Windows Message-ID: <1615331156.25.0.626111621937.issue43455@roundup.psfhosted.org> New submission from Eryk Sun : pathlib._WindowsFlavour.resolve() mistakenly assumes that os.getcwd() returns a resolved path in Windows:

    s = str(path)
    if not s:
        return os.getcwd()

I don't think this is a practical problem since `str(path)` should never be an empty string. But if there is a concern that the result is an empty string, the code should use `s = str(path) or '.'`, and resolve "." like any other relative path. In POSIX the result of getcwd() "shall contain no components that are dot or dot-dot, or are symbolic links". In Windows, os.getcwd() calls WinAPI GetCurrentDirectoryW(), which returns a fully-qualified path that may contain symbolic components that would be resolved in a final path. This includes filesystem symlinks and bind mounts (junctions), as well as mapped and substitute drives (i.e. drives that resolve to a filesystem directory instead of a volume device).
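The aside that `str(path)` should never be empty can be confirmed directly: pathlib normalizes an empty path to the current-directory path, so the `if not s` branch is effectively unreachable.

```python
from pathlib import PurePath

# An "empty" pure path stringifies as '.', never as the empty string.
p = PurePath()
print(str(p))

# So the defensive spelling suggested in the report is a no-op in practice,
# while still being safer than falling back to os.getcwd().
s = str(p) or '.'
```

Resolving '.' like any other relative path then routes the symlink/junction handling through the normal resolution machinery instead of trusting GetCurrentDirectoryW()'s answer.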
---------- components: Library (Lib), Windows messages: 388393 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: pathlib mistakenly assumes os.getcwd() is a resolved path in Windows type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 20:24:59 2021 From: report at bugs.python.org (Brett Cannon) Date: Wed, 10 Mar 2021 01:24:59 +0000 Subject: [New-bugs-announce] [issue43456] Remove _xxsubinterpreters from sys.stdlib_module_names Message-ID: <1615339499.12.0.300459298792.issue43456@roundup.psfhosted.org> New submission from Brett Cannon : I noticed that _xxsubinterpreters is in sys.stdlib_module_names but none of the other `_xx` modules are included (nor is 'test'). Since _xxsubinterpreters is only meant for testing (ATM) I think it should probably be left out. ---------- components: Library (Lib) messages: 388401 nosy: brett.cannon, vstinner priority: normal severity: normal status: open title: Remove _xxsubinterpreters from sys.stdlib_module_names type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 9 21:05:40 2021 From: report at bugs.python.org (nervecenter) Date: Wed, 10 Mar 2021 02:05:40 +0000 Subject: [New-bugs-announce] [issue43457] Include simple file loading and saving functions in JSON standard library. Message-ID: <1615341940.36.0.754743862961.issue43457@roundup.psfhosted.org> New submission from nervecenter : Python has a "batteries included" approach to standard library construction. To that end, commonly used procedures are often included as functions; adding sugar to the language is often exchanged for adding sugar to libraries. 
One of these common procedures in small-scale scripting tasks is loading a JSON file as simple data structures and saving simple data structures as a JSON file. This is normally handled using context managers, json.load(), and json.dump(). This is a bit cluttered and, I'd argue, not quite as Pythonic as the philosophy demands. I have a small file containing this code:

import json

def load_file(filename, *args, **kwargs):
    with open(filename, "r") as fp:
        data = json.load(fp, *args, **kwargs)
    return data

def save_file(data, filename, *args, **kwargs):
    with open(filename, "w") as fp:
        json.dump(data, fp, *args, **kwargs)

I'd say, toss these two functions into the json module. Those two functions contain the clutter. For all other users, loading and saving JSON files become one-line function calls. This is convenient and batteries-included. ---------- components: Library (Lib) messages: 388403 nosy: nervecenter priority: normal severity: normal status: open title: Include simple file loading and saving functions in JSON standard library. type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 00:08:31 2021 From: report at bugs.python.org (Marek M) Date: Wed, 10 Mar 2021 05:08:31 +0000 Subject: [New-bugs-announce] [issue43458] Tutorial should mention variable scope in try/except/finally Message-ID: <1615352911.49.0.844400259332.issue43458@roundup.psfhosted.org> New submission from Marek M : It could be helpful to mention that variables defined in the try block are visible in the except/finally blocks as well. I did not find this info in the Python tutorial, and for me (having a C++ background) this is quite an unexpected feature.
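A short example of the behavior the report wants documented: unlike a C++ block, a `try` suite does not introduce a new scope, so names bound inside it remain visible in `except`, `finally`, and after the whole statement.

```python
def parse(text):
    try:
        value = int(text)        # 'value' is created inside the try block
    except ValueError:
        value = None
    finally:
        # The try suite is not a separate scope: 'value' is visible here
        # and after the statement ends.
        last_seen = value
    return value, last_seen

print(parse("42"))
print(parse("x"))
```

The only caveat worth a tutorial note is that the name is unbound if the exception fires before the assignment ran, which is why the `except` arm above also assigns it.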
---------- assignee: docs at python components: Documentation messages: 388407 nosy: deekox, docs at python priority: normal severity: normal status: open title: Tutorial should mention variable scope in try/except/finally type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 02:42:12 2021 From: report at bugs.python.org (=?utf-8?b?TWljaGHFgiBHw7Nybnk=?=) Date: Wed, 10 Mar 2021 07:42:12 +0000 Subject: [New-bugs-announce] [issue43459] Race conditions when the same source file is used to build multiple extensions Message-ID: <1615362132.43.0.297389622008.issue43459@roundup.psfhosted.org> New submission from Michał Górny : There is a race condition in distutils' build_ext implementation. When the same source file is used to build multiple extensions, distutils attempts to build it multiple times using the same output file, in parallel. This means that the link editor can grab the file while another compiler instance is overwriting it. The results vary from compile errors to cryptic dyld failures when attempting to load the module. I've created a trivial reproducer that I've attached in patch form. For convenience, it's also available on my GitHub: https://github.com/mgorny/distutils-build_ext-race The reproducer consists of two extension modules sharing the same file. The race.sh script attempts to build the extension and then import it.
The process is repeated until something fails, e.g.: + python3.10 setup.py build_ext -i -j4 running build_ext building 'bar' extension creating build building 'foo' extension creating build/temp.linux-x86_64-3.10 creating build/temp.linux-x86_64-3.10 x86_64-pc-linux-gnu-gcc-10.2.0 -pthread -fPIC -I/usr/include/python3.10 -c bar.c -o build/temp.linux-x86_64-3.10/bar.o x86_64-pc-linux-gnu-gcc-10.2.0 -pthread -fPIC -I/usr/include/python3.10 -c foo.c -o build/temp.linux-x86_64-3.10/foo.o x86_64-pc-linux-gnu-gcc-10.2.0 -pthread -fPIC -I/usr/include/python3.10 -c shared.c -o build/temp.linux-x86_64-3.10/shared.o x86_64-pc-linux-gnu-gcc-10.2.0 -pthread -fPIC -I/usr/include/python3.10 -c shared.c -o build/temp.linux-x86_64-3.10/shared.o x86_64-pc-linux-gnu-gcc-10.2.0 -pthread -shared -Wl,-O1 -Wl,--as-needed -Wl,--hash-style=gnu build/temp.linux-x86_64-3.10/foo.o build/temp.linux-x86_64-3.10/shared.o -L/usr/lib64 -o /home/mgorny/git/distutils-build_ext-race/foo.cpython-310-x86_64-linux-gnu.so x86_64-pc-linux-gnu-gcc-10.2.0 -pthread -shared -Wl,-O1 -Wl,--as-needed -Wl,--hash-style=gnu build/temp.linux-x86_64-3.10/bar.o build/temp.linux-x86_64-3.10/shared.o -L/usr/lib64 -o /home/mgorny/git/distutils-build_ext-race/bar.cpython-310-x86_64-linux-gnu.so + python3.10 -c 'import foo; import bar' Traceback (most recent call last): File "", line 1, in ImportError: /home/mgorny/git/distutils-build_ext-race/foo.cpython-310-x86_64-linux-gnu.so: undefined symbol: call_shared + echo 'Reproduced at iteration 256' Reproduced at iteration 256 + break ---------- components: Distutils files: 0001-A-reproducer-for-distutils-build_ext-race-condition.patch keywords: patch messages: 388410 nosy: dstufft, eric.araujo, mgorny priority: normal severity: normal status: open title: Race conditions when the same source file used to build mutliple extensions type: compile error versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: 
https://bugs.python.org/file49863/0001-A-reproducer-for-distutils-build_ext-race-condition.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 06:30:36 2021 From: report at bugs.python.org (Douglas Raillard) Date: Wed, 10 Mar 2021 11:30:36 +0000 Subject: [New-bugs-announce] [issue43460] Exception copy error Message-ID: <1615375836.74.0.101407862938.issue43460@roundup.psfhosted.org> New submission from Douglas Raillard : Instances of subclasses of BaseException created with keyword arguments fail to copy properly, as demonstrated by:

import copy

class E(BaseException):
    def __init__(self, x):
        self.x = x

# works fine
e = E(None)
copy.copy(e)

# raises
e = E(x=None)
copy.copy(e)

This seems to affect all Python versions I've tested (3.6 <= Python <= 3.9). I've currently partially worked around the issue with a custom pickler that just restores __dict__, but:

* "args" is not part of __dict__, and setting an "args" key in __dict__ does not create a "working object" (i.e. the key is set, but is ignored for all intents and purposes except direct lookup in __dict__)

* pickle is friendly: you can provide a custom pickler that chooses the reduce function for each single class. The copy module is much less friendly: copyreg.pickle() only allows registering custom functions for specific classes. That means there is no way (that I know of) to make copy.copy() select a custom reduce for a whole subclass tree.

On the root of the issue:

* exceptions from the standard library prevent keyword arguments (maybe because of this issue?), but there is no such restriction on user-defined classes.
* the culprit is BaseException_reduce() (in Objects/exceptions.c) [1]

It seems that the current behavior is a consequence of the __dict__ being created lazily, I assume for speed and memory efficiency. There seem to be a few approaches that would solve the issue:

* keyword arguments passed to the constructor could be fused with the positional arguments in BaseException_new (using the signature, but the signature might not be available for extension types, I suppose)

* keyword arguments could just be stored like "args" in a "kwargs" attribute in PyException_HEAD, so they are preserved and passed again to __new__ when the instance is restored upon copying/pickling.

* the fact that keyword arguments were used could be saved as a bool in PyException_HEAD. When set, this flag would make BaseException_reduce() only use __dict__ and not "args". This would technically probably be a breaking change, but the only cases I can think of where this would be observable are a bit far-fetched (if __new__ or __init__ have side effects beyond storing attributes in __dict__).

[1] https://github.com/python/cpython/blob/master/Objects/exceptions.c#L134

---------- messages: 388427 nosy: douglas-raillard-arm priority: normal severity: normal status: open title: Exception copy error type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 07:23:57 2021 From: report at bugs.python.org (Jonathan Frawley) Date: Wed, 10 Mar 2021 12:23:57 +0000 Subject: [New-bugs-announce] [issue43461] Tottime column for cprofile output does not add up Message-ID: <1615379037.48.0.966046679388.issue43461@roundup.psfhosted.org> New submission from Jonathan Frawley : I am using cProfile and pstats to try to figure out where the bottlenecks are in a program. When I sum up all of the times in the "tottime" column, it only comes to 57% of the total runtime. Is this due to rounding of times or some other issue?
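One thing that can be ruled out is rounding inside the column itself: pstats computes its reported total as the plain sum of every entry's tottime, so within the profiler's own bookkeeping the column adds up exactly. Any shortfall against wall-clock runtime therefore comes from time the profiler never attributed to a profiled call (time outside the profiled region, or profiler overhead), not from the numbers in the table. A quick self-check:

```python
import cProfile
import pstats

def work():
    return sum(i * i for i in range(10_000))

prof = cProfile.Profile()
prof.enable()
work()
prof.disable()

st = pstats.Stats(prof)
# Each entry in st.stats is (call_count, ncalls, tottime, cumtime, callers),
# and st.total_tt is pstats' own sum of the tottime column.
tottime_sum = sum(entry[2] for entry in st.stats.values())
print(tottime_sum, st.total_tt)
```

Comparing `tottime_sum` against the wall-clock time of the whole script (rather than against `st.total_tt`) is where the reported 57% gap would appear.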
---------- messages: 388430 nosy: jonathanfrawley priority: normal severity: normal status: open title: Tottime column for cprofile output does not add up type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 08:51:23 2021 From: report at bugs.python.org (Vincent) Date: Wed, 10 Mar 2021 13:51:23 +0000 Subject: [New-bugs-announce] [issue43462] canvas.bbox returns None on 'hidden' items while coords doesn't Message-ID: <1615384283.9.0.940431260638.issue43462@roundup.psfhosted.org> New submission from Vincent : canvas.bbox() should return a tuple containing values whether an item is hidden or not. canvas.coords() does return a tuple when an item is hidden. Steps to reproduce:

```
from tkinter import *

root = Tk()
canvas = Canvas(root)
id1 = canvas.create_line(10, 5, 20, 5, tags='tunnel')
id2 = canvas.create_line(10, 8, 20, 8, tags='tunnel')
canvas.bbox('tunnel')  # returns a tuple
canvas.itemconfig('tunnel', state='hidden')
canvas.bbox('tunnel')  # returns nothing, not even None
```

I need bbox to return a tuple containing values. The consequence is that the code must make the items temporarily visible before it can invoke the bbox function. This turning on and off creates flashing items in my program. Thanks in advance!
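A workaround in the meantime is a small helper (hypothetical, not part of tkinter) that reveals hidden items just long enough to measure them and hides them again. Because no redraw is processed in between, the items should not actually flash on screen, though on some platforms a `canvas.update_idletasks()` may be needed before measuring:

```python
def bbox_any_state(canvas, tag_or_id):
    """Like canvas.bbox(), but also measures items whose state is 'hidden'."""
    hidden = [item for item in canvas.find_withtag(tag_or_id)
              if canvas.itemcget(item, 'state') == 'hidden']
    for item in hidden:
        canvas.itemconfigure(item, state='normal')
    try:
        return canvas.bbox(tag_or_id)
    finally:
        # Restore before the next redraw, so the items are never painted.
        for item in hidden:
            canvas.itemconfigure(item, state='hidden')
```

Items whose state was already 'normal' (or 'disabled') are left untouched, so the helper only toggles what it has to.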
---------- components: Tkinter messages: 388432 nosy: Vincent priority: normal severity: normal status: open title: canvas.bbox returns None on 'hidden' items while coords doesn't versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 09:38:41 2021 From: report at bugs.python.org (Florian Bruhin) Date: Wed, 10 Mar 2021 14:38:41 +0000 Subject: [New-bugs-announce] [issue43463] typing.get_type_hints with TYPE_CHECKING imports / getting hints for single argument Message-ID: <1615387121.09.0.336024174905.issue43463@roundup.psfhosted.org> New submission from Florian Bruhin : Consider a file such as:

# from __future__ import annotations
from typing import TYPE_CHECKING, Union, get_type_hints

if TYPE_CHECKING:
    import types

def fun(a: 'types.SimpleNamespace', b: Union[int, str]):
    pass

print(fun.__annotations__)
print(get_type_hints(fun))

When running this, typing.get_type_hints fails (as you would expect):

Traceback (most recent call last):
  File "/home/florian/tmp/x.py", line 11, in <module>
    print(get_type_hints(fun))
  File "/usr/lib/python3.9/typing.py", line 1449, in get_type_hints
    value = _eval_type(value, globalns, localns)
  File "/usr/lib/python3.9/typing.py", line 283, in _eval_type
    return t._evaluate(globalns, localns, recursive_guard)
  File "/usr/lib/python3.9/typing.py", line 539, in _evaluate
    eval(self.__forward_code__, globalns, localns),
  File "<string>", line 1, in <module>
NameError: name 'types' is not defined

However, in my case I'm not actually interested in the type of 'a', I only need the type for 'b'. Before Python 3.10 (or the __future__ import), I can do so by getting it from __annotations__ directly.
With Python 3.10 (or the __future__ import), this doesn't seem to be possible anymore - I'd need to either evaluate the 'Union[int, str]' annotation manually (perhaps calling into private typing.py functions), or maybe work around the issue by passing some magical dict-like object as local/globals which ignores the NameError. Both of those seem suboptimal. Thus, I'd like a way to either: 1) Ignore exceptions in get_type_hints and instead get something like a typing.Unresolvable['types.SimpleNamespace'] back 2) Have something like a typing.get_argument_type_hints(fun, 'b') instead, allowing me to get the arguments one by one rather than resolving the whole thing 3) Have a public API to resolve a string type annotation (i.e. the equivalent of `typing._eval_type`) ---------- components: Library (Lib) messages: 388436 nosy: The Compiler, gvanrossum, levkivskyi priority: normal severity: normal status: open title: typing.get_type_hints with TYPE_CHECKING imports / getting hints for single argument type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 11:39:19 2021 From: report at bugs.python.org (Josh Rosenberg) Date: Wed, 10 Mar 2021 16:39:19 +0000 Subject: [New-bugs-announce] [issue43464] set intersections should short-circuit Message-ID: <1615394359.7.0.616950024555.issue43464@roundup.psfhosted.org> New submission from Josh Rosenberg : At present, set_intersection (the C name for set.intersection) optimizes for pairs of sets by iterating the smallest set and only adding entries found in the larger, meaning work is proportionate to the smallest input. But when the other input isn't a set, it goes with a naive solution, iterating the entire non-set, and adding entries found in the set. 
This is fine when the intersection will end up smaller than the original set (there's no way to avoid exhausting the non-set when that's the case), but when the intersection ends up being the same size as the original, we could add a cheap length check and short-circuit at that point. As is, {4}.intersection(range(10000)) takes close to 1000 times longer than {4}.intersection(range(10)) despite both of them reaching the point where the outcome will be {4} at the same time. Since the length check for short-circuiting only needs to be performed when input set actually contains the value, the cost should be fairly low. I figure this would be the worst case for the change: {3, 4}.intersection((4,) * 10000) where it performs the length check every time, and doesn't benefit from short-circuiting. But cases like: {4}.intersection((4,) * 10000) or {4}.intersection(range(10000)) would finish much faster. A similar optimization to set_intersection_multi (to stop when the intermediate result is empty) would make cases like: {4000}.intersection([1], range(10000), range(100000, 200000)) also finish dramatically quicker in the (I'd assume not uncommon case) where the intersection of many iterables is empty, and this could be known quite early on (the cost of this check would be even lower, since it would only be performed once per iterable, not per-value). Only behavioral change this would cause is that errors resulting from processing items in an iterable that is no longer run to exhaustion due to short-circuiting wouldn't happen ({4}.intersection([4, []]) currently dies, but would succeed with short-circuiting; same foes for {4}.intersection([5], [[]]) if set_intersection_multi is optimized), and input iterators might be left only partially consumed. If that's acceptable, the required code changes are trivial. 
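A pure-Python sketch of the proposed short-circuit (an illustration of the idea, not the C implementation in Objects/setobject.c):

```python
def short_circuit_intersection(s, iterable):
    """Stop consuming `iterable` once every element of `s` has been seen."""
    result = set()
    for item in iterable:
        if item in s:
            result.add(item)
            if len(result) == len(s):   # nothing further can be added
                break
    return result
```

The trade-off the report mentions is visible here: once the break fires, the input is left only partially consumed, and elements after that point are never touched, so errors they would have raised never happen.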
---------- components: C API keywords: easy (C) messages: 388442 nosy: josh.r priority: normal severity: normal status: open title: set intersections should short-circuit versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 14:24:39 2021 From: report at bugs.python.org (Enji Cooper) Date: Wed, 10 Mar 2021 19:24:39 +0000 Subject: [New-bugs-announce] [issue43465] ./configure --help describes what --with-ensurepip does poorly Message-ID: <1615404279.31.0.681520070427.issue43465@roundup.psfhosted.org> New submission from Enji Cooper : Many users are used to --without-* flags in autoconf disabling features (and optionally not installing them). --without-ensurepip (at face value to me) suggests it shouldn't be built/installed. This comment in https://bugs.python.org/issue20417 by dstufft implies otherwise. From https://bugs.python.org/msg209537 : > I don't see any reason not to install ensurepip in this situation. That flag controls whether or not ``python -m ensurepip`` will be executed during the install, but ensurepip itself will still be installed. It is not an optional module This isn't what "./configure --help" implies though: ``` $ git log --oneline -n 1 87f649a409 (HEAD -> master, upstream/master, origin/master, origin/logging-config-dictconfig-support-more-sysloghandler-options, origin/HEAD, logging-config-dictconfig-support-more-sysloghandler-options) bpo-43311: Create GIL autoTSSkey ealier (GH-24819) $ ./configure --help ... --with-ensurepip[=install|upgrade|no] "install" or "upgrade" using bundled pip (default is upgrade) $ ``` The wording should be clarified to note what the flag actually does instead of causing [valid] confusion to end-users which might make them think that the ensurepip module shouldn't be installed if --without-ensurepip is specified. 
---------- components: Build messages: 388456 nosy: ngie priority: normal severity: normal status: open title: ./configure --help describes what --with-ensurepip does poorly versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 18:14:03 2021 From: report at bugs.python.org (Christian Heimes) Date: Wed, 10 Mar 2021 23:14:03 +0000 Subject: [New-bugs-announce] [issue43466] ssl/hashlib: Add configure option to set or auto-detect rpath to OpenSSL libs Message-ID: <1615418043.46.0.229114131093.issue43466@roundup.psfhosted.org> New submission from Christian Heimes : Python's configure script has the option --with-openssl. It sets a path to a custom OpenSSL installation. Internally it provides OPENSSL_INCLUDES, OPENSSL_LIBS, and OPENSSL_LDFLAGS. The setup.py script turns the variables into include_dirs, library_dirs, and libraries arguments for _ssl and _hashlib extension modules. However neither --with-openssl nor setup.py sets a custom runtime library path (rpath). This makes it confusing and hard for users to use a custom OpenSSL installation. They need to know that a) they have to take care of rpath on the first place, and b) how to set an rpath at compile or runtime. Without an rpath, the dynamic linker either fails to locate libssl/libcrypto or load system-provided shared libraries. Ticket bpo-34028 contains examples of user issues. I propose to include a new option to make it easier for users to use a custom build of OpenSSL: --with-openssl-rpath= no (default): don't set an rpath auto: auto-detect rpath from OPENSSL_LDFLAGS (--with-openssl or pkg-config) DIR: set a custom rpath The option will only affect the rpath of _ssl and _hashlib modules. The default value "no" is fully backwards compatible with 3.9 and earlier. 
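Assuming the option lands as proposed, usage would look something like the following (the /opt path is illustrative):

```shell
# Build CPython against a private OpenSSL prefix, letting configure derive
# the matching runtime library path automatically.
./configure \
    --with-openssl=/opt/custom-openssl \
    --with-openssl-rpath=auto
make
# _ssl and _hashlib should now resolve libssl/libcrypto from
# /opt/custom-openssl/lib at runtime, without LD_LIBRARY_PATH tricks.
```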
---------- assignee: christian.heimes components: Installation, SSL messages: 388463 nosy: barry, christian.heimes, gregory.p.smith, pablogsal priority: normal severity: normal stage: patch review status: open title: ssl/hashlib: Add configure option to set or auto-detect rpath to OpenSSL libs type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 10 18:57:39 2021 From: report at bugs.python.org (David E. Franco G.) Date: Wed, 10 Mar 2021 23:57:39 +0000 Subject: [New-bugs-announce] [issue43467] IDLE: horizontal scrollbar Message-ID: <1615420659.08.0.218297544111.issue43467@roundup.psfhosted.org> New submission from David E. Franco G. : I noticed that the horizontal scroll bar is missing, I think it was present in previous version, regardless it would be nice if its be present. Thanks. ---------- assignee: terry.reedy components: IDLE files: no scroll bar.PNG messages: 388469 nosy: David E. Franco G., terry.reedy priority: normal severity: normal status: open title: IDLE: horizontal scrollbar type: enhancement versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49865/no scroll bar.PNG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 01:37:47 2021 From: report at bugs.python.org (Antti Haapala) Date: Thu, 11 Mar 2021 06:37:47 +0000 Subject: [New-bugs-announce] [issue43468] functools.cached_property locking is plain wrong. Message-ID: <1615444667.91.0.948817723419.issue43468@roundup.psfhosted.org> New submission from Antti Haapala : The locking on functools.cached_property (https://github.com/python/cpython/blob/87f649a409da9d99682e78a55a83fc43225a8729/Lib/functools.py#L934) as it was written is completely undesirable for I/O bound values, parallel processing. 
Instead of protecting the calculation of cached property to the same instance in two threads, it completely blocks parallel calculations of cached values to *distinct instances* of the same class. Here's the code of __get__ in cached_property: def __get__(self, instance, owner=None): if instance is None: return self if self.attrname is None: raise TypeError( "Cannot use cached_property instance without calling __set_name__ on it.") try: cache = instance.__dict__ except AttributeError: # not all objects have __dict__ (e.g. class defines slots) msg = ( f"No '__dict__' attribute on {type(instance).__name__!r} " f"instance to cache {self.attrname!r} property." ) raise TypeError(msg) from None val = cache.get(self.attrname, _NOT_FOUND) if val is _NOT_FOUND: with self.lock: # check if another thread filled cache while we awaited lock val = cache.get(self.attrname, _NOT_FOUND) if val is _NOT_FOUND: val = self.func(instance) try: cache[self.attrname] = val except TypeError: msg = ( f"The '__dict__' attribute on {type(instance).__name__!r} instance " f"does not support item assignment for caching {self.attrname!r} property." ) raise TypeError(msg) from None return val I noticed this because I was recommending that Pyramid web framework deprecate its much simpler [`reify`](https://docs.pylonsproject.org/projects/pyramid/en/latest/_modules/pyramid/decorator.html#reify) decorator in favour of using `cached_property`, and then noticed why it won't do. 
Here is the test case for cached_property: from functools import cached_property from threading import Thread from random import randint import time class Spam: @cached_property def ham(self): print(f'Calculating amount of ham in {self}') time.sleep(10) return randint(0, 100) def bacon(): spam = Spam() print(f'The amount of ham in {spam} is {spam.ham}') start = time.time() threads = [] for _ in range(3): t = Thread(target=bacon) threads.append(t) t.start() for t in threads: t.join() print(f'Total running time was {time.time() - start}') Calculating amount of ham in <__main__.Spam object at 0x7fa50bcaa220> The amount of ham in <__main__.Spam object at 0x7fa50bcaa220> is 97 Calculating amount of ham in <__main__.Spam object at 0x7fa50bcaa4f0> The amount of ham in <__main__.Spam object at 0x7fa50bcaa4f0> is 8 Calculating amount of ham in <__main__.Spam object at 0x7fa50bcaa7c0> The amount of ham in <__main__.Spam object at 0x7fa50bcaa7c0> is 53 Total running time was 30.02147102355957 The runtime is 30 seconds; for `pyramid.decorator.reify` the runtime would be 10 seconds: Calculating amount of ham in <__main__.Spam object at 0x7fc4d8272430> Calculating amount of ham in <__main__.Spam object at 0x7fc4d82726d0> Calculating amount of ham in <__main__.Spam object at 0x7fc4d8272970> The amount of ham in <__main__.Spam object at 0x7fc4d82726d0> is 94 The amount of ham in <__main__.Spam object at 0x7fc4d8272970> is 29 The amount of ham in <__main__.Spam object at 0x7fc4d8272430> is 93 Total running time was 10.010624170303345 `reify` in Pyramid is used heavily to add properties to incoming HTTP request objects - using `functools.cached_property` instead would mean that each independent request thread blocks others because most of them would always get the value for the same lazy property using the the same descriptor instance and locking the same lock. 
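One possible direction (a hypothetical sketch, not the stdlib implementation or a proposed patch) is to key the lock on the *instance* rather than on the descriptor, so unrelated instances never serialize against each other:

```python
import threading

class per_instance_cached_property:
    """Like functools.cached_property, but the lock lives in each
    instance's __dict__, so distinct instances compute in parallel."""

    def __init__(self, func):
        self.func = func
        self.attrname = None

    def __set_name__(self, owner, name):
        self.attrname = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        cache = instance.__dict__
        try:
            return cache[self.attrname]          # fast path: already cached
        except KeyError:
            pass
        # setdefault is a single dict operation, so every racing thread
        # ends up with the same per-instance lock object.
        lock = cache.setdefault('_' + self.attrname + '_lock',
                                threading.Lock())
        with lock:
            if self.attrname not in cache:       # re-check under the lock
                cache[self.attrname] = self.func(instance)
        return cache[self.attrname]
```

With this shape, the three `Spam` instances in the test case above would each hold their own lock, giving the ~10 second total runtime of `reify` instead of 30.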
---------- components: Library (Lib) messages: 388480 nosy: ztane priority: normal severity: normal status: open title: functools.cached_property locking is plain wrong. type: resource usage versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 02:17:30 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Thu, 11 Mar 2021 07:17:30 +0000 Subject: [New-bugs-announce] [issue43469] Python 3.6 fails to run on MacOS (Big Sur 11.2.3) Message-ID: <1615447050.11.0.0564734952656.issue43469@roundup.psfhosted.org> New submission from Xinmeng Xia : Python 3.6 can work well on old version of MacOS. When I upgrade MacOS to the latest version Big Sur 11.2.3. Python 3.6 fails to start and crashes. Python 3.7, 3.8, 3.9 can perform well on the new version MacOS Big Sur 11.2.3. The crash information attached as follows: Crash information ============================================================== >>python3.6 dyld: Library not loaded: /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/Resources/Python.app/Contents/MacOS/Python Reason: image not found Abort trap: 6 ============================================================== ---------- components: macOS messages: 388481 nosy: ned.deily, ronaldoussoren, xxm priority: normal severity: normal status: open title: Python 3.6 fails to run on MacOS (Big Sur 11.2.3) type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 02:18:38 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Thu, 11 Mar 2021 07:18:38 +0000 Subject: [New-bugs-announce] [issue43470] Installation of Python 3.6.13 fails on MacOS Big Sur 11.2.3 Message-ID: <1615447118.51.0.748339501359.issue43470@roundup.psfhosted.org> New 
submission from Xinmeng Xia : Installation of latest Python 3.6.13 fails on MacOS Big Sur 11.2.3. The source code is downloaded from python.org. Then we try to install it by commands "./configure;sudo make;sudo make install". However the installation crashes. The installation succeeds on Ubuntu. Crash information: ========================================================== >>./configure >>sudo make gcc -c -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Programs/python.o ./Programs/python.c ..... t -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -c ./Modules/posixmodule.c -o Modules/posixmodule.o ./Modules/posixmodule.c:8210:15: error: implicit declaration of function 'sendfile' is invalid in C99 [-Werror,-Wimplicit-function-declaration] ret = sendfile(in, out, offset, &sbytes, &sf, flags); ^ ./Modules/posixmodule.c:10432:5: warning: code will never be executed [-Wunreachable-code] Py_FatalError("abort() called from Python code didn't abort!"); ^~~~~~~~~~~~~ 1 warning and 1 error generated. 
make: *** [Modules/posixmodule.o] Error 1 ============================================================ ---------- components: Installation messages: 388482 nosy: xxm priority: normal severity: normal status: open title: Installation of Python 3.6.13 fails on MacOS Big Sur 11.2.3 type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 02:20:37 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Thu, 11 Mar 2021 07:20:37 +0000 Subject: [New-bugs-announce] [issue43471] Fails to import bz2 on Ubuntu Message-ID: <1615447237.87.0.642854040325.issue43471@roundup.psfhosted.org> New submission from Xinmeng Xia : Module bz2 fails to be imported on Ubuntu due to lack of '_bz2'. We try "import bz2" on Mac, it can work well. Errors on Ubuntu ========================================== >>import bz2 Traceback (most recent call last): File "/home/xxm/Desktop/apifuzz/doc/genDoc.py", line 97, in exec(compile(mstr,'','exec')) File "", line 1, in File "/home/xxm/Desktop/apifuzz/Python-3.9.2/Lib/bz2.py", line 18, in from _bz2 import BZ2Compressor, BZ2Decompressor ModuleNotFoundError: No module named '_bz2' =========================================== Python version: 3.9.2 Python installation: (1). download source code from python.org, (2). run command "./configure; sudo make; sudo make install. We install the same Python 3.9.2 in a same way on Mac and Ubuntu. 
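This symptom usually means the bzip2 development headers were absent when CPython was configured, so the build silently skipped the `_bz2` extension. A likely fix on Ubuntu (package name assumed for Debian-family systems) is to install them and rebuild:

```shell
sudo apt-get install libbz2-dev   # provides bzlib.h
cd Python-3.9.2
./configure
make    # check that _bz2 no longer appears in the "missing modules" summary
sudo make install
```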
---------- components: Library (Lib) messages: 388483 nosy: xxm priority: normal severity: normal status: open title: Fails to import bz2 on Ubuntu type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 04:31:02 2021 From: report at bugs.python.org (Christian Heimes) Date: Thu, 11 Mar 2021 09:31:02 +0000 Subject: [New-bugs-announce] [issue43472] [security][subinterpreters] Add auditing hooks to subinterpreter module Message-ID: <1615455062.58.0.468186541544.issue43472@roundup.psfhosted.org> New submission from Christian Heimes : The subinterpreters module does not emit any audit events yet. It's possible to create a subinterpreter and run arbitrary code through run_string(). We should also improve documentation of sys.addaudithook() and explain what 'current interpreter' actually means. I guess most users don't realize the consequences for subinterpreters. $ ./python auditsub.py ('os.system', (b'echo main interpreter',)) main interpreter you got pwned [heimes at seneca cpython]$ cat au auditsub.py autom4te.cache/ [heimes at seneca cpython]$ cat auditsub.py import sys import _xxsubinterpreters def hook(*args): print(args) sys.addaudithook(hook) import os os.system('echo main interpreter') sub = _xxsubinterpreters.create() _xxsubinterpreters.run_string(sub, "import os; os.system('echo you got pwned')", None) $ ./python auditsub.py ('os.system', (b'echo main interpreter',)) main interpreter you got pwned ---------- components: Interpreter Core, Subinterpreters messages: 388489 nosy: christian.heimes, eric.snow, steve.dower priority: normal severity: normal status: open title: [security][subinterpreters] Add auditing hooks to subinterpreter module type: security versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 04:37:51 2021 From: report at 
bugs.python.org (Hubert Bonnisseur-De-La-Bathe) Date: Thu, 11 Mar 2021 09:37:51 +0000 Subject: [New-bugs-announce] [issue43473] Junks in difflib Message-ID: <1615455471.04.0.613792585483.issue43473@roundup.psfhosted.org> New submission from Hubert Bonnisseur-De-La-Bathe : Reading first at the documentation of difflib, I thought that the use of junks would have produced the result s = SequenceMatcher(lambda x : x == " ", "abcd efgh", "abcdefgh") s.get_matching_blocks() >>> [Match(a=0, b=0, size=8)] At a second lecture, it is clear that such evaluation will return in fact two matches of length 4. Would it be nicer to have get_matching_block return the length 8 match ? Don't know if it's in the spirit of the lib, I'm just asking. ---------- assignee: docs at python components: Documentation messages: 388491 nosy: docs at python, hubertbdlb priority: normal severity: normal status: open title: Junks in difflib type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 06:09:59 2021 From: report at bugs.python.org (grumblor) Date: Thu, 11 Mar 2021 11:09:59 +0000 Subject: [New-bugs-announce] [issue43474] http.server.BaseHTTPRequestHandler end_header() fails Message-ID: <1615460999.3.0.75730553286.issue43474@roundup.psfhosted.org> New submission from grumblor : Python Version 3.8 http.server version 0.6 This is current install in new xubuntu 20.04 LTS, no idea if this is fixed in other version but appears to be present on github https://github.com/python/cpython/blob/3.9/Lib/http/server.py at line 525 http.server.BaseHTTPRequestHandler end_headers() can reference _header_buffer array before it is assigned. Should this be updated to something like the following? 
This fixes the problem of end_headers() failing for me: def end_headers(self): if not hasattr(self, '_headers_buffer'): self._headers_buffer = [] """Send the blank line ending the MIME headers.""" if self.request_version != 'HTTP/0.9': self._headers_buffer.append(b"\r\n") self.flush_headers() This is my first issue, apologies for any mistakes I might have made. ---------- components: Library (Lib) messages: 388498 nosy: grumblor priority: normal severity: normal status: open title: http.server.BaseHTTPRequestHandler end_header() fails versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 09:55:33 2021 From: report at bugs.python.org (Cong Ma) Date: Thu, 11 Mar 2021 14:55:33 +0000 Subject: [New-bugs-announce] [issue43475] Worst-case behaviour of hash collision with float NaN Message-ID: <1615474533.28.0.166606930148.issue43475@roundup.psfhosted.org> New submission from Cong Ma : Summary: CPython hash all NaN values to 0. This guarantees worst-case behaviour for dict if numerous existing keys are NaN. I think by hashing NaN using the generic object (or "pointer") hash instead, the worst-case situation can be alleviated without changing the semantics of either dict or float. However, this also requires changes to how complex and Decimal objects hash, and moreover incompatible change to sys.hash_info. I would like to hear how Python developers think about this matter. -------- Currently CPython uses the hard-coded macro constant 0 (_PyHASH_NAN, defined in Include/pyhash.h) for the hash value of all floating point NaNs. The value is part of the sys.hashinfo API and is re-used by complex and Decimal in computing its hash in accordance with Python builtin-type documentation. [0] (The doc [0] specifically says that "[a]ll hashable nans have the same hash value.") This is normally not a great concern, except for the worst case performance. 
The problem is that, since they hash to the same value and they're guaranteed to compare unequal to any compatible numeric value -- not even to themselves, this means they're guaranteed to collide. For this reason I'd like to question whether it is a good idea to have all hashable NaNs with the same hash value. There has been some discussions about this over the Web for some time (see [1]). In [1] the demo Python script times the insertion of k distinct NaN keys (as different objects) into a freshly created dict. Since the keys are distinct and are guaranteed to collide with each other (if any), the run time of a single lookup/insertion is roughly linear to the existing number of NaN keys. I've recreated the same script using with a more modern Python (attached). I'd suggest a fix for this worst-case behaviour: instead of returning the hash value 0 for all NaNs, use the generic object (pointer) hash for these objects. As a PoC (also in the attached script), it roughly means ``` class myfloat(float): def __hash__(self): if self != self: # i.e., math.isnan(self) return object.__hash__(self) return super().__hash__(self) ``` This will - keep the current behaviour of dict intact; - keep the invariant `a == b implies hash(a) == hash(b)` intact, where applicable; - uphold all the other rules for Python numeric objects listed in [0]; - make hash collisions no more likely than with object() instances (dict lookup time is amortized constant w.r.t. existing number of NaN keys). However, it will - not keep the current rule "All hashable nans have the same hash value"; - not be compatible with the current sys.hash_info API (requires the removal of the "nan" attribute from there and documenting the change); - require congruent modifications in complex and Decimal too. Additionally, I don't think this will affect module-level NaN "constants" such as math.nan and how they behave. The "NaN constant" has never been a problem to begin with. 
It's only the *distinct* NaN objects that may cause the worst-case behaviour. -------- Just for the record I'd also like to clear some outdated info or misconception about NaN keys in Python dicts. It's not true that NaN keys, once inserted, cannot be retrieved (e.g., as claimed in [1][2]). In Python, they can be, if you keep the original key *object* around by keeping a reference to it (or obtaining a new one from the dict by iterating over it). This, I think, is because Python dict compares for object identity before rich-comparing for equality in `lookdict()` in Objects/dictobject.c, so this works for `d = dict()`: ``` f = float("nan") d[f] = "value" v = d[f] ``` but this fails with `KeyError`, as it should: ``` d[float("nan")] = "value" v = d[float("nan")] ``` In this regard the NaN float object behaves exactly like the object() instance as keys -- except for the collisions. That's why I think at least for floats the "object" hash is likely to work. The solution using PRNG [1] (implemented with the Go language) is not necessary for CPython because the live objects are already distinct. 
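The retrieval behaviour described above is easy to check directly: a NaN key can be looked up again only through the very same object, because the dict compares identity before equality.

```python
d = {}
f = float("nan")
d[f] = "value"
assert d[f] == "value"       # same object: found by the identity check

try:
    d[float("nan")]          # distinct NaN: unequal to every key
    found = True
except KeyError:
    found = False
assert not found
```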
-------- Links: [0] https://docs.python.org/3/library/stdtypes.html#hashing-of-numeric-types [1] https://research.swtch.com/randhash [2] https://readafterwrite.wordpress.com/2017/03/23/how-to-hash-floating-point-numbers/ ---------- components: Interpreter Core, Library (Lib) files: nan_key.py messages: 388508 nosy: congma priority: normal severity: normal status: open title: Worst-case behaviour of hash collision with float NaN type: performance Added file: https://bugs.python.org/file49869/nan_key.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 15:38:57 2021 From: report at bugs.python.org (Andre Roberge) Date: Thu, 11 Mar 2021 20:38:57 +0000 Subject: [New-bugs-announce] [issue43476] Enabling access to showsyntaxerror for IDLE's shell Message-ID: <1615495137.57.0.647569220862.issue43476@roundup.psfhosted.org> New submission from Andre Roberge : As a result of https://bugs.python.org/issue43008, IDLE now supports custom exception hook for almost all cases except for code entered in the interactive shell that result in SyntaxError. It would be useful for some applications if a way to replace the error handling currently done by IDLE for this particular case was made available to user code. I consider this to be in the "would be nice to have" category, but in no means a high priority item. ---------- assignee: terry.reedy components: IDLE messages: 388524 nosy: aroberge, terry.reedy priority: normal severity: normal status: open title: Enabling access to showsyntaxerror for IDLE's shell type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 19:40:38 2021 From: report at bugs.python.org (Thomas) Date: Fri, 12 Mar 2021 00:40:38 +0000 Subject: [New-bugs-announce] [issue43477] from x import * behavior inconsistent between module types. 
Message-ID: <1615509638.08.0.549124406797.issue43477@roundup.psfhosted.org> New submission from Thomas : I'm looking for clarification as to how `from x import *` should operate when importing file/directory-based modules versus when importing a sub-module from within a directory-based module. While looking into a somewhat related issue with pylint, I noticed that `from x import *` appears to behave inconsistently when called from within a directory-based module on a sub-module. Whereas normally `from x import *` intentionally does not cause `x` to be added to the current namespace, when called within a directory-based module to import from a sub-module (so, `from .y import *` in an `__init__.py`, for example), the sub-module (let's say, `y`) *does* end up getting added to the importing namespace. From what I can tell, this should not be happening. If this oddity has been documented somewhere, I may have just missed it, so please let me know if it has been. This inconsistency is actually setting off pylint (and confusing its AST handling code) when you use the full path to reference any member of the `asyncio.subprocess` submodule (for example, `asyncio.subprocess.Process`) because, according to `asyncio`'s `__init__.py` file, no explicit import of the `subprocess` sub-module ever occurs, and yet you can draw the entire path all the way to it, and its members. I've attached a generic example of the different behaviors (tested with Python 3.9) using simple modules, including a demonstration of the sub-module import. Thomas ---------- components: Interpreter Core files: example.txz messages: 388530 nosy: kaorihinata priority: normal severity: normal status: open title: from x import * behavior inconsistent between module types. 
type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49871/example.txz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 11 20:53:43 2021 From: report at bugs.python.org (Matthew Suozzo) Date: Fri, 12 Mar 2021 01:53:43 +0000 Subject: [New-bugs-announce] [issue43478] Disallow Mock spec arguments from being Mocks Message-ID: <1615514023.14.0.0477922265442.issue43478@roundup.psfhosted.org> New submission from Matthew Suozzo : An unfortunately common pattern over large codebases of Python tests is for spec'd Mock instances to be provided with Mock objects as their specs. This gives the false sense that a spec constraint is being applied when, in fact, nothing will be disallowed. The two most frequently observed occurrences of this anti-pattern are as follows: * Re-patching an existing autospec. def setUp(self): mock.patch.object(mod, 'Klass', autospec=True).start() self.addCleanup(mock.patch.stopall) @mock.patch.object(mod, 'Klass', autospec=True) # :( def testFoo(self, mock_klass): # mod.Klass has no effective spec. * Deriving an autospec Mock from an already-mocked object def setUp(self): mock.patch.object(mod, 'Klass').start() ... mock_klass = mock.create_autospec(mod.Klass) # :( # mock_klass has no effective spec This is fairly easy to detect using _is_instance_mock at patch time however it can break existing tests. I have a patch ready and it seems like this error case is not frequent enough that it would be disruptive to address. Another option would be add it as a warning for a version e.g. 3.10 and then potentially make it a breaking change in 3.11. However considering this is a test-only change with a fairly clear path to fix it, that might be overly cautious. 
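The loss of enforcement is easy to demonstrate: a Mock used as a spec exposes a fully generic `(*args, **kwargs)` call signature, so nothing is ever rejected. A minimal illustration (not the proposed patch):

```python
from unittest import mock

def func(a, b):
    ...

specced = mock.create_autospec(func)
specced(1, 2)                      # matches func's signature: fine
try:
    specced(1, 2, 3)               # extra argument is rejected
    raised = False
except TypeError:
    raised = True
assert raised

# Spec'ing off a Mock: its __call__ accepts anything, so no call is
# ever rejected -- the "spec" is effectively a no-op.
mock_specced = mock.create_autospec(mock.MagicMock())
mock_specced(1, 2, 3, whatever=True)
```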
---------- components: Tests messages: 388532 nosy: msuozzo priority: normal severity: normal status: open title: Disallow Mock spec arguments from being Mocks type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 12 05:48:33 2021 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Fri, 12 Mar 2021 10:48:33 +0000 Subject: [New-bugs-announce] [issue43479] Remove a duplicate comment and assignment in http.client Message-ID: <1615546113.27.0.0553252720004.issue43479@roundup.psfhosted.org> New submission from G?ry : Remove a duplicate comment and assignment following the usage of a name already assigned in the http.client standard library. ---------- components: Library (Lib) messages: 388538 nosy: maggyero priority: normal pull_requests: 23597 severity: normal status: open title: Remove a duplicate comment and assignment in http.client type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 12 10:49:24 2021 From: report at bugs.python.org (Patrick Reader) Date: Fri, 12 Mar 2021 15:49:24 +0000 Subject: [New-bugs-announce] [issue43480] Add .path method/property to tempfile.* for a pathlib.Path Message-ID: <1615564164.94.0.724086636584.issue43480@roundup.psfhosted.org> New submission from Patrick Reader : It would be nice to have a `.path` method or property on `tempfile.NamedTemporaryFile`, `tempfile.TemporaryDirectory` which produces a `pathlib.Path` of their `.name` attribute, so one can use the modern interface directly. I think a method would be more appropriate than a property because you're explicitly allocating a new object (unless you use `@functools.cached_property` or similar to have a shared object between successive calls) I can do a PR. 
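Until something like that exists, the usual workaround is to wrap `.name` yourself, which is essentially what the proposed accessor would do under the hood:

```python
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    path = pathlib.Path(tmpdir)            # what a .path accessor would return
    (path / "data.txt").write_text("hello")
    assert (path / "data.txt").read_text() == "hello"

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = pathlib.Path(f.name)
assert path.exists()
path.unlink()                              # clean up, since delete=False
```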
---------- components: Library (Lib) messages: 388540 nosy: pxeger priority: normal severity: normal status: open title: Add .path method/property to tempfile.* for a pathlib.Path type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 12 16:44:37 2021 From: report at bugs.python.org (Chris Morton) Date: Fri, 12 Mar 2021 21:44:37 +0000 Subject: [New-bugs-announce] [issue43481] PyEval_EvalCode() namespace issue not observed in Python 2.7. Message-ID: <1615585477.01.0.106665214672.issue43481@roundup.psfhosted.org> New submission from Chris Morton : Compiling (Windows 10, MSVS 16):

    #include 

    int main(int argc, char* argv[]) {
        const char* code = "c=[1,2,3,4]\nd={'list': [c[i] for i in range(len(c))]}\nprint(d)\n";
        Py_Initialize();
        PyObject* pycode = Py_CompileString(code, "", Py_file_input );
        PyObject* main_module = PyImport_AddModule("__main__");
        PyObject* global_dict = PyModule_GetDict(main_module);
        PyObject* local_dict = PyDict_New();
        PyEval_EvalCode(pycode, global_dict, local_dict); // (PyCodeObject*) pycode in Python 2.7
        Py_Finalize();
        return 0;
    }

and executing yields:

    Traceback (most recent call last):
      File "", line 2, in 
      File "", line 2, in 
    NameError: name 'c' is not defined

While not particularly clever Python code, it is not clear why the reference c is not in scope, having previously been defined. Replacing the clumsy list comprehension using range() with c[:] or [ci for ci in c] produces the expected result: {'list': [1, 2, 3, 4]} This issue is not observed with Python 2.7 (.18). ---------- components: C API messages: 388557 nosy: chrisgmorton priority: normal severity: normal status: open title: PyEval_EvalCode() namespace issue not observed in Python 2.7.
versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 12 17:11:40 2021 From: report at bugs.python.org (Chris Morton) Date: Fri, 12 Mar 2021 22:11:40 +0000 Subject: [New-bugs-announce] [issue43482] PyAddPendingCall Function Never Called in 3.8, works in 3.6 Message-ID: <1615587100.2.0.130476300444.issue43482@roundup.psfhosted.org> New submission from Chris Morton : Building code on Mac OSX or Linux Ubuntu 16.04:

    #include #include #include #include

    // MacOSX build:
    //     g++ stop.cpp -I /include/pythonX.X -L /lib -lpythonX.X -o stop
    // Linux requires additional linkage:
    //     -lpthread

    void RunPythonScript() {
        PyRun_SimpleString("# import sys \n"
                           "import time \n"
                           "# sys.setcheckinterval(-1) \n"
                           "while True: \n"
                           " print('Running!') \n"
                           " time.sleep(1) \n");
        std::cout << "Terminating Python Interpreter." << std::endl;
    }

    int Stop(void *) { std::cout << "We threw an exception." <", line 6, in RuntimeError: Stop Python Execution.

Terminating Python Interpreter. Exiting Main Function. The Stop function is never called with the same code linked against Python 3.8. ---------- components: C API messages: 388559 nosy: chrisgmorton priority: normal severity: normal status: open title: PyAddPendingCall Function Never Called in 3.8, works in 3.6 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 12 20:35:52 2021 From: report at bugs.python.org (Larry Trammell) Date: Sat, 13 Mar 2021 01:35:52 +0000 Subject: [New-bugs-announce] [issue43483] Loss of content in simple (but oversize) SAX parsing Message-ID: <1615599352.35.0.560655392831.issue43483@roundup.psfhosted.org> New submission from Larry Trammell : == The Problem == I have observed a "loss of data" problem using the Python SAX parser, when processing an oversize but very simple machine-generated xhtml file.
The file represents a single N x 11 data table. W3C "tidy" reports no xml errors. The table is constructed in an entirely plausible manner, using table, tr, and td tags to define the table structure, and p tags to bracket content, which consists of small chunks of quoted text. There is nothing pathological, no extraneous whitespace characters, no empty data fields. Everything works perfectly in small test cases. But when a very large number of rows are present, a few characters of content strings are occasionally lost. I have observed 2 or 6 characters dropped. But here's the strange part. The pathological behavior disappears (or moves to another location) when one or more non-significant whitespace characters are inserted at an arbitrary location early in the file... e.g. an extra linefeed before the first tr tag.

== Context ==

I have observed identical behavior on desktop systems using an Intel Xeon E5-1607 or a Core-2 processor, running 32-bit or 64-bit Linux operating systems, variously using Python 3.8.5, 3.8, 3.7.3, and 3.5.1.

== Observing the Problem ==

Sorry that the test data is so bulky (even at 0.5% of original size), but bulk appears to be a necessary condition to observe the problem. Run the following command line:

    python3 EnchXMLTest.py EnchTestData.html

The test script invokes the SAX parser and generates messages on stdout. Using the original test data as provided, the test should run correctly to completion. Now modify the test data file, deleting the extraneous comment line (there is only one) found near the top of the file. Repeat the test run, and this time look for missing content characters in parsed content fields of the last record.

== Any guesses? ==

Beyond "user is oblivious," possibly something abnormal can occur at seams between large blocks of buffered text. The presence or absence of an extra character early in the data stream results in a corresponding shift in content location at the end of the buffer.
Other clues: is it relevant that the problem appears in a string field that contains slash characters? ---------- components: XML files: EnchSAXTest.zip messages: 388582 nosy: ridgerat1611 priority: normal severity: normal status: open title: Loss of content in simple (but oversize) SAX parsing type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49872/EnchSAXTest.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 12 22:30:12 2021 From: report at bugs.python.org (mike bayer) Date: Sat, 13 Mar 2021 03:30:12 +0000 Subject: [New-bugs-announce] [issue43484] we can create valid datetime objects that become invalid if the timezone is changed Message-ID: <1615606212.76.0.00672395304579.issue43484@roundup.psfhosted.org> New submission from mike bayer : So I'm pretty sure this is "not a bug" but it's a bit of a problem, and I have a user ringing the "security vulnerability" bell on this one, and to be honest I don't even know what any library would do to "prevent" this. Basically, the datetime() object limits based on a numerical year of MINYEAR, rather than limiting based on an actual logical date. So I can create an "impossible" date as follows:

    d = datetime.strptime("Mon Jan 1 00:00:00 0001 +01:00", "%c %z")

or like this:

    d = datetime(year=1, month=1, day=1, tzinfo=timezone(timedelta(hours=1)))

and....you can see where this is going - it can't be converted to a timezone that pushes the year to zero:

    >>> from datetime import datetime, timezone, timedelta
    >>> d = datetime(year=1, month=1, day=1, tzinfo=timezone(timedelta(hours=1)))
    >>> d.astimezone(timezone.utc)
    Traceback (most recent call last):
      File "", line 1, in 
    OverflowError: date value out of range

This is because, after all, astimezone() is just subtraction or addition, and if it overflows past the artificial boundary, well, you're out of luck. Why's this a security problem? ish?
because PostgreSQL has a data type "TIMESTAMP WITH TIMEZONE", and if you take said date and INSERT it into your database, then SELECT it back using any Python DBAPI that returns datetime() objects, like psycopg2, and your server is in a timezone with zero or negative offset compared to the given date, you get an error. So the mischievous user can create that datetime for some reason, and now they've broken your website, which can't SELECT that table anymore without crashing. So, suppose you maintain the database library that helps people send data in and out of psycopg2. We have the end user's application, we have the database abstraction library, we have the psycopg2 driver, we have Python's datetime() object with MINYEAR, and finally we have PostgreSQL with the TIMESTAMP WITH TIMEZONE datatype that I've never liked. Of those five roles, whose bug is this? I'd like to say it's the end user's fault for letting untrusted input put unusual timezones and timestamps in their system. But at the same time it's a little weird that Python lets me create this date that lacks the ability to convert into UTC. Thanks for reading!
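The boundary behavior described in the report is easy to demonstrate. A minimal sketch; the defensive `try`/`except` is just one way a consuming library could cope, not a recommendation from the report:

```python
from datetime import datetime, timedelta, timezone

# A "valid" aware datetime that cannot survive conversion to UTC:
d = datetime(year=1, month=1, day=1, tzinfo=timezone(timedelta(hours=1)))

try:
    d.astimezone(timezone.utc)
    utc_ok = True
except OverflowError:
    # Subtracting the +01:00 offset would push the date below
    # datetime.min (year datetime.MINYEAR), so astimezone() overflows.
    utc_ok = False

assert not utc_ok
```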
---------- components: Library (Lib) messages: 388585 nosy: zzzeek priority: normal severity: normal status: open title: we can create valid datetime objects that become invalid if the timezone is changed versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 12 23:03:54 2021 From: report at bugs.python.org (=?utf-8?q?Kadir_SEL=C3=87UK?=) Date: Sat, 13 Mar 2021 04:03:54 +0000 Subject: [New-bugs-announce] [issue43485] devturks Message-ID: <1615608234.17.0.36625339556.issue43485@roundup.psfhosted.org> Change by Kadir SELÇUK : ---------- assignee: docs at python components: 2to3 (2.x to 3.x conversion tool), Argument Clinic, Build, C API, Cross-Build, Demos and Tools, Distutils, Documentation, Extension Modules, FreeBSD, IDLE, IO, Installation, Interpreter Core, Library (Lib), Regular Expressions, SSL, Subinterpreters, Tests, Tkinter, Unicode, Windows, XML, asyncio, ctypes, email hgrepos: 399 nosy: Alex.Willmer, asvetlov, barry, docs at python, dstufft, eric.araujo, ezio.melotti, koobs, larry, mrabarnett, paul.moore, post.kadirselcuk, r.david.murray, steve.dower, terry.reedy, tim.golden, vstinner, yselivanov, zach.ware priority: normal severity: normal status: open title: devturks type: resource usage versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 13 08:20:10 2021 From: report at bugs.python.org (Adam) Date: Sat, 13 Mar 2021 13:20:10 +0000 Subject: [New-bugs-announce] [issue43486] Python 3.9 installer not updating ARP table Message-ID: <1615641610.33.0.0630077237897.issue43486@roundup.psfhosted.org> New submission from Adam :

1. Install 3.9.0 using the following command line options:

    python-3.9.0.exe /quiet InstallAllUsers=1

2. Install 3.9.2 using the following command line options:

    python-3.9.2.exe /quiet InstallAllUsers=1

3. Observe that 3.9.2 installed successfully; however, the ARP table does not reflect the latest version (see first screenshot in the attachment): it still shows 3.9.0 as installed.

4. Uninstall 3.9.2 using the following command line options:

    python-3.9.2.exe /uninstall /silent

5. Observe that Python 3.9.0 is still listed as installed in the ARP table.

Looking in the registry, all Python installed products are removed except for Python Launcher. Maybe it is by design to leave Python Launcher on the system, maybe not, but I think keeping the ARP table tidy would reduce confusion for users. See second screenshot in the attachment. ---------- components: Installation files: 1.jpg messages: 388615 nosy: codaamok priority: normal severity: normal status: open title: Python 3.9 installer not updating ARP table type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49873/1.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 13 11:14:46 2021 From: report at bugs.python.org (Bart Broere) Date: Sat, 13 Mar 2021 16:14:46 +0000 Subject: [New-bugs-announce] [issue43487] Rename __unicode__ methods to __str__ in 2to3 conversion Message-ID: <1615652086.71.0.102921679786.issue43487@roundup.psfhosted.org> New submission from Bart Broere : While porting a (Django) code base recently, using 2to3, I missed the conversion from __unicode__ to __str__. I have created my own 2to3 fixer, which might be useful for other people.
If it's not useful enough to be included in lib2to3, or has side effects that I did not foresee, please let me know :-) ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 388627 nosy: bartbroere priority: normal severity: normal status: open title: Rename __unicode__ methods to __str__ in 2to3 conversion type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 13 13:18:11 2021 From: report at bugs.python.org (Ehsonjon Gadoev) Date: Sat, 13 Mar 2021 18:18:11 +0000 Subject: [New-bugs-announce] [issue43488] Added new methods to vectors.py Message-ID: <1615659491.16.0.390257567336.issue43488@roundup.psfhosted.org> New submission from Ehsonjon Gadoev : What's new in Vector (vector.py):

1) Added multiply (Vector and Vector)
2) Added division (Vector and Vector) and (Vector and scalar)
3) Added floor division (Vector and Vector) and (Vector and scalar)
4) Added __mod__ (Vector and Vector) and (Vector and scalar)

These new methods are very useful! [It is a beta version! By the way, we will fix bugs.] ---------- components: Demos and Tools hgrepos: 402 messages: 388631 nosy: Ehsonjon Gadoev priority: normal pull_requests: 23608 severity: normal status: open title: Added new methods to vectors.py versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 13 19:56:59 2021 From: report at bugs.python.org (Adrian LeDeaux) Date: Sun, 14 Mar 2021 00:56:59 +0000 Subject: [New-bugs-announce] [issue43489] Can't install, nothing to install Message-ID: <1615683419.81.0.261055415418.issue43489@roundup.psfhosted.org> New submission from Adrian LeDeaux : Python 2.7 won't install. I get the error "there is nothing to install" or something to that effect. I am using MacOS High Sierra 10.13.6. I tried both installer downloads. None worked. And I got the same error every time.
Anyone have any ideas on what is going on? ---------- messages: 388642 nosy: aledeaux priority: normal severity: normal status: open title: Can't install, nothing to install _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 13 20:18:14 2021 From: report at bugs.python.org (Adrian LeDeaux) Date: Sun, 14 Mar 2021 01:18:14 +0000 Subject: [New-bugs-announce] [issue43490] IDLE freezes at random Message-ID: <1615684694.01.0.00673865405837.issue43490@roundup.psfhosted.org> New submission from Adrian LeDeaux : My IDLE shell keeps freezing when using the turtle module. I am using MacOS High Sierra 10.13.6. It says it is fine, but I can't get the window open. I have to restart the shell entirely. I can't type or do anything. I have to do the [command]+[Q] shortcut and then open the app again. Any ideas? ---------- messages: 388643 nosy: aledeaux priority: normal severity: normal status: open title: IDLE freezes at random versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 14 08:54:39 2021 From: report at bugs.python.org (parsa mpsh) Date: Sun, 14 Mar 2021 12:54:39 +0000 Subject: [New-bugs-announce] [issue43491] Windows filepath bug Message-ID: <1615726479.05.0.699625181333.issue43491@roundup.psfhosted.org> New submission from parsa mpsh : I was testing my program in a GitHub workflow and I detected a bug. I was opening a file on Windows:

```
f = open(os.path.dirname(__file__) + '/some/file.txt')
```

This means the file path will be `C:\some\path/some/file.txt`, but I received an OSError. Why? Because the above path contains both `/` and `\`. I think this bug should be fixed by automatically converting all `/`s to `\` in file paths.
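Building the path with the platform-aware helpers avoids mixing separators in the first place. A minimal sketch; `base` here stands in for the report's `os.path.dirname(__file__)`, and the `some/file.txt` layout is the reporter's hypothetical example:

```python
import os
import pathlib

base = os.getcwd()  # stand-in for os.path.dirname(__file__) in the report

# os.path.join uses the platform's separator consistently
# ('\' on Windows, '/' elsewhere):
joined = os.path.join(base, "some", "file.txt")
assert joined.endswith(os.path.join("some", "file.txt"))

# pathlib normalises separators the same way:
p = pathlib.Path(base) / "some" / "file.txt"
assert str(p) == joined
```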
---------- components: Windows messages: 388672 nosy: parsampsh, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows filepath bug type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 14 16:12:12 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Sun, 14 Mar 2021 20:12:12 +0000 Subject: [New-bugs-announce] [issue43492] Upgrade to SQLite 3.35.0 in macOS and Windows Message-ID: <1615752732.27.0.278073347477.issue43492@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : SQLite 3.35.0 was released a couple of days ago: https://www.sqlite.org/releaselog/3_35_0.html Suggesting to hold off for a week or two, to see if a bug-fix release happens. ---------- components: Windows, macOS messages: 388685 nosy: erlendaasland, ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Upgrade to SQLite 3.35.0 in macOS and Windows type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 14 16:46:29 2021 From: report at bugs.python.org (Mike Glover) Date: Sun, 14 Mar 2021 20:46:29 +0000 Subject: [New-bugs-announce] [issue43493] EmailMessage mis-folding headers of a certain length Message-ID: <1615754789.09.0.690251462615.issue43493@roundup.psfhosted.org> New submission from Mike Glover : The attached file demonstrates the incorrect folding behavior I'm seeing. Header lines of a certain total length get folded after the colon following the header name, which is not valid RFC. Slightly longer or shorter lines are folded correctly. 
Interestingly, the test file produces correct output on 3.5.2.

    $ python --version
    Python 3.8.5
    $ sudo apt install python3
    ...
    python3 is already the newest version (3.8.2-0ubuntu2).

(yes, that difference has me scratching my head) And yes, I realize this is not the latest release of the 3.8 branch, but it *is* the latest available through apt on Ubuntu 20.04 LTS, and a search of the issue tracker and the release notes for all of 3.8.* turned up nothing applicable. ---------- components: email files: header_misfolding.py messages: 388687 nosy: barry, mglover, r.david.murray priority: normal severity: normal status: open title: EmailMessage mis-folding headers of a certain length type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49875/header_misfolding.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 14 16:56:45 2021 From: report at bugs.python.org (Skip Montanaro) Date: Sun, 14 Mar 2021 20:56:45 +0000 Subject: [New-bugs-announce] [issue43494] Minor changes to Objects/lnotab_notes.txt Message-ID: <1615755405.52.0.84657941654.issue43494@roundup.psfhosted.org> New submission from Skip Montanaro : For the VM work I'm doing I need to adapt to Mark's new line number table format. (I stalled for several months, hence this rather late report.) As I was reading Objects/lnotab_notes.txt I noticed a couple typos, fixed those, and threw in a couple other minor edits. A PR is incoming.
---------- assignee: Mark.Shannon messages: 388688 nosy: Mark.Shannon, skip.montanaro priority: low severity: normal status: open title: Minor changes to Objects/lnotab_notes.txt versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 14 20:32:22 2021 From: report at bugs.python.org (Thomas Anderson) Date: Mon, 15 Mar 2021 00:32:22 +0000 Subject: [New-bugs-announce] [issue43495] Missing frame block push in compiler_async_comprehension_generator() Message-ID: <1615768342.52.0.844768701599.issue43495@roundup.psfhosted.org> New submission from Thomas Anderson : The runtime pushes a frame block in SETUP_FINALLY, so the compiler needs to account for that, otherwise the runtime block stack may overflow. ---------- components: Interpreter Core messages: 388696 nosy: tomkpz priority: normal severity: normal status: open title: Missing frame block push in compiler_async_comprehension_generator() versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 14 21:40:29 2021 From: report at bugs.python.org (Jacob Walls) Date: Mon, 15 Mar 2021 01:40:29 +0000 Subject: [New-bugs-announce] [issue43496] Save As dialog in IDLE doesn't accept keyboard shortcuts on MacOS Message-ID: <1615772429.67.0.84187114429.issue43496@roundup.psfhosted.org> New submission from Jacob Walls : Cmd-A to select all or Cmd-Z to undo, etc., have no effect when typing in the "Save As:" or "Tags:" fields of the native Save As... dialog on MacOS. Cmd-R, curiously, opens a Finder window. IDLE dialogs such as Search behave as expected (Cmd-A selects all). Python 3.9.2 macOS 10.15.7 (and 10.13.6) Pardon me if my search for existing tickets came up short. 
---------- assignee: terry.reedy components: IDLE, macOS messages: 388699 nosy: jacobtylerwalls, ned.deily, ronaldoussoren, terry.reedy priority: normal severity: normal status: open title: Save As dialog in IDLE doesn't accept keyboard shortcuts on MacOS type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 03:31:25 2021 From: report at bugs.python.org (Greg Darke) Date: Mon, 15 Mar 2021 07:31:25 +0000 Subject: [New-bugs-announce] [issue43497] SyntaxWarning for "assertion is always true, perhaps remove parentheses?" does not work with constants Message-ID: <1615793485.02.0.990067169214.issue43497@roundup.psfhosted.org> New submission from Greg Darke : The following block of code does not produce a SyntaxWarning in python 3.7 and above (it does produce a warning in python 3.6 and below): ``` assert(False, 'msg') ``` If the tuple is not a constant (for example `(x, 'msg')`), then a warning is still produced. ---------- components: Interpreter Core messages: 388711 nosy: darke2 priority: normal severity: normal status: open title: SyntaxWarning for "assertion is always true, perhaps remove parentheses?" 
does not work with constants type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 04:23:36 2021 From: report at bugs.python.org (Jakub Kulik) Date: Mon, 15 Mar 2021 08:23:36 +0000 Subject: [New-bugs-announce] [issue43498] "dictionary changed size during iteration" error in _ExecutorManagerThread Message-ID: <1615796616.26.0.51182574064.issue43498@roundup.psfhosted.org> New submission from Jakub Kulik : Recently several of our Python 3.9 builds froze during `make install` with the following trace in the logs:

    Listing .../components/python/python39/build/prototype/sparc/usr/lib/python3.9/lib2to3/tests/data/fixers/myfixes...
    Exception in thread Thread-1:
    Traceback (most recent call last):
      File ".../components/python/python39/build/prototype/sparc/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
        self.run()
      File ".../components/python/python39/build/prototype/sparc/usr/lib/python3.9/concurrent/futures/process.py", line 317, in run
        result_item, is_broken, cause = self.wait_result_broken_or_wakeup()
      File ".../components/python/python39/build/prototype/sparc/usr/lib/python3.9/concurrent/futures/process.py", line 376, in wait_result_broken_or_wakeup
        worker_sentinels = [p.sentinel for p in self.processes.values()]
      File ".../components/python/python39/build/prototype/sparc/usr/lib/python3.9/concurrent/futures/process.py", line 376, in 
        worker_sentinels = [p.sentinel for p in self.processes.values()]
    RuntimeError: dictionary changed size during iteration

After this, the build freezes and never ends (most likely waiting for the broken thread). We see this only in Python 3.9 (3.7 doesn't seem to be affected, and we don't deliver other versions) and only when doing full builds of the entire Userland, meaning that this might be related to big utilization of the build machine?
That said, it only happened three or four times, so this might be just a coincidence. A simple fix seems to be this (PR shortly):

    --- Python-3.9.1/Lib/concurrent/futures/process.py
    +++ Python-3.9.1/Lib/concurrent/futures/process.py
    @@ -373,7 +373,7 @@ class _ExecutorManagerThread(threading.T
             assert not self.thread_wakeup._closed
             wakeup_reader = self.thread_wakeup._reader
             readers = [result_reader, wakeup_reader]
    -        worker_sentinels = [p.sentinel for p in self.processes.values()]
    +        worker_sentinels = [p.sentinel for p in self.processes.copy().values()]
             ready = mp.connection.wait(readers + worker_sentinels)
             cause = None

This is on Oracle Solaris and on both SPARC and Intel machines. ---------- components: Installation, asyncio messages: 388712 nosy: asvetlov, kulikjak, yselivanov priority: normal severity: normal status: open title: "dictionary changed size during iteration" error in _ExecutorManagerThread type: crash versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 08:12:03 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 15 Mar 2021 12:12:03 +0000 Subject: [New-bugs-announce] [issue43499] Compiler warnings in building Python 3.9 on Windows Message-ID: <1615810323.65.0.00918597187884.issue43499@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently building Python 3.9 on Windows produces many compiler warnings. 3.8 and master are clean.

* Warnings in the _sre module caused by a bug in MSVC (it complains about automatic conversion of "void **" to "const void *"). Fixed by backporting PR20508.
* Warnings in many files related to using legacy C API (e.g. PyUnicode_AsUnicode) in Windows-specific code.
* ..\Objects\exceptions.c(2313): warning C4098: 'MemoryError_dealloc': 'void' function returning a value
* ..\Objects\frameobject.c(400): warning C4267: 'initializing': conversion from 'size_t' to 'int', possible loss of data

The last two may be hidden bugs. ---------- components: Extension Modules, Interpreter Core messages: 388731 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Compiler warnings in building Python 3.9 on Windows versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 08:30:48 2021 From: report at bugs.python.org (wyz23x2) Date: Mon, 15 Mar 2021 12:30:48 +0000 Subject: [New-bugs-announce] [issue43500] Add filtercase() into fnmatch Message-ID: <1615811448.09.0.46189738482.issue43500@roundup.psfhosted.org> New submission from wyz23x2 : The fnmatch module has a filter() function:

> Construct a list from those elements of the iterable names that match pattern.
> It is the same as [n for n in names if fnmatch(n, pattern)], but implemented more efficiently.

However, since there is the fnmatchcase() function, we should have filtercase() too.
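For reference, a minimal pure-Python sketch of what the proposed filtercase() could look like; the name and signature mirror fnmatch.filter(), but this is only an illustration without filter()'s internal optimisations:

```python
from fnmatch import fnmatchcase


def filtercase(names, pattern):
    """Like fnmatch.filter(), but case-sensitive (hypothetical API)."""
    return [name for name in names if fnmatchcase(name, pattern)]


# Case-sensitive matching keeps only the exact-case name:
assert filtercase(["Foo.txt", "foo.txt", "FOO.TXT"], "foo.*") == ["foo.txt"]
```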
---------- components: Library (Lib) messages: 388732 nosy: wyz23x2 priority: normal severity: normal status: open title: Add filtercase() into fnmatch versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 09:11:05 2021 From: report at bugs.python.org (Anton Khirnov) Date: Mon, 15 Mar 2021 13:11:05 +0000 Subject: [New-bugs-announce] [issue43501] email._header_value_parse throws AttributeError on display name ending with dot Message-ID: <1615813865.05.0.348558302999.issue43501@roundup.psfhosted.org> New submission from Anton Khirnov : On parsing an email where the display name in an address ends on a dot immediately followed by the angle-addr, accessing the resulting mailbox display_name throws:

    /usr/lib/python3.9/email/_header_value_parser.py in value(self)
        589         if self[0].token_type=='cfws' or self[0][0].token_type=='cfws':
        590             pre = ' '
    --> 591         if self[-1].token_type=='cfws' or self[-1][-1].token_type=='cfws':
        592             post = ' '
        593         return pre+quote_string(self.display_name)+post

    AttributeError: 'str' object has no attribute 'token_type'

The problem is that self[-1] is the terminal DOT. An example of the problematic header is:

    From: foobar.
---------- components: email messages: 388738 nosy: barry, elenril, r.david.murray priority: normal severity: normal status: open title: email._header_value_parse throws AttributeError on display name ending with dot type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 09:54:44 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Mon, 15 Mar 2021 13:54:44 +0000 Subject: [New-bugs-announce] [issue43502] [C-API] Convert obvious unsafe macros to static inline functions Message-ID: <1615816484.12.0.00251754107838.issue43502@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : Convert macros to static inline functions if...:

- the macro contains a clear pitfall (for example "duplication of side effects")
- the fix is trivial
- the macro is not used as an l-value (for example Py_TYPE())

See also:

- https://gcc.gnu.org/onlinedocs/cpp/Macro-Pitfalls.html
- bpo-43181
- bpo-39573
- bpo-40170
The problem is that these objects should not be shared between interpreters. There are a number of possible solutions for isolating the objects, but the big constraint is that the solution cannot break the stable ABI. ---------- components: C API messages: 388759 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: [subinterpreters] PyObject statics exposed in the limited API break isolation. type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 19:06:20 2021 From: report at bugs.python.org (Julien Palard) Date: Mon, 15 Mar 2021 23:06:20 +0000 Subject: [New-bugs-announce] [issue43504] effbot.org down Message-ID: <1615849580.97.0.243341860557.issue43504@roundup.psfhosted.org> New submission from Julien Palard : effbot.org is down; it's currently displaying:

> effbot.org on hiatus
>
> effbot.org is taking a break. We'll be back, in some form or another.

But docs.python.org has a few links pointing to it; `git grep effbot` finds 11 of them in the Doc/. I think they should be manually reviewed one by one, checking on the web archive [1] if they still contain relevant information. For example:

    Doc/library/xml.etree.elementtree.rst:See http://effbot.org/zone/element-index.htm for tutorials and links to other

The given links give Python 2 examples, so I think it's OK just to remove it.
[1]: https://web.archive.org/web/20200315142708/http://effbot.org/ ---------- messages: 388787 nosy: mdk priority: normal severity: normal status: open title: effbot.org down _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 19:09:44 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Mon, 15 Mar 2021 23:09:44 +0000 Subject: [New-bugs-announce] [issue43505] [sqlite3] Explicitly initialise and shut down sqlite3 Message-ID: <1615849784.81.0.125050267571.issue43505@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : We should explicitly initialise (and shut down) the SQLite library in the sqlite3 module. This may be required in future releases: Quoting from the SQLite docs: "For maximum portability, it is recommended that applications always invoke sqlite3_initialize() directly prior to using any other SQLite interface. Future releases of SQLite may require this. In other words, the behavior exhibited when SQLite is compiled with SQLITE_OMIT_AUTOINIT might become the default behavior in some future release of SQLite." Ref. - https://sqlite.org/c3ref/initialize.html - https://sqlite.org/compile.html#omit_autoinit ---------- components: Library (Lib) messages: 388788 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: [sqlite3] Explicitly initialise and shut down sqlite3 type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 22:38:55 2021 From: report at bugs.python.org (Inada Naoki) Date: Tue, 16 Mar 2021 02:38:55 +0000 Subject: [New-bugs-announce] [issue43506] PEP 624: Update document for removal schedule Message-ID: <1615862335.75.0.933917943771.issue43506@roundup.psfhosted.org> New submission from Inada Naoki : They are documented as "will be removed in 4.0" now. 
---------- components: C API messages: 388800 nosy: methane priority: normal severity: normal status: open title: PEP 624: Update document for removal schedule versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 23:08:01 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 16 Mar 2021 03:08:01 +0000 Subject: [New-bugs-announce] [issue43507] Variables in locals scope fails to be printed. Message-ID: <1615864081.29.0.727460269138.issue43507@roundup.psfhosted.org> New submission from Xinmeng Xia : The following code 1 calls the functions 'compile' and 'exec' to execute the statement "s=1". Then we print the value of 's'. This code runs correctly on Python 3.9.2 and outputs the expected result. However, when we pack the whole code into a function (code 2), the execution fails. code 1: =================== mstr = "s=1" exec(compile(mstr,'','exec')) print(s) =================== output: 1 code 2: =================== def foo(): mstr = "s=1" exec(compile(mstr,'','exec')) print(s) foo() =================== output: Traceback (most recent call last): File "/home/xxm/Desktop/apifuzz/doc/genDoc.py", line 37, in <module> foo() File "/home/xxm/Desktop/apifuzz/doc/genDoc.py", line 35, in foo print(s) NameError: name 's' is not defined By the way, when we print locals(), 's' exists in the local scope, so it should not fail. >>> print(locals()) {'mstr': 's=1', 's': 1} ---------- components: Interpreter Core messages: 388802 nosy: xxm priority: normal severity: normal status: open title: Variables in locals scope fails to be printed.
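The behavior in code 2 matches the documented semantics of exec() inside a function: code executed against a function's locals cannot rebind those locals. A minimal sketch of the usual workaround, using an explicit namespace dict (names here are illustrative, not taken from the report):

```python
# Pass exec() an explicit namespace and read the result back from it,
# since exec() cannot rebind the local variables of an enclosing function.
def run_snippet():
    ns = {}
    exec(compile("s = 1", "<string>", "exec"), ns)
    return ns["s"]

print(run_snippet())  # -> 1
```

With the explicit dict, the assignment made by the compiled code is visible to the caller, whereas relying on the function's implicit locals() is not supported.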
type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 23:09:30 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 16 Mar 2021 03:09:30 +0000 Subject: [New-bugs-announce] [issue43508] Miscompilation information for tarfile.open() when given too many arguments Message-ID: <1615864170.77.0.811664842161.issue43508@roundup.psfhosted.org> New submission from Xinmeng Xia : In the following example, we only give 10 arguments to tarfile.open(). The error message shows "11 arguments were given". We give it 5 arguments and the error message shows "6 were given". This is not correct. ========================================================== >>> tarfile.open(*[None]*10) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: open() takes from 1 to 5 positional arguments but 11 were given >>> tarfile.open(1,2,3,4,5) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: open() takes from 1 to 5 positional arguments but 6 were given ========================================================== Expected output: for 10 given arguments,
the error message is "open() takes from 1 to 5 positional arguments but 10 were given" Python: 3.9.2 System: ubuntu 16.04 ---------- components: Library (Lib) messages: 388803 nosy: xxm priority: normal severity: normal status: open title: Miscompilation information for tarfile.open() when given too many arguments type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 15 23:10:32 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 16 Mar 2021 03:10:32 +0000 Subject: [New-bugs-announce] [issue43509] CFunctionType object should be hashable in Python Message-ID: <1615864232.13.0.0243161103861.issue43509@roundup.psfhosted.org> New submission from Xinmeng Xia : See the following examples: ctypes.resize is a built-in function and it's hashable. ctypes.memset is a C function (CFunctionType object) and it's 'unhashable'. However, ctypes.resize and ctypes.memset are both immutable. They should act the same in Python. It should not report an unhashable type error when __hash__() is called on ctypes.memset. ----------------------------------------------- >>> import ctypes >>> ctypes.resize >>> ctypes.resize.__hash__() 146309 >>> ctypes.memset >>> ctypes.memset.__hash__() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type ----------------------------------------------- Python version: 3.9.2 system: Ubuntu Expected output: ctypes.memset is hashable.
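Whether CFuncPtr objects should gain a `__hash__` is a decision for the tracker; in the meantime, a workaround sketch (the `registry` dict is made up for illustration) is to key containers by the function's machine address, obtained through `ctypes.cast`:

```python
import ctypes

# A CFunctionType object such as ctypes.memset may not be usable as a
# dict key directly; its address is a plain int and is always hashable.
addr = ctypes.cast(ctypes.memset, ctypes.c_void_p).value
registry = {addr: "memset"}
print(isinstance(addr, int))  # -> True
```

The address is stable for the lifetime of the loaded library, which is usually enough for registry-style lookups.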
---------- components: Interpreter Core messages: 388804 nosy: xxm priority: normal severity: normal status: open title: CFunctionType object should be hashable in Python type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 00:19:49 2021 From: report at bugs.python.org (Inada Naoki) Date: Tue, 16 Mar 2021 04:19:49 +0000 Subject: [New-bugs-announce] [issue43510] PEP 597: Implemente encoding="locale" option and EncodingWarning Message-ID: <1615868389.17.0.260147645451.issue43510@roundup.psfhosted.org> New submission from Inada Naoki : PEP 597 is accepted. ---------- components: IO messages: 388809 nosy: methane priority: normal severity: normal status: open title: PEP 597: Implemente encoding="locale" option and EncodingWarning versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 02:13:14 2021 From: report at bugs.python.org (Thomas Wamm) Date: Tue, 16 Mar 2021 06:13:14 +0000 Subject: [New-bugs-announce] [issue43511] Tk 8.6.11 slow on M1 Mac Mini MacOS Python 3.9.2 native ARM version Message-ID: <1615875194.4.0.35991043769.issue43511@roundup.psfhosted.org> New submission from Thomas Wamm : Comparing performance of a Tkinter graphics program with Python 3.9.2 on an M1 Mac Mini, I find it to be slower than earlier versions, and much slower than the same program running on other computers such as a Raspberry Pi 3B (Python 3.7.3). Initial investigation suggests the slow down is because of Tk 8.6.11. The same program on Windows 10 on Intel i5-8250U is at least 10x faster, contrary to expectations. Has anyone noticed similar? The program is simple & portable, downloadable from: https://github.com/ThomasWamm/TerraLunar.git All it does is 2-D orbital mechanics simulation, to teach me Python. 
---------- components: Tkinter messages: 388822 nosy: thomaswamm priority: normal severity: normal status: open title: Tk 8.6.11 slow on M1 Mac Mini MacOS Python 3.9.2 native ARM version type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 06:02:24 2021 From: report at bugs.python.org (Paul) Date: Tue, 16 Mar 2021 10:02:24 +0000 Subject: [New-bugs-announce] [issue43512] Bug in isinstance(instance, cls) with cls being a protocol? (PEP 544) Message-ID: <1615888944.07.0.094574416265.issue43512@roundup.psfhosted.org> New submission from Paul : The section "Subtyping relationships with other types" of PEP 544 states: "A concrete type X is a subtype of protocol P if and only if X implements all protocol members of P with compatible types. In other words, subtyping with respect to a protocol is always structural." This requirement is violated by the current implementation of CPython (version 3.9.2): ``` from typing import Protocol class P(Protocol): pm: str # no default value, but still a protocol member class C(P): # inherits P but does NOT implement pm, since P did not provide a default value pass assert isinstance(C(), P) # violates the PEP 544 requirement cited above C().pm # raises: AttributeError: 'C' object has no attribute 'pm' ``` ---------- components: Library (Lib) messages: 388827 nosy: paul-dest priority: normal severity: normal status: open title: Bug in isinstance(instance, cls) with cls being a protocol? 
(PEP 544) type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 09:57:10 2021 From: report at bugs.python.org (ThiefMaster) Date: Tue, 16 Mar 2021 13:57:10 +0000 Subject: [New-bugs-announce] [issue43513] venv: recreate symlinks on --upgrade Message-ID: <1615903030.34.0.439862030712.issue43513@roundup.psfhosted.org> New submission from ThiefMaster : When using `python -m venv --upgrade someenv`, it rewrites `pyvenv.cfg` with the current python version but leaves the python symlinks untouched (https://github.com/python/cpython/blob/a8ef4572a6b28bcfc0b10b34fa4204954b9dd761/Lib/venv/__init__.py#L248) This is of course fine when the original location of the Python interpreter is something like `/usr/bin/python3.9`, but when using pyenv it's a path containing the full version such as `/home/USER/.pyenv/versions/3.9.2/bin/python`, which makes in-place updates of minor Python versions harder than needed (a manual update of the symlink is needed). If you agree that this change makes sense, I wouldn't mind sending a PR for this... ---------- components: Library (Lib) messages: 388840 nosy: ThiefMaster priority: normal severity: normal status: open title: venv: recreate symlinks on --upgrade type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 10:41:07 2021 From: report at bugs.python.org (Frank Ueberschar) Date: Tue, 16 Mar 2021 14:41:07 +0000 Subject: [New-bugs-announce] [issue43514] Disallow fork in a subinterpreter affects multiprocessing plugin Message-ID: <1615905667.88.0.462528885021.issue43514@roundup.psfhosted.org> New submission from Frank Ueberschar : Related to this issue https://bugs.python.org/issue34651, our Bareos libcloud plugin cannot be run with Python > 3.7. We are using subprocesses in a C-subinterpreter environment.
Is there a way to circumvent rewriting our code completely? ---------- components: C API messages: 388841 nosy: franku priority: normal severity: normal status: open title: Disallow fork in a subinterpreter affects multiprocessing plugin type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 10:58:40 2021 From: report at bugs.python.org (Antoine Pitrou) Date: Tue, 16 Mar 2021 14:58:40 +0000 Subject: [New-bugs-announce] [issue43515] Lazy import in concurrent.futures produces partial import errors Message-ID: <1615906720.22.0.921219635478.issue43515@roundup.psfhosted.org> New submission from Antoine Pitrou : Here is a reproducer script: https://gist.github.com/pitrou/a73fa2cfce2557e0dd435353b9976972 With Python 3.6 it works fine. ---------- components: Library (Lib) messages: 388844 nosy: methane, pitrou priority: normal severity: normal stage: needs patch status: open title: Lazy import in concurrent.futures produces partial import errors type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 11:50:02 2021 From: report at bugs.python.org (Yann Enoti) Date: Tue, 16 Mar 2021 15:50:02 +0000 Subject: [New-bugs-announce] [issue43516] python on raspberry pi Message-ID: <1615909802.06.0.398723617988.issue43516@roundup.psfhosted.org> New submission from Yann Enoti : Any idea why this doesn't work?
import socket HOST = "192.168.2.114" PORT = 8000 #initiate port no above 1024 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) print('Socket created') #try: s.bind((HOST, PORT)) #bind host address and port together# #except socket.error as e: #print('Bind failed.') s.listen(10) #configure how many clients the server can listen to simultaneously print("Socket is Listening") #conn, addr = s.accept() #accept new connection #print("connection from:" + str(addr)) while True: conn, addr = s.accept() #accept new connection print("connection from:" + str(addr)) data = conn.recv(1024).decode() #receive data stream #if not data: #if data is not received break # break print("from Android App:" + str(data)) data = input ('from Python Server:') conn.send(data.encode()) #send data to client conn.close() #close the connection ---------- components: Build messages: 388850 nosy: gyenoti priority: normal severity: normal status: open title: python on raspberry pi _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 12:07:50 2021 From: report at bugs.python.org (Antoine Pitrou) Date: Tue, 16 Mar 2021 16:07:50 +0000 Subject: [New-bugs-announce] [issue43517] Fix false positives in circular import detection with from-imports Message-ID: <1615910870.21.0.167533445549.issue43517@roundup.psfhosted.org> Change by Antoine Pitrou : ---------- assignee: pitrou components: Library (Lib) nosy: pitrou priority: deferred blocker severity: normal stage: needs patch status: open title: Fix false positives in circular import detection with from-imports type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 12:20:43 2021 From: report at bugs.python.org (annesylvie) Date: Tue, 16 Mar 2021 16:20:43 +0000 Subject: [New-bugs-announce] [issue43518] textwrap.shorten does not always respect 
word boundaries Message-ID: <1615911643.42.0.876166879751.issue43518@roundup.psfhosted.org> Change by annesylvie : ---------- components: Library (Lib) nosy: annesylvie priority: normal severity: normal status: open title: textwrap.shorten does not always respect word boundaries type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 13:20:30 2021 From: report at bugs.python.org (David Elmakias) Date: Tue, 16 Mar 2021 17:20:30 +0000 Subject: [New-bugs-announce] [issue43519] access python private variable Message-ID: <1615915230.85.0.466298087541.issue43519@roundup.psfhosted.org> New submission from David Elmakias : It might be my lack of knowledge in Python; however, I find this behavior a bit strange. By declaring a private variable in a class, Python creates an attribute with the name '_<class_name>__<variable_name>'. Both are located at different locations in memory. I found that by assigning data to the created variable with the exact name/notation '_<class_name>__<variable_name>' I changed the private variable data.
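What this report observes is CPython's documented name mangling: an identifier of the form `__name` in a class body is rewritten to `_ClassName__name`, so there is really one attribute reachable under its mangled name rather than two separate variables. A minimal sketch with a hypothetical `Account` class:

```python
class Account:
    def __init__(self):
        self.__balance = 100  # stored as _Account__balance by name mangling

acct = Account()
# The "private" attribute is reachable under its mangled name, and writing
# through that name changes the very same attribute:
print(acct._Account__balance)  # -> 100
acct._Account__balance = 0
print(acct._Account__balance)  # -> 0
```

Mangling is a collision-avoidance mechanism for subclassing, not access control, which is why assignment through the mangled name "changes the private variable".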
---------- components: Build files: access_class_private_variable.py messages: 388862 nosy: AluminumPirate priority: normal severity: normal status: open title: access python private variable type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49878/access_class_private_variable.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 14:09:04 2021 From: report at bugs.python.org (Carl Anderson) Date: Tue, 16 Mar 2021 18:09:04 +0000 Subject: [New-bugs-announce] [issue43520] Fraction only handles regular slashes ("/") and fails with other similar slashes Message-ID: <1615918144.03.0.939769511159.issue43520@roundup.psfhosted.org> New submission from Carl Anderson : Fraction works with a regular slash: >>> from fractions import Fraction >>> Fraction("1/2") Fraction(1, 2) but there are other similar slashes such as '⁄' (U+2044) for which it throws an error: >>> Fraction("0⁄2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.7/fractions.py", line 138, in __new__ numerator) ValueError: Invalid literal for Fraction: '0⁄2' This seems to come from the (?:/(?P<denom>\d+))?
section of the regex _RATIONAL_FORMAT in fractions.py ---------- components: Library (Lib) messages: 388865 nosy: weightwatchers-carlanderson priority: normal severity: normal status: open title: Fraction only handles regular slashes ("/") and fails with other similar slashes type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 15:39:06 2021 From: report at bugs.python.org (Kodiologist) Date: Tue, 16 Mar 2021 19:39:06 +0000 Subject: [New-bugs-announce] [issue43521] Allow `ast.unparse` to handle NaNs and empty sets Message-ID: <1615923546.68.0.291337558212.issue43521@roundup.psfhosted.org> New submission from Kodiologist : `ast.unparse` throws an error on an empty set, and it produces `nan` for NaN, which isn't a legal Python literal. PR to follow shortly. ---------- messages: 388872 nosy: Kodiologist priority: normal severity: normal status: open title: Allow `ast.unparse` to handle NaNs and empty sets type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 16:07:50 2021 From: report at bugs.python.org (Quentin Pradet) Date: Tue, 16 Mar 2021 20:07:50 +0000 Subject: [New-bugs-announce] [issue43522] SSLContext.hostname_checks_common_name appears to have no effect Message-ID: <1615925270.47.0.272234983925.issue43522@roundup.psfhosted.org> New submission from Quentin Pradet : urllib3 is preparing a v2 with various SSL improvements, such as leaning on the ssl module to match hostnames when possible and reject certificates without a SAN. See https://urllib3.readthedocs.io/en/latest/v2-roadmap.html#modern-security-by-default for more details. For this reason, we want to set `hostname_checks_common_name` to False on Python 3.7+ and OpenSSL 1.1.0+. 
(In other cases, we use a modified version of `ssl.match_hostname` that does not consider common names.) I would expect that setting `hostname_checks_common_name` to False would reject certificates without SANs, but that does not appear to be the case. I used the following Python code: import socket import ssl print(ssl.OPENSSL_VERSION) hostname = 'localhost' context = ssl.create_default_context() context.load_verify_locations("client.pem") context.hostname_checks_common_name = False with socket.create_connection((hostname, 8000)) as sock: with context.wrap_socket(sock, server_hostname=hostname) as ssock: assert "subjectAltName" not in ssock.getpeercert() which prints `OpenSSL 1.1.1i 8 Dec 2020` and does not fail as expected. I'm testing this on macOS 11.2.2 but this currently breaks our test suite on Ubuntu, Windows and macOS, including on Python 3.10, see https://github.com/urllib3/urllib3/runs/2122811894?check_suite_focus=true. To reproduce this, I used trustme (https://trustme.readthedocs.io/en/latest/). I modified the code to not include a SAN at all and ran `gunicorn --keyfile server.key --certfile server.pem app:app`, with app being the Flask quickstart application. I'll try to attach all those files if I manage to do it. What am I missing?
---------- assignee: christian.heimes components: SSL files: no_san_ignored.py messages: 388875 nosy: Quentin.Pradet, christian.heimes priority: normal severity: normal status: open title: SSLContext.hostname_checks_common_name appears to have no effect versions: Python 3.10 Added file: https://bugs.python.org/file49879/no_san_ignored.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 17:48:50 2021 From: report at bugs.python.org (George Sovetov) Date: Tue, 16 Mar 2021 21:48:50 +0000 Subject: [New-bugs-announce] [issue43523] Handling Ctrl+C when waiting on stdin on Windows via winrs Message-ID: <1615931330.13.0.919468061035.issue43523@roundup.psfhosted.org> New submission from George Sovetov : Ctrl+C alone has no effect, but Ctrl+Break works: ``` winrs -r:127.0.0.1:20465 -u:Administrator -p:qweasd123 python -c "import sys;sys.stdin.read(1)" ``` Although, if I press Ctrl+C, type zero or more symbols and then press Enter, KeyboardInterrupt is raised: ``` lalala Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Program Files\Python39\lib\encodings\cp1252.py", line 22, in decode def decode(self, input, final=False): KeyboardInterrupt ^C^C ``` With the following commands, both Ctrl+C and Ctrl+Break work: ``` winrs -r:127.0.0.1:20465 -u:Administrator -p:qweasd123 python -c "import time;time.sleep(10)" "c:\Program Files\Python39\python.exe" -c "import sys; sys.stdin.read(1)" "c:\Program Files\Python39\python.exe" -c "import time;time.sleep(10)" ``` I faced this issue when working with WSMV (Windows remoting API) directly, but I reproduced this with winrs to make sure it's not a bug in my code. I sent the Ctrl+C signal, got a no-error response, then polled the running command. It behaves as if the signal had no effect.
---------- messages: 388890 nosy: sovetov priority: normal severity: normal status: open title: Handling Ctrl+C when waiting on stdin on Windows via winrs versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 16 19:51:15 2021 From: report at bugs.python.org (Awal Garg) Date: Tue, 16 Mar 2021 23:51:15 +0000 Subject: [New-bugs-announce] [issue43524] Addition of peek and peekexactly methods to asyncio.StreamReader Message-ID: <1615938675.3.0.818734595532.issue43524@roundup.psfhosted.org> New submission from Awal Garg : I propose the addition of the following methods to asyncio.StreamReader: > coroutine peek(n=-1) > Same as read, but does not remove the returned data from the internal buffer. > > coroutine peekexactly(n) > Same as readexactly, but does not remove the returned data from the internal buffer. My use case is to multiplex a few protocols over a single TCP socket, for which I need to non-destructively read a few bytes from the socket to decide which parser to hand the stream over to. Thoughts? 
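No such API exists today, so any code here is speculative; one way the proposed `peekexactly()` could be emulated in user code is a wrapper that stashes peeked bytes and serves them back on the next read (the class and method names below are hypothetical):

```python
import asyncio

class PeekableReader:
    """Hypothetical wrapper emulating the proposed peekexactly()."""

    def __init__(self, reader: asyncio.StreamReader):
        self._reader = reader
        self._stash = b""  # bytes peeked but not yet consumed

    async def peekexactly(self, n: int) -> bytes:
        # Pull bytes into the stash until n are available, without consuming.
        while len(self._stash) < n:
            chunk = await self._reader.read(n - len(self._stash))
            if not chunk:
                raise asyncio.IncompleteReadError(self._stash, n)
            self._stash += chunk
        return self._stash[:n]

    async def readexactly(self, n: int) -> bytes:
        # Serve from the stash first, then from the underlying reader.
        data, self._stash = self._stash[:n], self._stash[n:]
        if len(data) < n:
            data += await self._reader.readexactly(n - len(data))
        return data

async def demo() -> None:
    raw = asyncio.StreamReader()
    raw.feed_data(b"GET / HTTP/1.1\r\n")
    raw.feed_eof()
    reader = PeekableReader(raw)
    assert await reader.peekexactly(3) == b"GET"  # look ahead...
    assert await reader.readexactly(3) == b"GET"  # ...data is still there

asyncio.run(demo())
```

This is essentially the multiplexing use case from the report: peek at the first few bytes, pick a parser, then hand over a reader that still contains everything.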
---------- components: asyncio messages: 388895 nosy: asvetlov, awalgarg, yselivanov priority: normal severity: normal status: open title: Addition of peek and peekexactly methods to asyncio.StreamReader type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 17 00:04:55 2021 From: report at bugs.python.org (diegoe) Date: Wed, 17 Mar 2021 04:04:55 +0000 Subject: [New-bugs-announce] [issue43525] pathlib: Highlight operator behavior with anchored paths Message-ID: <1615953895.06.0.527756201685.issue43525@roundup.psfhosted.org> New submission from diegoe : In the '/' operator documentation for `pathlib`, the behavior for anchored paths is not described: https://docs.python.org/3/library/pathlib.html#operators The behavior (prefer the second/right-hand root/anchor) is only explained in the `PurePath` class: https://docs.python.org/3/library/pathlib.html#pathlib.PurePath I ran into this while helping migrate a code base that was using "naive" concatenation of strings, so this: ``` PROJECT_DIR = ROOT_DIR + "/project-name" ``` was migrated to: ``` PROJECT_DIR = ROOT_DIR / "/project-name" ``` Note that, of course, we missed the leading "/". Although the docs _do_ describe the behavior somewhere else, I believe it's worth being redundant in the operator section. I believe it's a reasonable mistake to warn new users against, especially since "naive" concatenation is a common "ugly" pattern that many would be migrating from. Plus, a leading "/" is easy to miss, which would only compound the confusion if you are seeing your path "omit the (left-hand) Path object" (because the anchored string took precedence).
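A minimal sketch of the operator behavior described above (the paths are illustrative, not the reporter's):

```python
from pathlib import PurePosixPath

ROOT_DIR = PurePosixPath("/srv/app")    # hypothetical project root
# A relative right-hand operand is appended, as most users expect:
joined = ROOT_DIR / "project-name"      # PurePosixPath('/srv/app/project-name')
# An *anchored* right-hand operand replaces everything before it:
clobbered = ROOT_DIR / "/project-name"  # PurePosixPath('/project-name')
```

The second result is exactly the "omit the left-hand Path object" surprise from the report: the anchored string wins, so the stray leading "/" silently discards the root.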
---------- assignee: docs at python components: Documentation messages: 388904 nosy: diegoe, docs at python priority: normal severity: normal status: open title: pathlib: Highlight operator behavior with anchored paths versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 17 05:03:05 2021 From: report at bugs.python.org (Xavier Morel) Date: Wed, 17 Mar 2021 09:03:05 +0000 Subject: [New-bugs-announce] [issue43526] Programmatic management of BytesWarning doesn't work for native triggers. Message-ID: <1615971785.71.0.488992120219.issue43526@roundup.psfhosted.org> New submission from Xavier Morel : When setting a `BytesWarning` filter programmatically (via the warnings API), "native" triggers of the warning fail to fire, even though the resulting `warnings.filters` value matches what's obtained via `python -b` and an explicit `warnings.warn` call does trigger: import warnings warnings.simplefilter('default', category=BytesWarning) str(b'') warnings.warn('test', category=BytesWarning) If run using `python`, this will print: test.py:4: BytesWarning: test warnings.warn('test', category=BytesWarning) There is no warning for the string-ification of the bytes instance. If run using `python -b`, the behaviour is as one would expect: test.py:3: BytesWarning: str() on a bytes instance str(b'') test.py:4: BytesWarning: test warnings.warn('test', category=BytesWarning) Inspecting `warnings.filters` shows no difference in their contents, in both cases it is: [('default', None, <class 'BytesWarning'>, None, 0), ('default', None, <class 'DeprecationWarning'>, '__main__', 0), ('ignore', None, <class 'DeprecationWarning'>, None, 0), ('ignore', None, <class 'PendingDeprecationWarning'>, None, 0), ('ignore', None, <class 'ImportWarning'>, None, 0), ('ignore', None, <class 'ResourceWarning'>, None, 0)] (in Python 3.9). The warning module's own suggestion: import sys if not sys.warnoptions: import warnings warnings.simplefilter("default") # Change the filter in this process also fails to enable BytesWarning.
If this is intended behaviour, which seems to be the case according to ncoghlan's comment https://bugs.python.org/issue32230#msg307721, it should be clearly documented, as it's rather frustrating. ---------- components: Library (Lib) messages: 388912 nosy: xmorel priority: normal severity: normal status: open title: Programmatic management of BytesWarning doesn't work for native triggers. type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 17 05:39:37 2021 From: report at bugs.python.org (Xavier Morel) Date: Wed, 17 Mar 2021 09:39:37 +0000 Subject: [New-bugs-announce] [issue43527] Support full stack trace extraction in warnings. Message-ID: <1615973977.17.0.0612429597045.issue43527@roundup.psfhosted.org> New submission from Xavier Morel : When triggering warnings, it's possible to pass in a `stacklevel` in order to point to a more informative cause than the `warnings.warn` call. For instance `stacklevel=2` is a common one for DeprecationWarning in order to mark the call itself as deprecated in the caller's codebase. The problem with this is that it's not transitive, so when a dependency triggers a warning it can be hard to know where that comes from in the codebase (at least without `-Werror` which can prevent reaching the interesting warning entirely), and whether this is an issue in the codebase (e.g. passing bytes where the library really works in terms of strings) or whether it would be possible to work around the warning by using some other API. In that case, the ability to show a full stack trace from the `stacklevel` down is very useful to diagnose such issues. Not quite sure how it would be managed though: I'd think this should be part of the warnings filter information, but the `stacklevel` currently isn't stored there, and it might be risky to extend the warnings filter with a 6th field). 
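For context, the existing single-frame mechanism looks like this (the `deprecated_api` function is invented for illustration): `stacklevel=2` attributes the warning to the immediate caller, and only that one frame is recorded, which is the transitivity gap described above:

```python
import warnings

def deprecated_api():
    # stacklevel=2 attributes the warning to our *caller's* line,
    # not to this warnings.warn() call itself.
    warnings.warn("use new_api() instead", DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecated_api()

# The recorded warning carries a single filename/lineno pair (the caller's);
# no deeper call chain is preserved.
print(caught[0].category is DeprecationWarning)  # -> True
```

When the warning is raised several layers inside a dependency, that single frame is often not enough to tell which line of your own code ultimately triggered it.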
---------- components: Library (Lib) messages: 388914 nosy: xmorel priority: normal severity: normal status: open title: Support full stack trace extraction in warnings. type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 17 08:43:44 2021 From: report at bugs.python.org (Ivan Kravets) Date: Wed, 17 Mar 2021 12:43:44 +0000 Subject: [New-bugs-announce] [issue43528] "connect_read_pipe" raises errors on Windows for STDIN Message-ID: <1615985024.13.0.408410632002.issue43528@roundup.psfhosted.org> New submission from Ivan Kravets : Hi there, It seems that "connect_read_pipe" is not implemented in ProactorEventLoop. Does it make sense to update docs in these places? - https://docs.python.org/3/library/asyncio-platforms.html#windows - https://docs.python.org/3/library/asyncio-eventloop.html#working-with-pipes Or, this is a bug? # The code to reproduce ``` import asyncio import sys async def read_stdin(): reader = asyncio.StreamReader() protocol = asyncio.StreamReaderProtocol(reader) await asyncio.get_running_loop().connect_read_pipe(lambda: protocol, sys.stdin) while True: line = await reader.readline() print("stdin > ", line) async def main(): task = asyncio.create_task(read_stdin()) await asyncio.sleep(5) task.cancel() if __name__ == "__main__": asyncio.run(main()) ``` P.S: The "loop.add_reader()" raises "NotImplementedError" which is clear according to the docs. Thanks in advance! 
# Log ``` C:\Users\USER>.platformio\python3\python.exe test.py Exception in callback _ProactorReadPipeTransport._loop_reading() handle: Traceback (most recent call last): File "C:\Users\USER\.platformio\python3\lib\asyncio\proactor_events.py", line 299, in _loop_reading self._read_fut = self._loop._proactor.recv(self._sock, 32768) File "C:\Users\USER\.platformio\python3\lib\asyncio\windows_events.py", line 445, in recv self._register_with_iocp(conn) File "C:\Users\USER\.platformio\python3\lib\asyncio\windows_events.py", line 718, in _register_with_iocp _overlapped.CreateIoCompletionPort(obj.fileno(), self._iocp, 0, 0) OSError: [WinError 6] The handle is invalid During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\USER\.platformio\python3\lib\asyncio\events.py", line 80, in _run self._context.run(self._callback, *self._args) File "C:\Users\USER\.platformio\python3\lib\asyncio\proactor_events.py", line 309, in _loop_reading self._fatal_error(exc, 'Fatal read error on pipe transport') File "C:\Users\USER\.platformio\python3\lib\asyncio\proactor_events.py", line 131, in _fatal_error self._force_close(exc) File "C:\Users\USER\.platformio\python3\lib\asyncio\proactor_events.py", line 134, in _force_close if self._empty_waiter is not None and not self._empty_waiter.done(): AttributeError: '_ProactorReadPipeTransport' object has no attribute '_empty_waiter' Exception ignored in: Traceback (most recent call last): File "C:\Users\USER\.platformio\python3\lib\asyncio\proactor_events.py", line 116, in __del__ self.close() File "C:\Users\USER\.platformio\python3\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\USER\.platformio\python3\lib\asyncio\base_events.py", line 746, in call_soon self._check_closed() File "C:\Users\USER\.platformio\python3\lib\asyncio\base_events.py", line 510, in _check_closed raise RuntimeError('Event loop is closed') 
RuntimeError: Event loop is closed
```

---------- components: asyncio messages: 388919 nosy: asvetlov, ivankravets, yselivanov priority: normal severity: normal status: open title: "connect_read_pipe" raises errors on Windows for STDIN versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Mar 17 11:23:36 2021 From: report at bugs.python.org (Eric Frederich) Date: Wed, 17 Mar 2021 15:23:36 +0000 Subject: [New-bugs-announce] [issue43529] pathlib.Path.glob causes OSError encountering symlinks to long filenames Message-ID: <1615994616.04.0.762651186772.issue43529@roundup.psfhosted.org> New submission from Eric Frederich : Calling pathlib.Path.glob("**/*") on a directory containing a symlink which resolves to a very long filename causes OSError. This is completely avoidable since symlinks are not followed anyway.

In pathlib.py, the _RecursiveWildcardSelector has a method _iterate_directories which first calls entry.is_dir() prior to excluding based on entry.is_symlink(). It's the entry.is_dir() which is failing. If the check for entry.is_symlink() were to happen first, this error would be avoided.

It's worth noting that on Linux "ls -l bad_link" works fine. Also "find /some/path/containing/bad/link" works fine. You do get an error, however, when running "ls bad_link". I believe Python's glob() should act like "find" on Linux and not fail. Because it is explicitly ignoring symlinks anyway, it has no business calling is_dir() on a symlink.

I have attached a file which reproduces this problem. It's meant to be run inside of an empty directory.
---------- files: uhoh.py messages: 388927 nosy: eric.frederich priority: normal severity: normal status: open title: pathlib.Path.glob causes OSError encountering symlinks to long filenames Added file: https://bugs.python.org/file49884/uhoh.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Mar 17 11:45:56 2021 From: report at bugs.python.org (tzing) Date: Wed, 17 Mar 2021 15:45:56 +0000 Subject: [New-bugs-announce] [issue43530] email.parser.BytesParser failed to parse mail when it is with BOM Message-ID: <1615995956.48.0.229285559191.issue43530@roundup.psfhosted.org> New submission from tzing : Python's builtin `email.parser.BytesParser` could not properly parse the message when the bytes start with a BOM. Not 100% sure, but this issue seems to be caused by `FeedParser._parsegen` failing to match any of the header lines after the data is decoded.

Steps to reproduce:

1. get an email sample, any from https://github.com/python/cpython/tree/master/Lib/test/test_email/data (I use msg_01.txt in the following code)
2. re-encode the mail sample to some encoding with a BOM
3.
use `email.parser.BytesParser` to parse it

```py
import email

with open('msg_01.txt', 'rb') as fp:
    msg = email.parser.BytesParser().parse(fp)

print(msg.get('Message-ID'))
```

Expect output `<15090.61304.110929.45684@aaa.zzz.org>`, got `None`

---------- components: Library (Lib) messages: 388929 nosy: tzing priority: normal severity: normal status: open title: email.parser.BytesParser failed to parse mail when it is with BOM type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Mar 17 12:06:02 2021 From: report at bugs.python.org (Adrian LeDeaux) Date: Wed, 17 Mar 2021 16:06:02 +0000 Subject: [New-bugs-announce] [issue43531] Turtle module does not work Message-ID: <1615997162.13.0.111429479171.issue43531@roundup.psfhosted.org> New submission from Adrian LeDeaux : So when I try to do the command "import turtle" all I get back is:

Traceback (most recent call last):
  File "", line 1, in
    import turtle
  File "/Users/Virsatech/Documents/turtle.py", line 2, in
    t = turtle.Pen()
AttributeError: partially initialized module 'turtle' has no attribute 'Pen' (most likely due to a circular import)

that error exactly. And I have tried many times. Anyone know how to fix?

---------- assignee: terry.reedy components: IDLE messages: 388931 nosy: aledeaux, terry.reedy priority: normal severity: normal status: open title: Turtle module does not work type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Mar 17 13:39:21 2021 From: report at bugs.python.org (Eric V. Smith) Date: Wed, 17 Mar 2021 17:39:21 +0000 Subject: [New-bugs-announce] [issue43532] Add keyword-only fields to dataclasses Message-ID: <1616002761.92.0.652624260802.issue43532@roundup.psfhosted.org> New submission from Eric V.
Smith : The idea is that a keyword-only field becomes a keyword-only argument to __init__(). For the proposal and a discussion, see https://mail.python.org/archives/list/python-ideas@python.org/message/FI6KS4O67XDEIDYOFWCXMDLDOSCNSEYG/

The @dataclass decorator will get a new parameter, kw_only, which defaults to False. If kw_only=True, all fields in the dataclass will be by default keyword-only. In addition, field() will have a new kw_only parameter. If true, the field will be keyword-only. If false, it will not be keyword-only. If unspecified, it will use the value of dataclass's kw_only parameter.

In addition, a module-level variable KW_ONLY will be added. If a field has this type, then all fields after it will default to kw_only=True. The field is otherwise completely ignored.

Examples:

    @dataclasses.dataclass
    class A:
        a: Any = field(kw_only=True)

Will have __init__(self, *, a)

    @dataclasses.dataclass(kw_only=True)
    class B:
        a: Any
        b: Any

Will have __init__(self, *, a, b)

    @dataclasses.dataclass
    class C:
        a: Any
        _: dataclasses.KW_ONLY
        b: Any
        c: Any

Will have __init__(self, a, *, b, c)

If any non-keyword-only parameters are present, they will be moved before all keyword-only parameters, only for the generated __init__. All other generated methods (__repr__, __lt__, etc.) will keep fields in the declared order, which is the case in versions 3.9 and earlier.

    @dataclasses.dataclass
    class D:
        a: Any
        b: Any = field(kw_only=True)
        c: Any

Will have __init__(self, a, c, *, b)

PR to follow.
---------- assignee: eric.smith components: Library (Lib) messages: 388949 nosy: eric.smith priority: normal severity: normal status: open title: Add keyword-only fields to dataclasses type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Mar 17 20:49:33 2021 From: report at bugs.python.org (Ran Chen) Date: Thu, 18 Mar 2021 00:49:33 +0000 Subject: [New-bugs-announce] [issue43533] Exception and contextmanager in __getattr__ causes reference cycle Message-ID: <1616028573.19.0.764954620341.issue43533@roundup.psfhosted.org> New submission from Ran Chen : If __getattr__ raises an exception within a contextlib context manager, it creates a reference cycle and prevents the frame from being garbage collected. This only happens if the exception is raised inside a context manager inside __getattr__. It doesn't happen if there's no context manager.

Repro:

```
import contextlib
import gc

@contextlib.contextmanager
def ct():
    yield

class A(object):
    def __getattr__(self, name):
        with ct():
            raise AttributeError()

def f():
    a = A()
    hasattr(a, 'notexist')

gc.set_debug(gc.DEBUG_LEAK)
f()
gc.collect()
```

It also doesn't happen if we catch the exception outside of the context manager and re-raise it:

```
def __getattr__(self, name):
    try:
        with ct():
            raise AttributeError()
    except:
        raise
```

---------- components: Library (Lib) messages: 388977 nosy: crccw priority: normal severity: normal status: open title: Exception and contextmanager in __getattr__ causes reference cycle type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Mar 17 21:12:16 2021 From: report at bugs.python.org (Chris Winkler) Date: Thu, 18 Mar 2021 01:12:16 +0000 Subject: [New-bugs-announce] [issue43534] turtle.textinput window is not transient Message-ID:
<1616029936.62.0.714792032874.issue43534@roundup.psfhosted.org> New submission from Chris Winkler : When `turtle.textinput` is called in Python 3.9.2, the resulting dialog window is not marked as transient. This is not a problem in 3.9.1. The offending change seems to come from bpo-42630. Specifically, `SimpleDialog.__init__` is being passed `parent=None`, and because of this `self.transient(parent)` is not being called. A minimal program to reproduce the bug is attached. I'm happy to submit a pull request or something if it would help, but I don't know whether it's more correct to replace `parent` with `master` in the aforementioned if statement or something else.

---------- components: Tkinter files: textinput_test.py messages: 388980 nosy: quid256 priority: normal severity: normal status: open title: turtle.textinput window is not transient type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49885/textinput_test.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Mar 17 22:29:06 2021 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 18 Mar 2021 02:29:06 +0000 Subject: [New-bugs-announce] [issue43535] Make str.join auto-convert inputs to strings. Message-ID: <1616034546.27.0.0632217994438.issue43535@roundup.psfhosted.org> New submission from Raymond Hettinger : Rather than just erroring-out, it would be nice if str.join converted inputs to strings when needed.

Currently:

    data = [10, 20, 30, 40, 50]
    s = ', '.join(map(str, data))

Proposed:

    s = ', '.join(data)

That would simplify a common idiom. That is a nice win for beginners and it makes code more readable. The join() method is unfriendly in a number of ways. This would make it a bit nicer. There is likely to be a performance win as well.
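A tiny helper can emulate the proposed semantics today (a sketch of the behavior, not the suggested C implementation; the helper name is made up):

```python
def join_auto(sep, iterable):
    # Convert non-str items with str(), as the proposal would do implicitly.
    return sep.join(x if isinstance(x, str) else str(x) for x in iterable)

data = [10, 20, 30, 40, 50]
assert join_auto(', ', data) == '10, 20, 30, 40, 50'
```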
The existing idiom with map() roughly runs like this:

* Get an iterator over: map(str, data)
* Without length knowledge, build up a list of strings, periodically resizing and recopying data (1st pass)
* Loop over the list of strings to compute the combined size (2nd pass)
* Allocate a buffer for the target size
* Loop over the list of strings (3rd pass), copying each into the buffer, and wrap the result in a string object.

But, it could run like this:

* Use len(data) or a length-hint to presize the list of strings.
* Loop over the data, converting each input to a string if needed, keeping a running total of the target size, and storing in the pre-sized list of strings (all this in a single 1st pass)
* Allocate a buffer for the target size
* Loop over the list of strings (2nd pass), copying each into the buffer, and wrap the result in a string object.

AFAICT, the proposal is mostly backwards compatible; the only change is that code that currently errors-out will succeed.

For bytes.join() and bytearray.join(), the only auto-conversion that makes sense is from ints to bytes, so that you could write:

    b' '.join(data)

instead of the current:

    b' '.join([bytes([x]) for x in data])

---------- components: Interpreter Core messages: 388983 nosy: pablogsal, rhettinger priority: normal severity: normal status: open title: Make str.join auto-convert inputs to strings.
type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 00:08:10 2021 From: report at bugs.python.org (Thermi) Date: Thu, 18 Mar 2021 04:08:10 +0000 Subject: [New-bugs-announce] [issue43536] 3.9.2 --without-pymalloc --with-pydebug --with-valgrind: test failed: test_posix Message-ID: <1616040490.74.0.308907840702.issue43536@roundup.psfhosted.org> New submission from Thermi :

----------------------------------------------------------------------
Ran 210 tests in 0.950s

OK (skipped=26)

== Tests result: FAILURE ==

412 tests OK.

1 test failed:
    test_posix

10 tests skipped:
    test_devpoll test_gdb test_kqueue test_msilib test_ossaudiodev
    test_startfile test_winconsoleio test_winreg test_winsound test_zipfile64

Total duration: 1 hour 3 min
Tests result: FAILURE

test test_posix failed
0:38:00 load avg: 1.89 [265/423/1] test_posixpath -- test_posix failed

Possibly related:

test_setscheduler_with_policy (test.test_posix.TestPosixSpawnP) ... ERROR
test_setscheduler_with_policy (test.test_posix.TestPosixSpawn) ...
ERROR

Distribution: Arch Linux
Linux 5.11.6-arch1-1
gcc 10.2.0-6
glibc 2.33-4
valgrind 3.16.1-4

---------- components: Tests files: config.log messages: 388986 nosy: Thermi priority: normal severity: normal status: open title: 3.9.2 --without-pymalloc --with-pydebug --with-valgrind: test failed: test_posix type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49887/config.log _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 02:48:08 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Thu, 18 Mar 2021 06:48:08 +0000 Subject: [New-bugs-announce] [issue43537] Interpreter crashes when handling long text in input() Message-ID: <1616050088.79.0.477395111242.issue43537@roundup.psfhosted.org> New submission from Xinmeng Xia : When the argument of input() is very long text, the interpreter crashes. This bug can be reproduced on Python 3.9.2 and Python 2.7.18 on Ubuntu with GCC 7.5.0. I tried to reproduce this bug on other versions of Python and other operating systems, but failed. This bug seems to have a connection with the version of GCC.

Python 3.9.2 (default, Mar 12 2021, 15:08:35)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> input([1,2]*10000)
*** Error in `/home/xxm/Desktop/apifuzz/Python-3.9.2/python': realloc(): invalid next size: 0x000000000135fd40 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777f5)[0x7f714431b7f5]
/lib/x86_64-linux-gnu/libc.so.6(+0x834da)[0x7f71443274da]
/lib/x86_64-linux-gnu/libc.so.6(realloc+0x199)[0x7f71443288a9]
/lib/x86_64-linux-gnu/libreadline.so.6(xrealloc+0xe)[0x7f71446a1ffe]
/lib/x86_64-linux-gnu/libreadline.so.6(rl_redisplay+0x125f)[0x7f714469451f]
/lib/x86_64-linux-gnu/libreadline.so.6(readline_internal_setup+0xb0)[0x7f7144681340]
/lib/x86_64-linux-gnu/libreadline.so.6(+0x2a4ac)[0x7f71446984ac]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python[0x5d60b2]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(PyOS_Readline+0x116)[0x5da536]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python[0x648495]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python[0x613f26]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(_PyEval_EvalFrameDefault+0x54e2)[0x4267a2]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python[0x4fa3e9]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(PyEval_EvalCode+0x36)[0x4fa746]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python[0x543adf]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python[0x546d82]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(PyRun_InteractiveLoopFlags+0x8e)[0x54704e]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(PyRun_AnyFileExFlags+0x3c)[0x5478fc]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(Py_RunMain+0x8d7)[0x42b1e7]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(Py_BytesMain+0x56)[0x42b586]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f71442c4840]
/home/xxm/Desktop/apifuzz/Python-3.9.2/python(_start+0x29)[0x42a289]
======= Memory map: ========
00400000-00762000 r-xp 00000000 08:07 7740578 /home/xxm/Desktop/apifuzz/Python-3.9.2/python
00961000-00962000 r--p 00361000 08:07 7740578 /home/xxm/Desktop/apifuzz/Python-3.9.2/python
00962000-0099a000 rw-p 00362000 08:07 7740578 /home/xxm/Desktop/apifuzz/Python-3.9.2/python
0099a000-009be000 rw-p 00000000 00:00 0
012dc000-013ce000 rw-p 00000000 00:00 0 [heap]
7f713c000000-7f713c021000 rw-p 00000000 00:00 0
7f713c021000-7f7140000000 ---p 00000000 00:00 0
7f71439b5000-7f71439cc000 r-xp 00000000 08:07 1966109 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f71439cc000-7f7143bcb000 ---p 00017000 08:07 1966109 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f7143bcb000-7f7143bcc000 r--p 00016000 08:07 1966109 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f7143bcc000-7f7143bcd000 rw-p 00017000 08:07 1966109 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f7143bf0000-7f714407b000 r--p 00000000 08:07 4326136 /usr/lib/locale/locale-archive
7f714407b000-7f71440a0000 r-xp 00000000 08:07 1970777 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f71440a0000-7f714429f000 ---p 00025000 08:07 1970777 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f714429f000-7f71442a3000 r--p 00024000 08:07 1970777 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f71442a3000-7f71442a4000 rw-p 00028000 08:07 1970777 /lib/x86_64-linux-gnu/libtinfo.so.5.9
7f71442a4000-7f7144464000 r-xp 00000000 08:07 1966308 /lib/x86_64-linux-gnu/libc-2.23.so
7f7144464000-7f7144664000 ---p 001c0000 08:07 1966308 /lib/x86_64-linux-gnu/libc-2.23.so
7f7144664000-7f7144668000 r--p 001c0000 08:07 1966308 /lib/x86_64-linux-gnu/libc-2.23.so
7f7144668000-7f714466a000 rw-p 001c4000 08:07 1966308 /lib/x86_64-linux-gnu/libc-2.23.so
7f714466a000-7f714466e000 rw-p 00000000 00:00 0
7f714466e000-7f71446ab000 r-xp 00000000 08:07 1970756 /lib/x86_64-linux-gnu/libreadline.so.6.3
7f71446ab000-7f71448ab000 ---p 0003d000 08:07 1970756 /lib/x86_64-linux-gnu/libreadline.so.6.3
7f71448ab000-7f71448ad000 r--p 0003d000 08:07 1970756 /lib/x86_64-linux-gnu/libreadline.so.6.3
7f71448ad000-7f71448b3000 rw-p 0003f000 08:07 1970756 /lib/x86_64-linux-gnu/libreadline.so.6.3
7f71448b3000-7f71448b4000 rw-p 00000000 00:00 0
7f71448b4000-7f71449bc000 r-xp 00000000 08:07 1966312 /lib/x86_64-linux-gnu/libm-2.23.so
7f71449bc000-7f7144bbb000 ---p 00108000 08:07 1966312 /lib/x86_64-linux-gnu/libm-2.23.so
7f7144bbb000-7f7144bbc000 r--p 00107000 08:07 1966312 /lib/x86_64-linux-gnu/libm-2.23.so
7f7144bbc000-7f7144bbd000 rw-p 00108000 08:07 1966312 /lib/x86_64-linux-gnu/libm-2.23.so
7f7144bbd000-7f7144bbf000 r-xp 00000000 08:07 1966307 /lib/x86_64-linux-gnu/libutil-2.23.so
7f7144bbf000-7f7144dbe000 ---p 00002000 08:07 1966307 /lib/x86_64-linux-gnu/libutil-2.23.so
7f7144dbe000-7f7144dbf000 r--p 00001000 08:07 1966307 /lib/x86_64-linux-gnu/libutil-2.23.so
7f7144dbf000-7f7144dc0000 rw-p 00002000 08:07 1966307 /lib/x86_64-linux-gnu/libutil-2.23.so
7f7144dc0000-7f7144dc3000 r-xp 00000000 08:07 1966306 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7144dc3000-7f7144fc2000 ---p 00003000 08:07 1966306 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7144fc2000-7f7144fc3000 r--p 00002000 08:07 1966306 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7144fc3000-7f7144fc4000 rw-p 00003000 08:07 1966306 /lib/x86_64-linux-gnu/libdl-2.23.so
7f7144fc4000-7f7144fdc000 r-xp 00000000 08:07 1966309 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f7144fdc000-7f71451db000 ---p 00018000 08:07 1966309 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f71451db000-7f71451dc000 r--p 00017000 08:07 1966309 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f71451dc000-7f71451dd000 rw-p 00018000 08:07 1966309 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f71451dd000-7f71451e1000 rw-p 00000000 00:00 0
7f71451e1000-7f7145207000 r-xp 00000000 08:07 1966319 /lib/x86_64-linux-gnu/ld-2.23.so
7f7145210000-7f71453e3000 rw-p 00000000 00:00 0
7f71453fe000-7f71453ff000 rw-p 00000000 00:00 0
7f71453ff000-7f7145406000 r--s 00000000 08:07 4589769 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache
7f7145406000-7f7145407000 r--p 00025000 08:07 1966319 /lib/x86_64-linux-gnu/ld-2.23.so
7f7145407000-7f7145408000 rw-p 00026000 08:07 1966319 /lib/x86_64-linux-gnu/ld-2.23.so
7f7145408000-7f7145409000 rw-p 00000000 00:00 0
7ffefb5a0000-7ffefb5c1000 rw-p 00000000 00:00 0 [stack]
7ffefb5de000-7ffefb5e1000 r--p 00000000 00:00 0 [vvar]
7ffefb5e1000-7ffefb5e3000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
Aborted (core dumped)

---------- components: Interpreter Core messages: 388990 nosy: xxm priority: normal severity: normal status: open title: Interpreter crashes when handling long text in input() type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 03:23:53 2021 From: report at bugs.python.org (Eryk Sun) Date: Thu, 18 Mar 2021 07:23:53 +0000 Subject: [New-bugs-announce] [issue43538] [Windows] support args and cwd in os.startfile() Message-ID: <1616052233.51.0.171946221972.issue43538@roundup.psfhosted.org> New submission from Eryk Sun : bpo-8232 has a patch to add an `arguments` parameter to os.startfile(). This improvement is needlessly tied to that issue. It's useful in general as a safer way to execute applications and scripts compared to using subprocess.Popen() with shell=True. It also enables passing arguments to applications and scripts when using the "runas" operation (prompts with a UAC dialog) and "runasuser" operation (prompts with a credential dialog). The latter operations are supported by default for binary executables and batch scripts in Windows 10, and they can be implemented by the progid of any file type. Setting the working directory with a cwd parameter is not as generally useful, but it's not entirely useless and simple to add at the same time when adding the `args` parameter.
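Under the proposal, an elevated launch could look roughly like the sketch below. It is Windows-only, the `arguments`/`cwd` keyword names are taken from the patch, and the helper name is made up, so treat it as an illustration rather than a settled API:

```python
import sys

def run_elevated(path, arguments="", cwd=None):
    # Hypothetical use of the proposed os.startfile() parameters;
    # the "runas" operation makes the shell show the UAC consent prompt.
    if sys.platform != "win32":
        raise NotImplementedError("os.startfile() is Windows-only")
    import os
    os.startfile(path, "runas", arguments=arguments, cwd=cwd)
```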
---------- components: Extension Modules, Windows messages: 388991 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: [Windows] support args and cwd in os.startfile() type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 04:27:23 2021 From: report at bugs.python.org (STINNER Victor) Date: Thu, 18 Mar 2021 08:27:23 +0000 Subject: [New-bugs-announce] [issue43539] test_asyncio: test_sendfile_close_peer_in_the_middle_of_receiving() fails randomly Message-ID: <1616056043.89.0.64862359152.issue43539@roundup.psfhosted.org> New submission from STINNER Victor : Seen on the Windows x64 job of GitHub Actions: https://github.com/python/cpython/pull/24913/checks?check_run_id=2137800313

======================================================================
FAIL: test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.ProactorEventLoopTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\a\cpython\cpython\lib\test\test_asyncio\test_sendfile.py", line 458, in test_sendfile_close_peer_in_the_middle_of_receiving
    self.run_loop(
AssertionError: ConnectionError not raised

(...)

0:12:31 load avg: 0.39 Re-running test_asyncio in verbose mode

(...)
======================================================================
FAIL: test_sendfile_close_peer_in_the_middle_of_receiving (test.test_asyncio.test_sendfile.ProactorEventLoopTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\a\cpython\cpython\lib\test\test_asyncio\test_sendfile.py", line 458, in test_sendfile_close_peer_in_the_middle_of_receiving
    self.run_loop(
AssertionError: ConnectionError not raised

---------- components: Tests, asyncio messages: 388997 nosy: asvetlov, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: test_sendfile_close_peer_in_the_middle_of_receiving() fails randomly versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 06:19:33 2021 From: report at bugs.python.org (STINNER Victor) Date: Thu, 18 Mar 2021 10:19:33 +0000 Subject: [New-bugs-announce] [issue43540] importlib: Document how to replace load_module() in What's New in Python 3.10 Message-ID: <1616062773.63.0.458155955758.issue43540@roundup.psfhosted.org> New submission from STINNER Victor : The load_module() method of importlib loaders is deprecated, which causes test failures in multiple projects. It is not easy to guess how to replace it.
Examples:

* pkg_resources fix adding create_module() and exec_module() methods: https://github.com/pypa/setuptools/commit/6ad2fb0b78d11e22672f56ef9d65d13ebd3475a9
* pkg_resources fix replacing importlib.load_module() function call (not loader methods) with importlib.import_module(): https://github.com/pypa/setuptools/commit/a54d9e6b30c6da0542698144d2ff149ae7cadc9a

Cython uses this code:

    if sys.version_info[:2] < (3, 3):
        import imp
        def load_dynamic(name, module_path):
            return imp.load_dynamic(name, module_path)
    else:
        from importlib.machinery import ExtensionFileLoader
        def load_dynamic(name, module_path):
            return ExtensionFileLoader(name, module_path).load_module()

Fixed Cython code:

    if sys.version_info < (3, 5):
        import imp
        def load_dynamic(name, module_path):
            return imp.load_dynamic(name, module_path)
    else:
        import importlib.util as _importlib_util
        def load_dynamic(name, module_path):
            spec = _importlib_util.spec_from_file_location(name, module_path)
            module = _importlib_util.module_from_spec(spec)
            # sys.modules[name] = module
            spec.loader.exec_module(module)
            return module

---------- assignee: docs at python components: Documentation, Library (Lib) messages: 389007 nosy: brett.cannon, docs at python, vstinner priority: normal severity: normal status: open title: importlib: Document how to replace load_module() in What's New in Python 3.10 versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 07:11:14 2021 From: report at bugs.python.org (STINNER Victor) Date: Thu, 18 Mar 2021 11:11:14 +0000 Subject: [New-bugs-announce] [issue43541] PyEval_EvalCodeEx() can no longer be called with code which has (CO_NEWLOCALS | CO_OPTIMIZED) flags Message-ID: <1616065874.01.0.576723621403.issue43541@roundup.psfhosted.org> New submission from STINNER Victor : Cython generates a __Pyx_PyFunction_FastCallDict() function which calls PyEval_EvalCodeEx(). With Python 3.9, it worked well.
With Python 3.10 in debug mode, it fails with an assertion error:

    python3.10: Python/ceval.c:5148: PyEval_EvalCodeEx: Assertion `(((PyCodeObject *)_co)->co_flags & (CO_NEWLOCALS | CO_OPTIMIZED)) == 0' failed.

With Python 3.10 in release mode, it does crash.

Context of the failed assertion:

* Assertion added recently to CPython 3.10 by python/cpython@0332e56
* The code object flags = (CO_NEWLOCALS | CO_OPTIMIZED | CO_NOFREE)
* Code co_argcount = 2
* Code co_kwonlyargcount = 0
* Cython __Pyx_PyFunction_FastCallDict() called with: nargs=1 and kwargs=NULL

See the Cython issue for a reproducer: https://github.com/cython/cython/issues/4025#issuecomment-801829541

In Python 3.9, _PyFunction_Vectorcall() has the following fast-path:

    if (co->co_kwonlyargcount == 0 && nkwargs == 0 &&
        (co->co_flags & ~PyCF_MASK) == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE))
    {
        if (argdefs == NULL && co->co_argcount == nargs) {
            return function_code_fastcall(tstate, co, stack, nargs, globals);
        }
        else if (nargs == 0 && argdefs != NULL
                 && co->co_argcount == PyTuple_GET_SIZE(argdefs)) {
            /* function called with no arguments, but all parameters have
               a default value: use default values as arguments .*/
            stack = _PyTuple_ITEMS(argdefs);
            return function_code_fastcall(tstate, co, stack,
                                          PyTuple_GET_SIZE(argdefs), globals);
        }
    }

When the bug occurs, __Pyx_PyFunction_FastCallDict() doesn't take the fast-path because nargs < co_argcount (1 < 2).

In Python 3.10, _PyFunction_Vectorcall() is very different:

    if (((PyCodeObject *)f->fc_code)->co_flags & CO_OPTIMIZED) {
        return _PyEval_Vector(tstate, f, NULL, stack, nargs, kwnames);
    }
    else {
        return _PyEval_Vector(tstate, f, f->fc_globals, stack, nargs, kwnames);
    }

PyEval_EvalCodeEx() must not crash if the code object has (CO_NEWLOCALS | CO_OPTIMIZED | CO_NOFREE) flags.
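For reference, that flag combination is what any plain Python function's code object carries, and it can be checked from pure Python (a small illustration, not the Cython reproducer):

```python
import inspect

def add(a, b):          # co_argcount == 2, like the failing code object
    return a + b

flags = add.__code__.co_flags
assert flags & inspect.CO_OPTIMIZED
assert flags & inspect.CO_NEWLOCALS
```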
---------- components: C API messages: 389009 nosy: Mark.Shannon, pablogsal, vstinner priority: normal severity: normal status: open title: PyEval_EvalCodeEx() can no longer be called with code which has (CO_NEWLOCALS | CO_OPTIMIZED) flags versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 18 07:34:06 2021 From: report at bugs.python.org (Ilya) Date: Thu, 18 Mar 2021 11:34:06 +0000 Subject: [New-bugs-announce] [issue43542] Add image/heif(heic) to list of media types in mimetypes.py Message-ID: <1616067246.11.0.218850830146.issue43542@roundup.psfhosted.org> New submission from Ilya : Add HEIF and HEIC format to list of media types. It has IANA registration. IANA: https://www.iana.org/assignments/media-types/image/heic HEIF Github: https://github.com/nokiatech/heif ---------- components: Library (Lib) messages: 389012 nosy: martbln priority: normal severity: normal status: open title: Add image/heif(heic) to list of media types in mimetypes.py type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 18 11:39:47 2021 From: report at bugs.python.org (=?utf-8?b?0JnQvtGC0LAgOjM=?=) Date: Thu, 18 Mar 2021 15:39:47 +0000 Subject: [New-bugs-announce] [issue43543] stupid download Message-ID: <1616081987.44.0.777033345777.issue43543@roundup.psfhosted.org> New submission from ???? :3 : fucking retards, how to new or default people can download old versions of python? Maybe you can made better and easy desing? 
Stupud idiots

---------- components: C API messages: 389022 nosy: yotabestww priority: normal severity: normal status: open title: stupid download type: resource usage _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 11:49:43 2021 From: report at bugs.python.org (=?utf-8?q?Gu=C3=A9na=C3=ABl_Muller?=) Date: Thu, 18 Mar 2021 15:49:43 +0000 Subject: [New-bugs-announce] [issue43544] mimetype default list make a wrong guess for illustrator file Message-ID: <1616082583.58.0.664742834021.issue43544@roundup.psfhosted.org> New submission from Guénaël Muller : The mimetypes lib considers Illustrator files ('.ai') to be of type 'application/postscript'. This is correct... but also wrong. Old Illustrator files (Illustrator 9) are real PostScript files, but modern ones are technically PDF. So guessing .ai as PostScript leads to wrong guesses if you're using software that makes decisions based on the mimetype. You can, of course, check both the file extension and the mimetype, but that removes the usefulness of the mimetype.

---------- messages: 389023 nosy: Inkhey priority: normal severity: normal status: open title: mimetype default list make a wrong guess for illustrator file versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 12:03:58 2021 From: report at bugs.python.org (Dan Snider) Date: Thu, 18 Mar 2021 16:03:58 +0000 Subject: [New-bugs-announce] [issue43545] Use LOAD_GLOBAL to set __module__ in class def Message-ID: <1616083438.62.0.932778825065.issue43545@roundup.psfhosted.org> New submission from Dan Snider : Other than the obvious performance implications this has, the usage of LOAD_NAME makes defining cls.__name__ from within metaclass.__prepare__ difficult.
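The LOAD_NAME in question is visible by disassembling a class body (a small illustration; the exact instruction stream varies between CPython versions):

```python
import dis

code = compile("class C:\n    pass\n", "<demo>", "exec")
# The class body is compiled into a nested code object.
class_body = next(c for c in code.co_consts
                  if hasattr(c, "co_name") and c.co_name == "C")
# The body reads the enclosing module's __name__ to set __module__.
name_loads = [ins.opname for ins in dis.get_instructions(class_body)
              if ins.argval == "__name__"]
print(name_loads)
```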
---------- messages: 389026 nosy: bup priority: normal severity: normal status: open title: Use LOAD_GLOBAL to set __module__ in class def type: behavior _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 18 14:58:26 2021 From: report at bugs.python.org (Anentropic) Date: Thu, 18 Mar 2021 18:58:26 +0000 Subject: [New-bugs-announce] [issue43546] "Impossible" KeyError from importlib._bootstrap acquire line 110 Message-ID: <1616093906.25.0.990565288145.issue43546@roundup.psfhosted.org> New submission from Anentropic : We have a Django 2.2.19 project on Python 3.9.2 on Debian (slim-buster) in Docker. A bizarre problem started happening to us this week. First I'll show the symptom; we started getting the following error:

    ...
    File "/root/.pyenv/versions/3.9.2/lib/python3.9/site-packages/django/db/migrations/autodetector.py", line 10, in
        from django.db.migrations.optimizer import MigrationOptimizer
    File "", line 1004, in _find_and_load
    File "", line 158, in __enter__
    File "", line 110, in acquire
    KeyError: 140426340123264

If I look at the source for _bootstrap.py, that error should be impossible: https://github.com/python/cpython/blob/v3.9.2/Lib/importlib/_bootstrap.py#L110

At the top of the acquire method it does:

    tid = _thread.get_ident()
    _blocking_on[tid] = self

and then on line 110, where we get the KeyError:

    del _blocking_on[tid]

Both `tid` and `_blocking_on` are local vars, and none of the other lines in the method touch them. So how do we get a KeyError? I can only think that something mutates the underlying value of `tid`, but it's supposed to be an int so that's very weird.

I started with the symptom because our context for this is complicated to explain. I did find a fix that prevents the error but I do not understand the link between cause and effect.
Our context: - we have a large unit test suite for the project which we run in Jenkins - we split the tests across several Jenkins nodes to run in parallel in isolated docker environments - we use some bash like this to split the test cases: find project/ -iname "test*.py" -print0 | \ xargs --null grep -E '(def test)|(def step_)' -l | \ split -n "r/$NODE_ID/$NODES" | \ xargs ci/bin/run-tests - ci/bin/run-tests is just a wrapper which calls Django's manage.py test command so it receives a list of filenames like "project/metrics/tests/test_client.py" as args - using "nose" test runner via django-nose FWIW We currently split tests across 3 nodes, and it was always node 2 which would fail. I found that commenting out a test case in any of the files being passed to node 2 would prevent the error from occurring. Note that in this case we are still passing *exactly the same filenames* as cli args to the test runner. Splitting the tests across 4 nodes instead of 3 also seems to prevent the error. So it seems like, in some way I don't understand, we just have too many test cases. Perhaps nose is doing something wrong or inefficient when given lots of filenames. But I'm reporting here because the error we get from importlib._bootstrap looks like it should be impossible. ---------- messages: 389034 nosy: anentropic priority: normal severity: normal status: open title: "Impossible" KeyError from importlib._bootstrap acquire line 110 type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 18 17:25:38 2021 From: report at bugs.python.org (Hans-Christoph Steiner) Date: Thu, 18 Mar 2021 21:25:38 +0000 Subject: [New-bugs-announce] [issue43547] support ZIP files with zeroed out fields (e.g.
for reproducible builds) Message-ID: <1616102738.26.0.451702371832.issue43547@roundup.psfhosted.org> New submission from Hans-Christoph Steiner : It is now standard for Java JARs and Android APKs (both ZIP files) to zero out lots of the fields in the ZIP header. For example: * each file entry has the date set to zero * the create_system is always set to zero on all platforms zipfile currently cannot create such ZIPs because of two small restrictions that it introduced: * must use a tuple of 6 values to set the date * forced create_system value based on sys.platform == 'win32' * maybe other fields? I lump these together because it might make sense to handle this with a single argument, something like zero_header=True. The use case is for working with ZIP, JAR, APK, AAR files for reproducible builds. The whole build system for F-Droid is built in Python. We need to be able to copy the JAR/APK signatures in order to reproduce signed builds using only the source code and the signature files themselves. Right now, that's not possible because building a ZIP with Python's zipfile cannot zero out the ZIP header like other tools can, including Java. ---------- components: IO, Library (Lib) messages: 389040 nosy: eighthave priority: normal severity: normal status: open title: support ZIP files with zeroed out fields (e.g. for reproducible builds) versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 18 22:44:55 2021 From: report at bugs.python.org (behind thebrain) Date: Fri, 19 Mar 2021 02:44:55 +0000 Subject: [New-bugs-announce] [issue43548] RecursionError depth exceptions break pdb's interactive tracing. Message-ID: <1616121895.83.0.21835485789.issue43548@roundup.psfhosted.org> New submission from behind thebrain : If pdb encounters most exception types, it handles them as would be expected. 
However, if pdb encounters a RecursionError: maximum recursion depth exceeded while calling a Python object, then it will continue to execute the code accurately, but the debugger itself will no longer interactively wait for user input, but instead, just speed through the rest of execution. The code below reproduces the error on python 3.7, 3.8, and 3.9.

```python3
import sys
import inspect

sys.setrecursionlimit(50)

def except_works() -> None:
    raise Exception

try:
    except_works()
except Exception as e:
    print("Exception was:", e)

def funcy(depth: int) -> None:
    print(f"Stack depth is:{len(inspect.stack())}")
    if depth == 0:
        return
    funcy(depth - 1)

try:
    funcy(60)
except Exception as e:
    print("Exception was:", e)

print("This executes without the debugger navigating to it.")
```

---------- components: Interpreter Core files: runawaystepping.py messages: 389051 nosy: behindthebrain priority: normal severity: normal status: open title: RecursionError depth exceptions break pdb's interactive tracing. versions: Python 3.8 Added file: https://bugs.python.org/file49891/runawaystepping.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 00:22:52 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Fri, 19 Mar 2021 04:22:52 +0000 Subject: [New-bugs-announce] [issue43549] Outdated descriptions for configuring valgrind. Message-ID: <1616127772.89.0.516830285286.issue43549@roundup.psfhosted.org> New submission from Xinmeng Xia : At line 12-20, cpython/Misc/README.valgrind, the descriptions are out of date. File "Objects/obmalloc.c" does not contain Py_USING_MEMORY_DEBUGGER any more since Python 3.6. The descriptions should be modified for Python 3.6-3.10 Attached line 12-20, cpython/Misc/README.valgrind: ================================================= If you don't want to read about the details of using Valgrind, there are still two things you must do to suppress the warnings.
First, you must use a suppressions file. One is supplied in Misc/valgrind-python.supp. Second, you must do one of the following: * Uncomment Py_USING_MEMORY_DEBUGGER in Objects/obmalloc.c, then rebuild Python * Uncomment the lines in Misc/valgrind-python.supp that suppress the warnings for PyObject_Free and PyObject_Realloc ================================================= ---------- assignee: docs at python components: Documentation messages: 389052 nosy: docs at python, xxm priority: normal severity: normal status: open title: Outdated descriptions for configuring valgrind. type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 01:53:05 2021 From: report at bugs.python.org (Georgios Petrou) Date: Fri, 19 Mar 2021 05:53:05 +0000 Subject: [New-bugs-announce] [issue43550] pip.exe is missing from the NuGet package Message-ID: <1616133185.95.0.11826827312.issue43550@roundup.psfhosted.org> New submission from Georgios Petrou : When downloading a package from https://www.nuget.org/packages/python the pip.exe is not included. As far as I understand, the recommended way to use pip from a script is to call it from subprocess. Would it be possible to include the exe in the package? 
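Until a pip.exe ships in the package, one workaround sketch (assuming pip is importable in the target environment) is to run pip as a module through the interpreter itself, which needs no pip.exe at all:

```python
import subprocess
import sys

# Invoke pip via "python -m pip" instead of a separate pip.exe;
# sys.executable is always available, even in the NuGet layout.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```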
---------- components: Windows messages: 389055 nosy: gipetrou, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: pip.exe is missing from the NuGet package type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 03:06:36 2021 From: report at bugs.python.org (junyixie) Date: Fri, 19 Mar 2021 07:06:36 +0000 Subject: [New-bugs-announce] [issue43551] [Subinterpreters]: PyImport_Import use static silly_list under building Python with --with-experimental-isolated-subinterpreters share silly_list in multi subinterpreters cause crash. Message-ID: <1616137596.99.0.963932745487.issue43551@roundup.psfhosted.org> New submission from junyixie : Fix PyImport_Import's use of a static silly_list: when Python is built with --with-experimental-isolated-subinterpreters, silly_list is shared across subinterpreters, which causes a crash. When subinterpreters run in parallel, PyObject_CallFunction cleans up its stack with Py_DECREF(stack[i]); the Py_DECREF of the shared silly_list is not thread safe, and that causes the crash.

```
PyObject *
PyImport_Import(PyObject *module_name)
{
    PyThreadState *tstate = _PyThreadState_GET();
    static PyObject *silly_list = NULL;
    ...
    /* Initialize constant string objects */
    if (silly_list == NULL) {
        import_str = PyUnicode_InternFromString("__import__");
        if (import_str == NULL)
            return NULL;
        builtins_str = PyUnicode_InternFromString("__builtins__");
        if (builtins_str == NULL)
            return NULL;
        silly_list = PyList_New(0);
        if (silly_list == NULL)
            return NULL;
    }
    ...
    /* Call the __import__ function with the proper argument list
       Always use absolute import here.
       Calling for side-effect of import.
    */
    r = PyObject_CallFunction(import, "OOOOi",
                              module_name, globals,
                              globals, silly_list, 0, NULL);
```

---------- messages: 389056 nosy: JunyiXie, vstinner priority: normal severity: normal status: open title: [Subinterpreters]: PyImport_Import use static silly_list under building Python with --with-experimental-isolated-subinterpreters share silly_list in multi subinterpreters cause crash. _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 05:17:13 2021 From: report at bugs.python.org (STINNER Victor) Date: Fri, 19 Mar 2021 09:17:13 +0000 Subject: [New-bugs-announce] [issue43552] Add locale.get_locale_encoding() and locale.get_current_locale_encoding() Message-ID: <1616145433.55.0.350148222552.issue43552@roundup.psfhosted.org> New submission from STINNER Victor : I propose to add two new functions: * locale.get_locale_encoding(): it's exactly the same as locale.getpreferredencoding(False). * locale.get_current_locale_encoding(): always get the current locale encoding. Read the ANSI code page on Windows, or nl_langinfo(CODESET) on other platforms. Ignore the UTF-8 Mode. Don't always return "UTF-8" on macOS, Android, VxWorks. Technically, locale.get_locale_encoding() would simply expose _locale.get_locale_encoding() that I added recently. It calls the new private _Py_GetLocaleEncoding() function (which has no argument). By the way, Python requires nl_langinfo(CODESET) to be built. It's not a new requirement of Python 3.10, but I wanted to note that, I noticed it when I implemented _locale.get_locale_encoding() :-) Python has a bad habit of lying to the user: locale.getpreferredencoding(False) is *NOT* the current locale encoding in multiple cases.
* locale.getpreferredencoding(False) always returns "UTF-8" on macOS, Android and VxWorks * locale.getpreferredencoding(False) always returns "UTF-8" if the UTF-8 Mode is enabled * otherwise, it returns the current locale encoding: ANSI code page on Windows, or nl_langinfo(CODESET) on other platforms Even if locale.getpreferredencoding(False) already exists, I propose to add locale.get_locale_encoding() because I dislike the locale.getpreferredencoding() API. By default, this function temporarily sets LC_CTYPE to the user preferred locale. It can cause mojibake in other threads since setlocale(LC_CTYPE, "") affects all threads :-( Calling locale.getpreferredencoding(), rather than locale.getpreferredencoding(False), is not what most people expect. This API can be misused. On the other side, locale.get_locale_encoding() does exactly what it says: it only *gets* the encoding, it doesn't temporarily *set* a locale to something else. By the way, the locale.localeconv() function can temporarily change the LC_CTYPE locale to the LC_MONETARY locale, which can cause other threads to use the wrong LC_CTYPE locale! But this is a different issue. ---------- components: Library (Lib) messages: 389057 nosy: vstinner priority: normal severity: normal status: open title: Add locale.get_locale_encoding() and locale.get_current_locale_encoding() versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 05:40:27 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Fri, 19 Mar 2021 09:40:27 +0000 Subject: [New-bugs-announce] [issue43553] [sqlite3] Improve test coverage Message-ID: <1616146827.28.0.306861710818.issue43553@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : Attached patch improves the code coverage of the sqlite3 module. I've used llvm-cov for coverage measurement. I'll create a PR for this, if you're fine with this, Berker/Serhiy.
Filename            Regions  Missed Regions  Cover    Functions  Missed Functions  Executed  Lines  Missed Lines  Cover
-----------------------------------------------------------------------------------------------------------------------
prepare_protocol.c  10       7               30.00%   3          2                 33.33%    16     11            31.25%
util.c              65       21              67.69%   3          0                 100.00%   78     26            66.67%
module.c            306      59              80.72%   10         1                 90.00%    236    45            80.93%
row.c               173      16              90.75%   11         0                 100.00%   146    13            91.10%
microprotocols.c    81       9               88.89%   3          0                 100.00%   98     15            84.69%
connection.c        1113     155             86.07%   43         0                 100.00%   1366   179           86.90%
cache.c             136      38              72.06%   7          1                 85.71%    227    59            74.01%
cursor.c            758      116             84.70%   19         0                 100.00%   794    122           84.63%
statement.c         340      22              93.53%   10         0                 100.00%   392    29            92.60%
-----------------------------------------------------------------------------------------------------------------------
TOTAL               2982     443             85.14%   109        4                 96.33%    3353   499           85.12%
---------- components: Library (Lib) files: improve-sqlite3-coverage.diff keywords: patch messages: 389060 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: [sqlite3] Improve test coverage type: enhancement versions: Python 3.10 Added file: https://bugs.python.org/file49892/improve-sqlite3-coverage.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 05:45:02 2021 From: report at bugs.python.org (Emil Styrke) Date: Fri, 19 Mar 2021 09:45:02 +0000 Subject: [New-bugs-announce] [issue43554] email: encoded headers lose their quoting when refolded Message-ID: <1616147102.5.0.601004900737.issue43554@roundup.psfhosted.org> New submission from Emil Styrke : When a header with an encoded (QP or Base64) display_name is refolded, it may lose (some of) its encoding. If it then contains illegal "atext" tokens, an invalid header will result.
For example, `From: =?utf-8?Q?a=2C=20123456789012345678901234567890123456?= ` will become `From: a, 123456789012345678901234567890123456 ` This contains a comma character which needs to be quoted: correct rendering would be `From: "a, 123456789012345678901234567890123456" `. Note that this example isn't even folded to multiple lines, since the decoded text is short enough to fit in one line. This can be triggered by `BytesParser(policy=policy.default).parsebytes("From: =?utf-8?Q?a=2C=20123456789012345678901234567890123456?= ").as_bytes()`, but the offending code seems to be in or below `email.policy.EmailPolicy.fold`. See attached file for examples with and without folding. ---------- components: Library (Lib) files: test_folding_bug.py messages: 389061 nosy: Emil.Styrke priority: normal severity: normal status: open title: email: encoded headers lose their quoting when refolded type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49893/test_folding_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 07:18:52 2021 From: report at bugs.python.org (Andre Roberge) Date: Fri, 19 Mar 2021 11:18:52 +0000 Subject: [New-bugs-announce] [issue43555] Location of SyntaxError with new parser missing (after continuation character) Message-ID: <1616152732.25.0.508578313833.issue43555@roundup.psfhosted.org> New submission from Andre Roberge : Normally, for SyntaxErrors, the location of the error is indicated by a ^. There is at least one case where the location is missing for 3.9 and 3.10.0a6 where it was shown before. Using the old parser for 3.9, or with previous versions of Python, the location is shown. Python 3.10.0a6 ... on win32 >>> a = 3 \ 4 File "", line 1 SyntaxError: unexpected character after line continuation character >>> Python 3.9.0 ... on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> a = 3 \ 4 File "", line 1 SyntaxError: unexpected character after line continuation character >>> Using the old parser with Python 3.9, the location of the error is shown *after* the unexpected character. > python -X oldparser Python 3.9.0 ... on win32 >>> a = 3 \ 4 File "", line 1 a = 3 \ 4 ^ SyntaxError: unexpected character after line continuation character >>> Using Python 3.8 (and 3.7, 3.6), the location is pointing at the unexpected character. Python 3.8.4 ... on win32 >>> a = 3 \ 4 File "", line 1 a = 3 \ 4 ^ SyntaxError: unexpected character after line continuation character >>> ---------- components: Interpreter Core messages: 389071 nosy: aroberge priority: normal severity: normal status: open title: Location of SyntaxError with new parser missing (after continuation character) type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 08:01:11 2021 From: report at bugs.python.org (Samwyse) Date: Fri, 19 Mar 2021 12:01:11 +0000 Subject: [New-bugs-announce] [issue43556] fix attr names for ast.expr and ast.stmt Message-ID: <1616155271.78.0.879688630382.issue43556@roundup.psfhosted.org> New submission from Samwyse : In Doc/library/ast.rst, the lineno and end_col attributes are repeated; the second set should have 'end_' prefixed to them. Also, there's a minor indentation error in the RST file. 
# diff ast.rst ast.rst~
78c78
< col_offset
---
> col_offset
83c83
< :attr:`lineno`, :attr:`col_offset`, :attr:`end_lineno`, and :attr:`end_col_offset`
---
> :attr:`lineno`, :attr:`col_offset`, :attr:`lineno`, and :attr:`col_offset`
---------- assignee: docs at python components: Documentation messages: 389077 nosy: docs at python, samwyse priority: normal severity: normal status: open title: fix attr names for ast.expr and ast.stmt type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 09:27:27 2021 From: report at bugs.python.org (STINNER Victor) Date: Fri, 19 Mar 2021 13:27:27 +0000 Subject: [New-bugs-announce] [issue43557] Deprecate getdefaultlocale(), getlocale() and normalize() functions Message-ID: <1616160447.56.0.84902377628.issue43557@roundup.psfhosted.org> New submission from STINNER Victor : I propose to deprecate the getdefaultlocale(), getlocale() and normalize() functions, since they have multiple issues, and to remove them in Python 3.12. The normalize() function uses the locale.locale_alias dictionary which was copied from the X11 locale database in 2000. It's hard to keep this dictionary up to date and to support all locales of all platforms supported by Python. There are multiple issues on macOS for example. getdefaultlocale() and getlocale() use heuristics to get an encoding from the locale name. These heuristics are not reliable. getdefaultlocale() only relies on environment variables. When setlocale() is called, environment variables are not updated, and so the encoding returned by getdefaultlocale() is not the effective LC_CTYPE locale encoding.
Example: https://bugs.python.org/issue43552#msg389069 getlocale() open issues: * bpo-20088: locale.getlocale() fails if locale name doesn't include encoding * bpo-23425: Windows getlocale unix-like with french, german, portuguese, spanish * bpo-33934: locale.getlocale() seems wrong when the locale is yet unset (python3 on linux) * bpo-38805: locale.getlocale() returns a non RFC1766 language code * bpo-43115: locale.getlocale fails if locale is set getdefaultlocale() open issue: * bpo-6981: locale.getdefaultlocale() envvars default code and documentation mismatch * bpo-30755: locale.normalize() and getdefaultlocale() convert C.UTF-8 to en_US.UTF-8 Replacements: * getdefaultlocale()[1] => getpreferredencoding(False) or get_current_locale_encoding(), see bpo-43552 * getlocale(loc) => setlocale(loc) or setlocale(loc, None) * normalize => no replacement. There is no standard way to normalize a locale name. ---------- components: Library (Lib) messages: 389086 nosy: vstinner priority: normal severity: normal status: open title: Deprecate getdefaultlocale(), getlocale() and normalize() functions versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 10:47:18 2021 From: report at bugs.python.org (Eric V. Smith) Date: Fri, 19 Mar 2021 14:47:18 +0000 Subject: [New-bugs-announce] [issue43558] The dataclasses documentation should mention how to call super().__init__ Message-ID: <1616165238.01.0.0594716110393.issue43558@roundup.psfhosted.org> New submission from Eric V. Smith : https://docs.python.org/3/library/dataclasses.html#post-init-processing should mention that if you need to call super().__init__, you should do it in __post_init__. Dataclasses cannot know what parameters to pass to the super class's __init__, so you'll need to do it yourself manually in __post_init__. 
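A sketch of what such a documentation example could look like (the Base/Child classes below are invented for illustration):

```python
from dataclasses import dataclass

class Base:
    def __init__(self, tag):
        self.tag = tag

@dataclass
class Child(Base):
    x: int = 0

    def __post_init__(self):
        # The generated __init__ does not call Base.__init__;
        # call it explicitly once the dataclass fields are set.
        super().__init__(tag="child")

c = Child(x=1)
print(c.x, c.tag)  # 1 child
```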
---------- assignee: eric.smith components: Documentation messages: 389097 nosy: eric.smith priority: low severity: normal stage: needs patch status: open title: The dataclasses documentation should mention how to call super().__init__ versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 14:19:33 2021 From: report at bugs.python.org (=?utf-8?q?Canberk_S=C3=B6nmez?=) Date: Fri, 19 Mar 2021 18:19:33 +0000 Subject: [New-bugs-announce] [issue43559] ctypes: Heap Pointer is damaged between C and Python Message-ID: <1616177973.2.0.159103598259.issue43559@roundup.psfhosted.org> New submission from Canberk Sönmez : Please see the SO post: https://stackoverflow.com/questions/66713071/ctypes-heap-pointer-is-damaged-between-c-and-python-linux-x86-64 In summary, when I return a pointer to a heap-allocated memory location from a C function, its most significant 32 bits are chopped off for some reason. I observed this behavior in Python 3.7 and Python 3.8, on Ubuntu 18.04 and Centos 7 (x86_64). ---------- messages: 389107 nosy: canberk.sonmez.409 priority: normal severity: normal status: open title: ctypes: Heap Pointer is damaged between C and Python _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 14:39:18 2021 From: report at bugs.python.org (Larry Trammell) Date: Fri, 19 Mar 2021 18:39:18 +0000 Subject: [New-bugs-announce] [issue43560] Modify SAX/expat parsing to avoid fragmentation of already-tiny content chunks Message-ID: <1616179158.85.0.552129981658.issue43560@roundup.psfhosted.org> New submission from Larry Trammell : Issue 43483 was posted as a "bug" but retracted. Though the problem is real, it is tricky to declare an UNSPECIFIED behavior to be a bug. See that issue page for more discussion and a test case. A brief overview is repeated here.
SCENARIO - XML PARSING LOSES DATA (or not) The parsing attempts to capture text consisting of very tiny quoted strings. A typical content line reads something like this:

    <p>Colchuck</p>

The parser implements a scheme presented at various tutorial Web sites, using two member functions. # Note the name attribute of the current tag group def element_handler(self, tagname, attrs) : self.CurrentTag = tagname # Record the content from each "p" tag when encountered def characters(self, content): if self.CurrentTag == "p": self.name = content ... > print(parser.name) "Colchuck" But then, after successfully extracting content from perhaps hundreds of thousands of XML tag sets in this way, the parsing suddenly "drops" a few characters of content. > print(parser.name) "lchuck" While this problem was observed with a SAX parser, it can affect expat parsers as well. It affects 32-bit and 64-bit implementations the same, over several major releases of the Python 3 system. SPECIFIED BEHAVIOR (or not) The "xml.sax.handler" page in the Python 3.9.2 Documentation for the Python Standard Library (and many prior versions) states: ----------- ContentHandler.characters(content) -- The Parser will call this method to report each chunk of character data. SAX parsers may return all contiguous character data in a single chunk, or they may split it into several chunks... ----------- If it happens that the content is delivered in two chunks instead of one, the characters() method shown above overwrites the first part of the text with the second part, and some content seems lost. This completely explains the observed behavior. EXPECTED BEHAVIOR (or not) Even though the behavior is unspecified, users can have certain expectations about what a reasonable parser should do. Among these: -- EFFICIENCY: the parser should do simple things simply, and complicated things as simply as possible -- CONSISTENCY: the parser behavior should be repeatable and dependable The design can be considered "poor" if thorough testing cannot identify what the actual behaviors are going to be, because those behaviors are rare and unpredictable. 
The obvious "simple thing," from the user perspective, is that the parser should return each tiny text string as one tiny text chunk. In fact, this is precisely what it does... 99.999% of the time. But then, suddenly, it doesn't. One hypothesis is that when the parsing scan of raw input text reaches the end of a large internal text buffer, it is easier from the implementer's perspective to flush any text remaining in the old buffer prior to fetching a new one, even if that produces a fragmented chunk with only a couple of characters. IMPROVEMENTS REQUIRED Review the code to determine whether the text buffer scenario is in fact the primary cause of inconsistent behavior. Modify the data handling to defer delivery of content fragments that are small, carrying over a small amount of previously scanned text so that small contiguous text chunks are recombined rather than reported as multiple fragments. If the length of the content text to carry over is greater than some configurable xml.sax.handler.ContiguousChunkLength, the parser can go ahead and deliver it as a fragment. DOCUMENTING THE IMPROVEMENTS Strictly speaking: none required. Undefined behaviors are undefined, whether consistent or otherwise. But after the improvements are implemented, it would be helpful to modify documentation to expose the new performance guarantees, making users more aware of the possible hazards. For example, a new description in the "xml.sax.handler" page might read as follows: ----------- ContentHandler.characters(content) -- The Parser will call this method to report chunks of character data. In general, character data may be reported as a single chunk or as sequence of chunks; but character data sequences with fewer than xml.sax.handler.ContiguousChunkLength characters, when uninterrupted any other xml.sax.handler.ContentHandler event, are guaranteed to be delivered as a single chunk... 
----------- ---------- components: XML messages: 389108 nosy: ridgerat1611 priority: normal severity: normal status: open title: Modify SAX/expat parsing to avoid fragmentation of already-tiny content chunks type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 14:52:35 2021 From: report at bugs.python.org (Larry Trammell) Date: Fri, 19 Mar 2021 18:52:35 +0000 Subject: [New-bugs-announce] [issue43561] Modify XML parsing library descriptions to forewarn of content loss hazard Message-ID: <1616179955.42.0.129876890529.issue43561@roundup.psfhosted.org> New submission from Larry Trammell : With reference to improvement issue 43560 : If those improvements remain unimplemented, or are demoted to "don't fix", users are left in the tricky situation where XML parsing applications can fail, apparently "losing content" in a rare and unpredictable manner. It would be useful to patch the documentation to give users fair warning of this hazard. For example: the "xml.sax.handler" page in the Python 3.9.2 Documentation for the Python Standard Library (and many prior versions) currently states: ----------- ContentHandler.characters(content) -- The Parser will call this method to report each chunk of character data. SAX parsers may return all contiguous character data in a single chunk, or they may split it into several chunks... ----------- The modified documentation would read something like the following: ----------- ContentHandler.characters(content) -- The Parser will call this method to report each chunk of character data. SAX parsers may return all contiguous character data in a single chunk, or they may split it into several chunks... To avoid a situation in which one small content fragment unexpectedly overwrites another one, it is essential for the characters() method to collect content by appending, rather than by assignment. 
----------- To give a concrete example, suppose that a Python programming site recommends the following coding to preserve a small text chunk bracketed by "<p>" tags:

    # Note the name attribute of the current tag group
    def element_handler(self, tagname, attrs):
        self.CurrentTag = tagname

    # Record the content from each "p" tag when encountered
    def characters(self, content):
        if self.CurrentTag == "p":
            self.name = content

Even though that coding could be expected to work most of the time, it is exposed to the hazard that an unanticipated sequence of calls to the characters() function would overwrite data. Instead, the coding should look something like this.

    # Note the name attribute of the current tag group
    def element_handler(self, tagname, attrs):
        self.CurrentTag = tagname
        self.name = ""

    # Accumulate the content from each "p" tag when encountered
    def characters(self, content):
        if self.CurrentTag == "p":
            self.name += content

---------- assignee: docs at python components: Documentation messages: 389111 nosy: docs at python, ridgerat1611 priority: normal severity: normal status: open title: Modify XML parsing library descriptions to forewarn of content loss hazard versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 16:57:27 2021 From: report at bugs.python.org (Carl Meyer) Date: Fri, 19 Mar 2021 20:57:27 +0000 Subject: [New-bugs-announce] [issue43562] test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is unreachable Message-ID: <1616187447.7.0.603298479381.issue43562@roundup.psfhosted.org> New submission from Carl Meyer : In general it seems the CPython test suite takes care to not fail if the network is unreachable, but `test_timeout_connect_ex` fails because the result code of the connection is checked without any exception being raised that would reach `support.transient_internet`.
---------- components: Tests messages: 389113 nosy: carljm priority: normal severity: normal status: open title: test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is unreachable type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 16:57:57 2021 From: report at bugs.python.org (Vladimir Matveev) Date: Fri, 19 Mar 2021 20:57:57 +0000 Subject: [New-bugs-announce] [issue43563] Use dedicated opcodes to speed up calls/attribute lookups with super() as receiver Message-ID: <1616187477.95.0.162642980766.issue43563@roundup.psfhosted.org> New submission from Vladimir Matveev : Calling methods and looking up attributes when the receiver is `super()` has extra cost compared to a regular attribute lookup. It mainly comes from the need to allocate and initialize the `super` instance, which for the zero-argument case also includes peeking into the frame/code object for the `__class__` cell and the first argument. In addition, because `PySuper_Type` has a custom implementation of tp_getattro, `_PyObject_GetMethod` would always return a bound method.

```
import timeit

setup = """
class A:
    def f(self):
        pass

class B(A):
    def f(self):
        super().f()

    def g(self):
        A.f(self)

b = B()
"""

print(timeit.timeit("b.f()", setup=setup, number=20000000))
print(timeit.timeit("b.g()", setup=setup, number=20000000))

7.329449548968114
3.892987059080042
```

One option to improve it could be to make the compiler/interpreter aware of super calls so they can be treated specially. The attached patch introduces two new opcodes, LOAD_METHOD_SUPER and LOAD_ATTR_SUPER, that are intended to be counterparts for LOAD_METHOD and LOAD_ATTR for cases when the receiver is super with either zero or two arguments.
The immediate argument for both LOAD_METHOD_SUPER and LOAD_ATTR_SUPER is a pair that consists of:

0: index of the method/attribute in co_names
1: Py_True if super was originally called with 0 arguments and Py_False otherwise.

Both LOAD_METHOD_SUPER and LOAD_ATTR_SUPER expect 3 elements on the stack:

TOS3: global_super
TOS2: type
TOS1: self/cls

The result of LOAD_METHOD_SUPER is the same as LOAD_METHOD. The result of LOAD_ATTR_SUPER is the same as LOAD_ATTR. At runtime, both LOAD_METHOD_SUPER and LOAD_ATTR_SUPER will check whether `global_super` is `PySuper_Type`, to handle situations when `super` is patched. If `global_super` is `PySuper_Type` then a dedicated routine can perform the lookup for the provided `__class__` and `cls/self` without allocating a new `super` instance. If `global_super` is different from `PySuper_Type` then the runtime will fall back to the original logic, using `global_super` and the original number of arguments that was captured in the immediate. Benchmark results with the patch:

4.381768501014449
3.9492998640052974

---------- components: Interpreter Core messages: 389114 nosy: v2m priority: normal severity: normal status: open title: Use dedicated opcodes to speed up calls/attribute lookups with super() as receiver versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 17:21:09 2021 From: report at bugs.python.org (Carl Meyer) Date: Fri, 19 Mar 2021 21:21:09 +0000 Subject: [New-bugs-announce] [issue43564] some tests in test_urllib2net fail instead of skipping on unreachable network Message-ID: <1616188869.43.0.436044106244.issue43564@roundup.psfhosted.org> New submission from Carl Meyer : In general it seems the CPython test suite takes care to skip instead of failing networked tests when the network is unavailable (c.f. `support.transient_internet` test helper).
In the case of the five FTP tests in `test_urllib2net` (that is, `test_ftp`, `test_ftp_basic`, `test_ftp_default_timeout`, `test_ftp_no_timeout`, and `test_ftp_timeout`), even though they use `support.transient_internet`, they still fail if the network is unavailable. The reason is that they make calls which end up raising an exception of the form `URLError("ftp error: OSError(101, 'Network is unreachable')")` -- the original OSError is flattened into the exception string message, but is otherwise not in the exception args. This means that `transient_internet` does not detect it as a suppressable exception. It seems like many uses of `URLError` in urllib pass the original `OSError` directly to `URLError.__init__()`, which means it ends up in `args` and the unwrapping code in `transient_internet` is able to find the original `OSError`. But the ftp code instead directly interpolates the `OSError` into a new message string. ---------- components: Tests messages: 389115 nosy: carljm priority: normal severity: normal status: open title: some tests in test_urllib2net fail instead of skipping on unreachable network type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 19 21:10:53 2021 From: report at bugs.python.org (Max Bachmann) Date: Sat, 20 Mar 2021 01:10:53 +0000 Subject: [New-bugs-announce] [issue43565] PyUnicode_KIND macro does not has specified return type Message-ID: <1616202653.73.0.639291876959.issue43565@roundup.psfhosted.org> New submission from Max Bachmann : The documentation states that the PyUnicode_KIND macro has the following interface:

- int PyUnicode_KIND(PyObject *o)

However it actually returns a value of the underlying type of the PyUnicode_Kind enum. This could be e.g. an unsigned int as well.
---------- components: C API messages: 389133 nosy: maxbachmann priority: normal severity: normal status: open title: PyUnicode_KIND macro does not has specified return type type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 02:19:44 2021 From: report at bugs.python.org (Chris Wilson) Date: Sat, 20 Mar 2021 06:19:44 +0000 Subject: [New-bugs-announce] [issue43566] Docs say int('010', 0) is not legal, but it is Message-ID: <1616221184.91.0.644321599481.issue43566@roundup.psfhosted.org> New submission from Chris Wilson : The documentation for the int() builtin says: "Base 0 means to interpret exactly as a code literal, so that the actual base is 2, 8, 10, or 16, and so that int('010', 0) is not legal, while int('010') is, as well as int('010', 8)." https://docs.python.org/3/library/functions.html#int However 010 is a valid code literal, and int('010', 0) is legal (both are correctly interpreted as octal). ---------- assignee: docs at python components: Documentation messages: 389145 nosy: docs at python, wilscm priority: normal severity: normal status: open title: Docs say int('010', 0) is not legal, but it is versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 07:02:18 2021 From: report at bugs.python.org (Jiaxin Peng) Date: Sat, 20 Mar 2021 11:02:18 +0000 Subject: [New-bugs-announce] [issue43567] regen.vcxproj cannot regenerate some necessary files Message-ID: <1616238138.44.0.687655765468.issue43567@roundup.psfhosted.org> New submission from Jiaxin Peng : I tried to modify Grammar/python.gram, Grammar/Tokens, and Parser/Python.asdl to add a new token to the grammar. And when using `build.bat --regen`, only parser.c is newly generated. Other files that should be updated, as mentioned in https://devguide.python.org/grammar/ (ast.c, Python-ast.h, etc.), are not.
regen.vcxproj is not capable of handling the new PEG parser, so an update is needed. ---------- components: Build messages: 389155 nosy: pjx206 priority: normal severity: normal status: open title: regen.vcxproj cannot regenerate some necessary files type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 09:08:22 2021 From: report at bugs.python.org (Joshua Root) Date: Sat, 20 Mar 2021 13:08:22 +0000 Subject: [New-bugs-announce] [issue43568] Drop support for Mac OS X < 10.3 module linking Message-ID: <1616245702.65.0.278864424752.issue43568@roundup.psfhosted.org> New submission from Joshua Root : The `-undefined dynamic_lookup` option can only be used in LDSHARED on Mac OS X 10.3 and later. There is a fallback to explicitly linking with the framework for 10.2 and earlier. I'm pretty sure that currently supported Python versions don't build on 10.2 or older for several other reasons (I happen to know that even building on 10.5 requires a little patching), so it's probably reasonable to just drop this code path. There is a closely related check in distutils, though you would only know it's related if you looked through the history as I did. It errors out if you try to build a module for an older MACOSX_DEPLOYMENT_TARGET than Python was configured with. The purpose of that is to prevent using the wrong LDSHARED flags for the target platform. If 10.2 support is dropped, that check can be removed entirely. I am aware that distutils is deprecated, going away, etc., and I am submitting a PR to setuptools as well. But setuptools does not yet override the stdlib distutils with its own by default, so bugs in the stdlib copy are still relevant. If it's decided to keep 10.2 support, the check in distutils should still be relaxed to error only if the current MDT is < 10.3 and the configured MDT is >= 10.3. I can easily put together a PR for that if needed.
Either way, the approach taken in setuptools will depend on how LDSHARED is handled here. ---------- components: Build, Distutils, macOS messages: 389157 nosy: dstufft, eric.araujo, jmr, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Drop support for Mac OS X < 10.3 module linking type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 10:58:06 2021 From: report at bugs.python.org (STINNER Victor) Date: Sat, 20 Mar 2021 14:58:06 +0000 Subject: [New-bugs-announce] [issue43569] test_importlib failed on installed Python Message-ID: <1616252286.95.0.663253122373.issue43569@roundup.psfhosted.org> New submission from STINNER Victor : Example on aarch64 Fedora Stable Clang Installed 3.x: https://buildbot.python.org/all/#/builders/14/builds/804 ====================================================================== ERROR: test_open_binary (test.test_importlib.test_open.OpenDiskNamespaceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_open.py", line 67, in setUp from . 
import namespacedata01 ImportError: cannot import name 'namespacedata01' from 'test.test_importlib' (/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/__init__.py) ====================================================================== ERROR: test_open_binary_FileNotFoundError (test.test_importlib.test_open.OpenDiskNamespaceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_open.py", line 67, in setUp from . import namespacedata01 ImportError: cannot import name 'namespacedata01' from 'test.test_importlib' (/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/__init__.py) ====================================================================== ERROR: test_open_text_FileNotFoundError (test.test_importlib.test_open.OpenDiskNamespaceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_open.py", line 67, in setUp from . 
import namespacedata01 ImportError: cannot import name 'namespacedata01' from 'test.test_importlib' (/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/__init__.py) ====================================================================== ERROR: test_open_text_default_encoding (test.test_importlib.test_open.OpenDiskNamespaceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_open.py", line 67, in setUp from . import namespacedata01 ImportError: cannot import name 'namespacedata01' from 'test.test_importlib' (/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/__init__.py) ====================================================================== ERROR: test_open_text_given_encoding (test.test_importlib.test_open.OpenDiskNamespaceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_open.py", line 67, in setUp from . 
import namespacedata01 ImportError: cannot import name 'namespacedata01' from 'test.test_importlib' (/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/__init__.py) ====================================================================== ERROR: test_open_text_with_errors (test.test_importlib.test_open.OpenDiskNamespaceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_open.py", line 67, in setUp from . import namespacedata01 ImportError: cannot import name 'namespacedata01' from 'test.test_importlib' (/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/__init__.py) ====================================================================== ERROR: test_is_dir (test.test_importlib.test_reader.MultiplexedPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 48, in test_is_dir self.assertEqual(MultiplexedPath(self.folder).is_dir(), True) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/readers.py", line 63, in __init__ raise NotADirectoryError('MultiplexedPath only supports directories') NotADirectoryError: MultiplexedPath only supports directories ====================================================================== ERROR: test_is_file (test.test_importlib.test_reader.MultiplexedPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 51, in test_is_file self.assertEqual(MultiplexedPath(self.folder).is_file(), False) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/readers.py", line 63, in __init__ raise NotADirectoryError('MultiplexedPath only supports directories') NotADirectoryError: MultiplexedPath only supports directories ====================================================================== ERROR: test_iterdir (test.test_importlib.test_reader.MultiplexedPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 25, in test_iterdir contents = {path.name for path in MultiplexedPath(self.folder).iterdir()} File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/readers.py", line 63, in __init__ raise NotADirectoryError('MultiplexedPath only supports directories') NotADirectoryError: MultiplexedPath only supports directories ====================================================================== ERROR: test_iterdir_duplicate (test.test_importlib.test_reader.MultiplexedPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 35, in test_iterdir_duplicate path.name for path in MultiplexedPath(self.folder, data01).iterdir() File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/readers.py", line 63, in __init__ raise 
NotADirectoryError('MultiplexedPath only supports directories') NotADirectoryError: MultiplexedPath only supports directories ====================================================================== ERROR: test_join_path (test.test_importlib.test_reader.MultiplexedPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 66, in test_join_path path = MultiplexedPath(self.folder, data01) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/readers.py", line 63, in __init__ raise NotADirectoryError('MultiplexedPath only supports directories') NotADirectoryError: MultiplexedPath only supports directories ====================================================================== ERROR: test_open_file (test.test_importlib.test_reader.MultiplexedPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 54, in test_open_file path = MultiplexedPath(self.folder) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/readers.py", line 63, in __init__ raise NotADirectoryError('MultiplexedPath only supports directories') NotADirectoryError: MultiplexedPath only supports directories ====================================================================== ERROR: test_repr (test.test_importlib.test_reader.MultiplexedPathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 82, in test_repr repr(MultiplexedPath(self.folder)), File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/readers.py", line 63, in __init__ raise NotADirectoryError('MultiplexedPath only supports directories') NotADirectoryError: MultiplexedPath only supports directories ====================================================================== ERROR: test_files (test.test_importlib.test_reader.NamespaceReaderTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 115, in test_files namespacedata01 = import_module('namespacedata01') File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1049, in _gcd_import File "", line 1026, in _find_and_load File "", line 1003, in _find_and_load_unlocked ModuleNotFoundError: No module named 'namespacedata01' ====================================================================== ERROR: test_resource_path (test.test_importlib.test_reader.NamespaceReaderTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_reader.py", line 103, in test_resource_path namespacedata01 = import_module('namespacedata01') File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/__init__.py", line 126, in 
import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1049, in _gcd_import File "", line 1026, in _find_and_load File "", line 1003, in _find_and_load_unlocked ModuleNotFoundError: No module named 'namespacedata01' ====================================================================== ERROR: test_is_submodule_resource (test.test_importlib.test_resource.ResourceFromNamespaceTest01) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_resource.py", line 230, in test_is_submodule_resource resources.is_resource(import_module('namespacedata01'), 'binary.file') File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1049, in _gcd_import File "", line 1026, in _find_and_load File "", line 1003, in _find_and_load_unlocked ModuleNotFoundError: No module named 'namespacedata01' ====================================================================== ERROR: test_read_submodule_resource_by_name (test.test_importlib.test_resource.ResourceFromNamespaceTest01) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_resource.py", line 234, in test_read_submodule_resource_by_name self.assertTrue(resources.is_resource('namespacedata01', 'binary.file')) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/resources.py", line 152, in is_resource package = _common.get_package(package) File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/_common.py", line 65, in get_package resolved = resolve(package) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/_common.py", line 56, in resolve return cand if isinstance(cand, types.ModuleType) else importlib.import_module(cand) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1049, in _gcd_import File "", line 1026, in _find_and_load File "", line 1003, in _find_and_load_unlocked ModuleNotFoundError: No module named 'namespacedata01' ====================================================================== ERROR: test_submodule_contents (test.test_importlib.test_resource.ResourceFromNamespaceTest01) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_resource.py", line 237, in test_submodule_contents contents = set(resources.contents(import_module('namespacedata01'))) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1049, in _gcd_import File "", line 1026, in _find_and_load File "", line 1003, in _find_and_load_unlocked ModuleNotFoundError: No module named 'namespacedata01' ====================================================================== ERROR: test_submodule_contents_by_name (test.test_importlib.test_resource.ResourceFromNamespaceTest01) ---------------------------------------------------------------------- test 
test_importlib failed Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/test/test_importlib/test_resource.py", line 245, in test_submodule_contents_by_name contents = set(resources.contents('namespacedata01')) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/resources.py", line 170, in contents package = _common.get_package(package) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/_common.py", line 65, in get_package resolved = resolve(package) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/_common.py", line 56, in resolve return cand if isinstance(cand, types.ModuleType) else importlib.import_module(cand) File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-aarch64.clang-installed/build/target/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1049, in _gcd_import File "", line 1026, in _find_and_load File "", line 1003, in _find_and_load_unlocked ModuleNotFoundError: No module named 'namespacedata01' ---------- components: Tests messages: 389160 nosy: vstinner priority: normal severity: normal status: open title: test_importlib failed on installed Python versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 12:26:35 2021 From: report at bugs.python.org (Julien Palard) Date: Sat, 20 Mar 2021 16:26:35 +0000 Subject: [New-bugs-announce] [issue43570] pyspecific.py > AuditEvent mess with translations Message-ID: <1616257595.38.0.162769956248.issue43570@roundup.psfhosted.org> New submission from Julien Palard : In case an `.. 
audit-event::` has content, Sphinx gets confused: it will provide both the auto-generated text and the content in the po files. For interactivehook, for example, we have:

#: library/sys.rst:953
msgid ""
"Raises an :ref:`auditing event ` ``cpython.run_interactivehook`` "
"with argument ``hook``."
msgstr ""

#: library/sys.rst:955
msgid ""
"Raises an :ref:`auditing event ` ``cpython.run_interactivehook`` "
"with the hook object as the argument when the hook is called on startup."
msgstr ""
"Lève un :ref:`événement d'audit ` ``cpython.run_interactivehook`` "
"avec l'objet de point d'entrée comme argument lorsqu'il est appelé au "
"démarrage."

Which is not needed, as only the content is used to render the doc, but that's the least of it. The issue is that Sphinx will then check the used one (the content) against the translation of the auto-generated one, leading it to trigger a warning in case the :ref: used doesn't match, typically for:

.. audit-event:: sys.unraisablehook hook,unraisable sys.unraisablehook

   Raise an auditing event ``sys.unraisablehook`` with arguments ``hook``, ``unraisable`` when an exception that cannot be handled occurs. The ``unraisable`` object is the same as what will be passed to the hook. If no hook has been set, ``hook`` may be ``None``.

Sphinx will compare the auto-generated one:

Raises an :ref:`auditing event ` ``sys.unraisablehook`` with arguments ``hook``, ``unraisable``.

Against our translated one (Lève un événement d'audit ...). The issue is: as "Raise an auditing event" contains no :ref:, but we translated "Raises an :ref:`auditing event `" using one, Sphinx whines about inconsistent term references. As far as I understand it, it's related to, or near, the:

    if self.content:
        self.state.nested_parse(self.content, self.content_offset, pnode)
    else:
        n, m = self.state.inline_text(text, self.lineno)
        pnode.extend(n + m)

part of pyspecific.py.
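The inconsistency Sphinx reports can be mimicked with a toy version of its check (a hypothetical reconstruction; Sphinx's real implementation inspects inline nodes, not raw strings, and the French msgstr below is invented for the illustration): count the `:ref:` roles in the rendered text and in the translation, and flag any mismatch.

```python
import re

def ref_count(s):
    # Count :ref:`...` occurrences, a stand-in for Sphinx's inspection
    # of inline references when validating translated messages.
    return len(re.findall(r":ref:`[^`]*`", s))

# The text actually rendered (the directive's hand-written content) ...
content = ("Raise an auditing event ``sys.unraisablehook`` with arguments "
           "``hook``, ``unraisable`` when an exception that cannot be "
           "handled occurs.")
# ... versus a translation of the auto-generated message, which kept the
# :ref: role from "Raises an :ref:`auditing event <auditing>` ...".
msgstr = ("Lève un :ref:`événement d'audit <auditing>` "
          "``sys.unraisablehook`` avec les arguments ``hook``, "
          "``unraisable``.")

print(ref_count(content), ref_count(msgstr))  # mismatch -> Sphinx warning
```

Zero references on one side and one on the other is exactly the "inconsistent term references" condition described above.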
---------- assignee: docs at python components: Documentation messages: 389169 nosy: docs at python, mdk, steve.dower priority: normal severity: normal status: open title: pyspecific.py > AuditEvent mess with translations versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 12:53:08 2021 From: report at bugs.python.org (Rui Cunha) Date: Sat, 20 Mar 2021 16:53:08 +0000 Subject: [New-bugs-announce] [issue43571] Add option to create MPTCP sockets Message-ID: <1616259188.08.0.788716564189.issue43571@roundup.psfhosted.org> Change by Rui Cunha : ---------- components: Extension Modules nosy: RuiCunhaM, ncoghlan, petr.viktorin priority: normal severity: normal status: open title: Add option to create MPTCP sockets type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 17:51:02 2021 From: report at bugs.python.org (Antoine Pitrou) Date: Sat, 20 Mar 2021 21:51:02 +0000 Subject: [New-bugs-announce] [issue43572] "Too many open files" on macOS buildbot Message-ID: <1616277062.42.0.179950736754.issue43572@roundup.psfhosted.org> New submission from Antoine Pitrou : See https://buildbot.python.org/all/#/builders/366/builds/960/steps/5/logs/stdio ---------- messages: 389184 nosy: mattbillenstein, pablogsal, pitrou, zach.ware priority: normal severity: normal status: open title: "Too many open files" on macOS buildbot _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 21:13:24 2021 From: report at bugs.python.org (Brett Cannon) Date: Sun, 21 Mar 2021 01:13:24 +0000 Subject: [New-bugs-announce] [issue43573] [types] Document __spec__ for types.ModuleType Message-ID: <1616289204.63.0.465511334804.issue43573@roundup.psfhosted.org> New submission from Brett Cannon : 
https://docs.python.org/3/library/types.html#types.ModuleType does not document __spec__. ---------- assignee: docs at python components: Documentation messages: 389204 nosy: brett.cannon, docs at python priority: normal severity: normal status: open title: [types] Document __spec__ for types.ModuleType versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 22:31:54 2021 From: report at bugs.python.org (Chad Netzer) Date: Sun, 21 Mar 2021 02:31:54 +0000 Subject: [New-bugs-announce] [issue43574] Regression in overallocation for literal list initialization in v3.9+ Message-ID: <1616293914.75.0.599644741052.issue43574@roundup.psfhosted.org> New submission from Chad Netzer : In Python v3.9+ there was a regression in the amount of memory used for list literals, due to switching to list_extend() to allocate memory for the new list to accommodate the literal elements. Example, in Python v3.8.x (and before):

```
$ python38
Python 3.8.5 (default, Sep 4 2020, 02:22:02)
>>> [1].__sizeof__()
48
>>> [1,2].__sizeof__()
56
>>> [1,2,3].__sizeof__()
64
```

whereas for v3.9 (and later):

```
$ python39
Python 3.9.2 (default, Feb 19 2021, 17:09:53)
>>> [1].__sizeof__()
48
>>> [1,2].__sizeof__()
56
>>> [1,2,3].__sizeof__()
104  # a 60% increase in memory allocated
```

However, this seems like an unintended regression, and is a side-effect of the new way of building the lists from literals, using the list_extend() function (via list_resize(), which overallocates).
In particular, a consequence is that making a copy of the list that's initialized from a literal can end up using less memory:

```
$ python39
Python 3.9.2 (default, Feb 19 2021, 17:09:53)
>>> a = [1,2,3]
>>> b = list(a)  # Same behavior if list.copy() or slice copy is performed
>>> a.__sizeof__()
104
>>> b.__sizeof__()
64
```

Prior to v3.9, the byte-code for making a list from a literal had the "BUILD_LIST" opcode with an explicit length argument, allowing allocation of the exact amount of memory needed for the literal. As of v3.9, the LIST_EXTEND opcode is used instead. I believe the simplest way of restoring the old behavior is to change list_extend() to not overallocate when the list being extended currently has 0 elements. I.e. a minimal-change patch to restore the previous behavior (though with a side-effect of removing the overallocation of a list that is initialized empty, and then immediately extended):

diff --git a/Objects/listobject.c b/Objects/listobject.c
index e7987a6d35..7820e033af 100644
--- a/Objects/listobject.c
+++ b/Objects/listobject.c
@@ -75,8 +75,9 @@ list_resize(PyListObject *self, Py_ssize_t newsize)
     if (newsize - Py_SIZE(self) > (Py_ssize_t)(new_allocated - newsize))
         new_allocated = ((size_t)newsize + 3) & ~(size_t)3;
-    if (newsize == 0)
-        new_allocated = 0;
+    /* Don't overallocate for lists that start empty or are set to empty.
+     */
+    if (newsize == 0 || Py_SIZE(self) == 0)
+        new_allocated = newsize;
     num_allocated_bytes = new_allocated * sizeof(PyObject *);
     items = (PyObject **)PyMem_Realloc(self->ob_item, num_allocated_bytes);
     if (items == NULL) {

Relevant/related bugs/PRs:

# Switched to initializing list literals w/ LIST_EXTEND
https://bugs.python.org/issue39320
https://github.com/python/cpython/pull/17984

# Commit where over-allocation of list literals first appeared
https://bugs.python.org/issue38328
https://github.com/python/cpython/pull/17114
https://github.com/python/cpython/commit/6dd9b64770af8905bef293c81d541eaaf8d8df52

https://bugs.python.org/issue38373
https://github.com/python/cpython/pull/18952
https://github.com/python/cpython/commit/2fe815edd6778fb9deef8f8044848647659c2eb8

---------- components: Interpreter Core messages: 389207 nosy: Chad.Netzer priority: normal severity: normal status: open title: Regression in overallocation for literal list initialization in v3.9+ type: resource usage versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 20 23:25:03 2021 From: report at bugs.python.org (Dong-hee Na) Date: Sun, 21 Mar 2021 03:25:03 +0000 Subject: [New-bugs-announce] [issue43575] map() instantiation time reducing by using PEP 590 vectorcall Message-ID: <1616297103.49.0.396627193734.issue43575@roundup.psfhosted.org> New submission from Dong-hee Na :

+-----------+------------------+----------------------+
| Benchmark | map_bench_master | map_bench_vectorcall |
+===========+==================+======================+
| bench map | 151 ns           | 116 ns: 1.30x faster |
+-----------+------------------+----------------------+

We already apply this feature for filter(). No reason not to apply map().
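For reference, instantiation cost of this kind can be measured with a quick timeit sketch. This is only in the spirit of the attached map_bench.py (which is not reproduced here); absolute numbers depend entirely on the machine and build:

```python
import timeit

# Time only the creation of the map object, not iteration over it,
# since the PEP 590 vectorcall change affects the constructing call.
t = timeit.timeit("map(int, ())", number=1_000_000)
print(f"1,000,000 map() instantiations: {t:.3f}s")
```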
---------- assignee: corona10 components: Interpreter Core files: map_bench.py messages: 389210 nosy: corona10, vstinner priority: normal severity: normal status: open title: map() instantiation time reducing by using PEP 590 vectorcall type: performance versions: Python 3.10 Added file: https://bugs.python.org/file49896/map_bench.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 00:46:52 2021 From: report at bugs.python.org (rushant) Date: Sun, 21 Mar 2021 04:46:52 +0000 Subject: [New-bugs-announce] [issue43576] python3.6.4 os.environ error when write chinese to file Message-ID: <1616302012.93.0.356074990603.issue43576@roundup.psfhosted.org> New submission from rushant <953779014 at qq.com>:

# -*- coding: utf-8 -*-
import os
job_name = os.environ['a']
print(job_name)
print(isinstance(job_name, str))
print(type(job_name))
with open('name.txt', 'w', encoding='utf-8') as fw:
    fw.write(job_name)

i have set environment param by:

export a="??"

it returns error:

??
True
<class 'str'>
Traceback (most recent call last):
  File "aa.py", line 8, in <module>
    fw.write(job_name)
UnicodeEncodeError: 'utf-8' codec can't encode characters in position 0-5: surrogates not allowed

---------- components: C API messages: 389215 nosy: rushant priority: normal severity: normal status: open title: python3.6.4 os.environ error when write chinese to file type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 01:00:42 2021 From: report at bugs.python.org (Andrew Dailey) Date: Sun, 21 Mar 2021 05:00:42 +0000 Subject: [New-bugs-announce] [issue43577] Deadlock when using SSLContext._msg_callback and SSLContext.sni_callback Message-ID: <1616302842.44.0.286720324373.issue43577@roundup.psfhosted.org> New submission from Andrew Dailey : Hello, I think I might've stumbled onto an oversight with how an SSLSocket handles overwriting its SSLContext within an sni_callback. If both "_msg_callback" and "sni_callback" are defined on an SSLContext object and the sni_callback replaces the context with a new one, the interpreter locks up indefinitely. It fails to respond to keyboard interrupts and must be forcefully killed. This seems to be a common use case of the sni_callback: create a new context with a different cert chain and attach it to the current socket (which replaces the existing one). If _msg_callback never gets defined on the original context then this deadlock never occurs. Curiously, if you assign the same _msg_callback to the new context before replacement, this also avoids the deadlock. I've attached as minimal of a reproduction as I could come up with. I think the code within will probably do a better job explaining this problem than I've done here in prose. I've only tested it on a couple Linux distros (Ubuntu Server and Void Linux) but the lock occurs 100% of the time in my experience.
In the brief time I've spent digging into the CPython source, I've come to understand that replacing the SSLContext on an SSLSocket isn't "just" a simple replacement but actually involves some OpenSSL mechanics (specifically, SSL_set_SSL_CTX) [0]. I'm wondering if maybe this context update routine isn't properly cleaning up whatever resources / references were being used by the msg_callback? Maybe this is even closer to an OpenSSL bug (or at least a gotcha)?

I also feel the need to explain why I'd even be using an undocumented property (SSLContext._msg_callback) in the first place. I'm trying to implement a program that automatically manages TLS certs on a socket via Let's Encrypt and the ACME protocol (RFC8555). Part of this process involves serving up a specific cert when a connection requests the acme-tls/1 ALPN protocol. Given the existing Python SSL API, I don't believe there is any way for me to do this "correctly". The documentation for SSLContext.sni_callback [1] mentions that the selected_alpn_protocol function should be usable within the callback but I don't think that is quite true. According to the OpenSSL docs [2]:

    Several callbacks are executed during ClientHello processing, including the ClientHello, ALPN, and servername callbacks. The ClientHello callback is executed first, then the servername callback, followed by the ALPN callback.

If there is a better way for me to identify a specific ALPN protocol _before_ the sni_callback, I could definitely use the guidance. That would avoid this deadlock altogether (even though it'd still be waiting to catch someone else...). This is my first Python issue so I hope what I've supplied makes sense. If there is anything more I can do to help or provide more info, please let me know.
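For context, the sni_callback context-swap pattern discussed throughout this report looks roughly like the following sketch. The hostname and contexts are placeholders, certificate loading is omitted, and no handshake is actually performed:

```python
import ssl

# Two server contexts; in a real deployment each would have its own
# cert chain loaded via load_cert_chain() (omitted here).
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
other_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def sni_callback(sslsocket, server_name, original_ctx):
    # Assigning to .context mid-handshake is what triggers the
    # SSL_set_SSL_CTX() mechanics mentioned above.
    if server_name == "other.example.com":  # placeholder hostname
        sslsocket.context = other_ctx
    return None  # None tells OpenSSL to continue the handshake

default_ctx.sni_callback = sni_callback
```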
[0] https://github.com/python/cpython/blob/3.9/Modules/_ssl.c#L2194
[1] https://docs.python.org/3/library/ssl.html#ssl.SSLContext.sni_callback
[2] https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_tlsext_servername_callback.html

---------- assignee: christian.heimes components: SSL files: deadlock.zip messages: 389216 nosy: christian.heimes, theandrew168 priority: normal severity: normal status: open title: Deadlock when using SSLContext._msg_callback and SSLContext.sni_callback type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49897/deadlock.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 02:53:15 2021 From: report at bugs.python.org (lincheney) Date: Sun, 21 Mar 2021 06:53:15 +0000 Subject: [New-bugs-announce] [issue43578] With asyncio subprocess, send_signal() and the child process watcher will both call waitpid() Message-ID: <1616309595.59.0.425616668728.issue43578@roundup.psfhosted.org> New submission from lincheney : Under unix, when creating an asyncio subprocess, the child process watcher will call waitpid() to reap the child, but if you call send_signal() (or terminate() or kill()) on the asyncio subprocess, this will also call waitpid(), causing exactly one of these to fail, as you cannot call waitpid() on a PID more than once. If the send_signal() fails, this doesn't seem much of an issue. If the child process watcher fails however, it sets the returncode to 255 and also returns 255 when running wait() and also emits a warning. I've seen this behaviour with the ThreadedChildWatcher, but possibly other Unix child watchers that use waitpid() suffer from the same problem.
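The single-reap constraint described above can be shown without asyncio at all; a hedged, POSIX-only sketch using plain subprocess/os calls (the child command is arbitrary):

```python
import os
import subprocess
import sys

p = subprocess.Popen([sys.executable, "-c", "pass"])
p.wait()                      # first (and only valid) reap of the child

second_reap_failed = False
try:
    os.waitpid(p.pid, 0)      # second reap: the PID has already been reaped
except ChildProcessError:
    second_reap_failed = True
print("second waitpid failed:", second_reap_failed)
```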
The behaviour is racey (it depends on which one completes the waitpid() first), but if you try it enough it will appear:

```
import asyncio
import signal

async def main():
    while True:
        proc = await asyncio.create_subprocess_exec('sleep', '0.1')
        await asyncio.sleep(0.1)
        try:
            proc.send_signal(signal.SIGUSR1)
        except ProcessLookupError:
            pass
        assert (await proc.wait() != 255)

asyncio.run(main())
```

The output looks like:

```
Unknown child process pid 1394331, will report returncode 255
Traceback (most recent call last):
  File "/tmp/bob.py", line 14, in <module>
    asyncio.run(main())
  File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/tmp/bob.py", line 12, in main
    assert (await proc.wait() != 255)
AssertionError
```

This would be expected behaviour if I were explicitly calling waitpid() myself (ie I'm shooting my own foot, so I'd deserve the bad behaviour), but that's not the case here nor any other exotic code. ---------- components: asyncio messages: 389218 nosy: asvetlov, lincheney, yselivanov priority: normal severity: normal status: open title: With asyncio subprocess, send_signal() and the child process watcher will both call waitpid() type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 03:11:11 2021 From: report at bugs.python.org (begnac) Date: Sun, 21 Mar 2021 07:11:11 +0000 Subject: [New-bugs-announce] [issue43579] Leak in asyncio.selector_events._SelectorSocketTransport Message-ID: <1616310671.0.0.615896867507.issue43579@roundup.psfhosted.org> New submission from begnac : Hello, Even after close()ing, asyncio.selector_events._SelectorSocketTransport keeps a reference to itself via self._read_ready_cb. We should probably add:

def close(self):
    super().close()
    self._read_ready_cb = None

Cheers ! Ita?.
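A minimal illustration of the pattern described above: an object that stores one of its own bound methods in an attribute forms a reference cycle, so plain refcounting cannot free it, and only the cycle collector (or clearing the attribute, as the report suggests) releases it. The class name here is made up for illustration:

```python
import gc
import weakref

class FakeTransport:
    def __init__(self):
        self._read_ready_cb = self._read_ready  # bound method -> cycle

    def _read_ready(self):
        pass

    def close(self):
        self._read_ready_cb = None  # the suggested fix breaks the cycle

t = FakeTransport()
ref = weakref.ref(t)
del t
alive_after_del = ref() is not None   # cycle keeps the object alive
gc.collect()
alive_after_gc = ref() is not None    # cycle collector reclaims it

t2 = FakeTransport()
t2.close()                            # break the cycle explicitly
ref2 = weakref.ref(t2)
del t2
alive_after_close = ref2() is not None  # freed immediately by refcounting

print(alive_after_del, alive_after_gc, alive_after_close)
```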
---------- components: asyncio messages: 389219 nosy: asvetlov, begnac, yselivanov priority: normal severity: normal status: open title: Leak in asyncio.selector_events._SelectorSocketTransport type: performance versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 05:07:50 2021 From: report at bugs.python.org (LittleGuy) Date: Sun, 21 Mar 2021 09:07:50 +0000 Subject: [New-bugs-announce] [issue43580] A Question about List Slice Message-ID: <1616317670.15.0.486423863335.issue43580@roundup.psfhosted.org> New submission from LittleGuy <674980165 at qq.com>:

# There is a question when I use list.
# If I run this following code:
x = list(range(10))
a = x[1 : -1 : -1]
print(a)
# the answer will be: []
# the right answer should be: [1, 0]
# But in some cases, it works well, like:
a = x[4 : 2 : -1]
print(a)
# the answer will be: [4, 3]
# so, there may be some problems.

---------- components: Regular Expressions messages: 389220 nosy: YangS007, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: A Question about List Slice type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 10:45:47 2021 From: report at bugs.python.org (Wen Hao) Date: Sun, 21 Mar 2021 14:45:47 +0000 Subject: [New-bugs-announce] [issue43581] array assignment error Message-ID: <1616337947.23.0.922610060887.issue43581@roundup.psfhosted.org> New submission from Wen Hao :

>>> mat = [[0]*4]*4
>>> mat
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
>>> mat[1][2]=1
>>> mat
[[0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]]

---------- components: Build messages: 389230 nosy: haowenqi.zz priority: normal severity: normal status: open title: array assignment error type: behavior versions: Python 3.8 _______________________________________
Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 10:53:57 2021 From: report at bugs.python.org (Andrew Dailey) Date: Sun, 21 Mar 2021 14:53:57 +0000 Subject: [New-bugs-announce] [issue43582] SSLContext.sni_callback docs inaccurately describe available handshake info Message-ID: <1616338437.57.0.677049840589.issue43582@roundup.psfhosted.org> New submission from Andrew Dailey : Hello, The documentation for SSLContext.sni_callback [0] seems to incorrectly describe the information available at that stage of the TLS handshake. According to the docs:

    Due to the early negotiation phase of the TLS connection, only limited methods and attributes are usable like SSLSocket.selected_alpn_protocol() and SSLSocket.context. SSLSocket.getpeercert(), SSLSocket.getpeercert(), SSLSocket.cipher() and SSLSocket.compress() methods require that the TLS connection has progressed beyond the TLS Client Hello and therefore will not contain return meaningful values nor can they be called safely.

This paragraph claims that SSLSocket.selected_alpn_protocol() should be usable within sni_callback but I think this is inaccurate. Based on the OpenSSL docs [1] and my own testing, the servername callback occurs after ClientHello but _before_ the ALPN callback. This prevents accurate ALPN information from being available until later. I believe that any call to SSLSocket.selected_alpn_protocol() within an SSLContext.sni_callback will simply return None. Excerpt from the OpenSSL docs:

    Several callbacks are executed during ClientHello processing, including the ClientHello, ALPN, and servername callbacks. The ClientHello callback is executed first, then the servername callback, followed by the ALPN callback.

I think it'd be better to explain that the only "useful" thing you can do within sni_callback is to see what sni_name is desired and optionally swap out the context for one with a more appropriate cert chain.
Any information about the selected ALPN protocol has to wait until later in the handshake.

[0] https://docs.python.org/3/library/ssl.html#ssl.SSLContext.sni_callback
[1] https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_tlsext_servername_callback.html

---------- assignee: docs at python components: Documentation, SSL messages: 389231 nosy: docs at python, theandrew168 priority: normal severity: normal status: open title: SSLContext.sni_callback docs inaccurately describe available handshake info type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 11:38:59 2021 From: report at bugs.python.org (Pattesvador) Date: Sun, 21 Mar 2021 15:38:59 +0000 Subject: [New-bugs-announce] [issue43583] make test failures, 2 tests failed: test_embed test_tabnanny Message-ID: <1616341139.4.0.592020830762.issue43583@roundup.psfhosted.org> New submission from Pattesvador : I'm trying to install Python 3.9.2 on an 18.04.5 Ubuntu. I downloaded the Python-3.9.2.tar.xz and followed the readme.txt installation instructions. When executing the "make test" command I get this error:

== Tests result: FAILURE then FAILURE ==
397 tests OK.
2 tests failed: test_embed test_tabnanny
26 tests skipped: test_bz2 test_dbm_gnu test_dbm_ndbm test_devpoll test_idle test_ioctl test_kqueue test_lzma test_msilib test_ossaudiodev test_readline test_smtpnet test_sqlite test_ssl test_startfile test_tcl test_tix test_tk test_ttk_guionly test_ttk_textonly test_turtle test_winconsoleio test_winreg test_winsound test_zipfile64 test_zoneinfo
2 re-run tests: test_embed test_tabnanny
Total duration: 9 min 26 sec
Tests result: FAILURE then FAILURE
Récolte du processus fils perdant 0x55e48cc2b400 PID 15880
Makefile:1199: recipe for target 'test' failed
make: *** [test] Error 2
Retrait du processus fils 0x55e48cc2b400 PID 15880 de la chaîne.

I don't know what to do.
I've read one issue on the topic but I have to admit that I didn't understand a thing. Here's the issue: https://bugs.python.org/issue43001 Thank you for your answers. ---------- messages: 389237 nosy: Pattesvador priority: normal severity: normal status: open title: make test failures, 2 tests failed: test_embed test_tabnanny versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 12:11:00 2021 From: report at bugs.python.org (Grant Edwards) Date: Sun, 21 Mar 2021 16:11:00 +0000 Subject: [New-bugs-announce] [issue43584] Doc description of str.title() upper case vs. title case. Message-ID: <1616343060.26.0.656949080319.issue43584@roundup.psfhosted.org> New submission from Grant Edwards : The documentation for str.title() states that the first character in each word is converted to upper case. That is not correct for recent versions of Python. The first character in each word is converted to title case. Title and upper may be the same for English/ASCII, but other languages have characters for which upper and title case are different. ---------- assignee: docs at python components: Documentation messages: 389242 nosy: docs at python, grant.b.edwards priority: normal severity: normal status: open title: Doc description of str.title() upper case vs. title case.
versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 21 12:38:13 2021 From: report at bugs.python.org (Tobi) Date: Sun, 21 Mar 2021 16:38:13 +0000 Subject: [New-bugs-announce] [issue43585] perf_counter() returns computers uptime Message-ID: <1616344693.53.0.493745901493.issue43585@roundup.psfhosted.org> New submission from Tobi : perf_counter() does not behave as expected ---------- messages: 389248 nosy: txhx38 priority: normal severity: normal status: open versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 00:14:42 2021 From: report at bugs.python.org (Shin Ryu) Date: Mon, 22 Mar 2021 04:14:42 +0000 Subject: [New-bugs-announce] [issue43586] sys.path is weird in Windows 10. Message-ID: <1616386482.98.0.154100807253.issue43586@roundup.psfhosted.org> New submission from Shin Ryu :

import sys
print(sys.path)

Only on Windows, this prints sys.path[0] as python38.zip, not "". (docs.python.org says "As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of PYTHONPATH.") ---------- components: Windows messages: 389275 nosy: RyuSh1n, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: sys.path is weird in Windows 10.
type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 03:12:49 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Mon, 22 Mar 2021 07:12:49 +0000 Subject: [New-bugs-announce] [issue43587] Long string arguments cause nis.map() segfault Message-ID: <1616397169.5.0.857668484113.issue43587@roundup.psfhosted.org> New submission from Xinmeng Xia : nis.maps() with a long string argument will lead to a segfault of the interpreter. See the following example:

=====================================================
Python 3.10.0a6 (default, Mar 19 2021, 11:45:56) [GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import nis;
>>> nis.maps('abs/'*10000000)
Segmentation fault (core dumped)
=====================================================

System: ubuntu 16.04 ---------- components: Library (Lib) messages: 389280 nosy: xxm priority: normal severity: normal status: open title: Long string arguments cause nis.map() segfault type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 04:09:32 2021 From: report at bugs.python.org (junyixie) Date: Mon, 22 Mar 2021 08:09:32 +0000 Subject: [New-bugs-announce] [issue43588] [Subinterpreters]: use static variable under building Python with --with-experimental-isolated-subinterpreters cause crash. Message-ID: <1616400572.08.0.67077600893.issue43588@roundup.psfhosted.org> New submission from junyixie : Using a static module-level variable when building Python with --with-experimental-isolated-subinterpreters causes a crash:

compiler_mod(struct compiler *c, mod_ty mod)
{
    PyCodeObject *co;
    int addNone = 1;
    static PyObject *module;
    if (!module) {
        module = PyUnicode_InternFromString("<module>");
        if (!module)
            return NULL;
    }
    ...
}

---------- components: Subinterpreters messages: 389282 nosy: JunyiXie, vstinner priority: normal severity: normal status: open title: [Subinterpreters]: use static variable under building Python with --with-experimental-isolated-subinterpreters cause crash. type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 05:15:29 2021 From: report at bugs.python.org (Tenzin) Date: Mon, 22 Mar 2021 09:15:29 +0000 Subject: [New-bugs-announce] [issue43589] Using defaultdict as kwarg to function reuses same dictionary every function call Message-ID: <1616404529.38.0.239841546052.issue43589@roundup.psfhosted.org> New submission from Tenzin : When using a `defaultdict` as a kwarg to a function that requires another argument, every call to the function uses the same dictionary instance instead of creating a new one.

```
>>> from collections import defaultdict
>>> def meow(a, b=defaultdict(list)):
...     b[a].append('moo')
...     return b
...
>>> c = meow('hi')
>>> c
defaultdict(<class 'list'>, {'hi': ['moo']})
>>> c = meow('bye')
>>> c
defaultdict(<class 'list'>, {'hi': ['moo'], 'bye': ['moo']})
>>> d = meow('hello')
>>> d
defaultdict(<class 'list'>, {'hi': ['moo'], 'bye': ['moo'], 'hello': ['moo']})
```

Is this the correct behaviour? Occurred in 3.6.12, 3.7.9, 3.8.5, 3.8.6 and 3.9.0.
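What the report above observes is Python's standard mutable-default-argument behaviour: the defaultdict is created once, at function definition time, and shared across every call that omits the argument. The usual fix is a None default:

```python
from collections import defaultdict

def meow(a, b=None):
    if b is None:
        b = defaultdict(list)   # a fresh dict for every call that omits b
    b[a].append('moo')
    return b

c = meow('hi')
d = meow('bye')
print(c is d)   # each call now gets its own dictionary
```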
---------- messages: 389289 nosy: TenzinCHW priority: normal severity: normal status: open title: Using defaultdict as kwarg to function reuses same dictionary every function call type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 06:25:25 2021 From: report at bugs.python.org (Lucijan Drnasin) Date: Mon, 22 Mar 2021 10:25:25 +0000 Subject: [New-bugs-announce] [issue43590] Collapse sidebar issue on https://docs.python.org/3/ Message-ID: <1616408725.68.0.0305800294993.issue43590@roundup.psfhosted.org> New submission from Lucijan Drnasin : Sidebar bug on the Python welcoming page for version 3.9.2. The two-arrow span on the sidebar (the span that has fixed position) goes off the sidebar if I scroll all the way down and then inspect the page. After inspecting (opening dev tools) it appears there is a little bit more room to scroll down, and the span (little two arrows) goes off the page. ---------- messages: 389296 nosy: lucijan345 priority: normal severity: normal status: open title: Collapse sidebar issue on https://docs.python.org/3/ type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 06:25:46 2021 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Mon, 22 Mar 2021 10:25:46 +0000 Subject: [New-bugs-announce] [issue43591] Parser aborts on incomplete/incorrect unicode literals in interactive mode Message-ID: <1616408746.94.0.619968687767.issue43591@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : Incomplete unicode literals abort instead of generating SyntaxError:

(lldb) target create "./python.exe"
Current executable set to '/Users/erlendaasland/src/cpython.git/python.exe' (x86_64).
(lldb) r
Process 98955 launched: '/Users/erlendaasland/src/cpython.git/python.exe' (x86_64)
Python 3.10.0a6+ (heads/main:9a50ef43e4, Mar 22 2021, 11:18:33) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> "\u1f"
Assertion failed: (col_offset >= 0 && (unsigned long)col_offset <= strlen(str)), function byte_offset_to_character_offset, file Parser/pegen.c, line 150.
Process 98955 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = hit program assert
    frame #4: 0x0000000100009bd6 python.exe`byte_offset_to_character_offset(line=0x00000001013f1220, col_offset=7) at pegen.c:150:5
   147      if (!str) {
   148          return 0;
   149      }
-> 150      assert(col_offset >= 0 && (unsigned long)col_offset <= strlen(str));
   151      PyObject *text = PyUnicode_DecodeUTF8(str, col_offset, "replace");
   152      if (!text) {
   153          return 0;
Target 0: (python.exe) stopped.
(lldb) p col_offset
(Py_ssize_t) $0 = 7
(lldb) p str
(const char *) $1 = 0x00000001013f1250 "\"\\u1f\""
(lldb) p (size_t) strlen(str)
(size_t) $2 = 6

Python 3.9 behaviour:

Python 3.9.2 (v3.9.2:1a79785e3e, Feb 19 2021, 09:06:10) [Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> "\u1f"
  File "<stdin>", line 1
    "\u1f"
         ^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-3: truncated \uXXXX escape

Git bisect says the regression was introduced by this commit:

commit 08fb8ac99ab03d767aa0f1cfab3573eddf9df018
Author: Pablo Galindo
Date:   Thu Mar 18 01:03:11 2021 +0000

    bpo-42128: Add 'missing :' syntax error message to match statements (GH-24733)

I made a workaround (see attached patch), but I guess that's far from the correct solution :) ---------- components: Unicode files: patch.diff keywords: patch messages: 389297 nosy: erlendaasland, ezio.melotti, lys.nikolaou, pablogsal, vstinner priority: normal severity: normal status: open title: Parser aborts on incomplete/incorrect unicode literals in interactive mode type: crash versions: Python 3.10 Added file: https://bugs.python.org/file49900/patch.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 06:33:02 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 22 Mar 2021 10:33:02 +0000 Subject: [New-bugs-announce] [issue43592] test_importlib: test_multiprocessing_pool_circular_import() fails with "Too many open files" error on os.pipe() Message-ID: <1616409182.92.0.0882668450853.issue43592@roundup.psfhosted.org> New submission from STINNER Victor : x86-64 macOS 3.x: https://buildbot.python.org/all/#/builders/366/builds/969 Build triggered by the commit 88d9983b561cd59e5f186d98227de0c1a022b498 which changes PyImport_Import(). The buildbot is running on macOS 10.15.7 (Darwin Kernel Version 19.6.0) with a limit of 256 file descriptors.
The latest successful build is build 968, whereas the RLIMIT_NOFILE resource soft limit was also set to 256: https://buildbot.python.org/all/#/builders/366/builds/968

test.pythoninfo:

* os.uname: posix.uname_result(sysname='Darwin', nodename='mattb-mbp2', release='19.6.0', version='Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64', machine='x86_64')
* sysconfig[HOST_GNU_TYPE]: x86_64-apple-darwin19.6.0
* platform.platform: macOS-10.15.7-x86_64-i386-64bit
* resource.RLIMIT_NOFILE: (256, 9223372036854775807)

FAIL: test_multiprocessing_pool_circular_import (test.test_importlib.test_threaded_import.ThreadedImportTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/test/test_importlib/test_threaded_import.py", line 258, in test_multiprocessing_pool_circular_import
    script_helper.assert_python_ok(fn)
  File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/test/support/script_helper.py", line 160, in assert_python_ok
    return _assert_python(True, *args, **env_vars)
  File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/test/support/script_helper.py", line 145, in _assert_python
    res.fail(cmd_line)
  File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/test/support/script_helper.py", line 72, in fail
    raise AssertionError("Process return code is %d\n"
AssertionError: Process return code is 1
command line: ['/Users/buildbot/buildarea/3.x.billenstein-macos/build/python.exe', '-X', 'faulthandler', '-I', '/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/test/test_importlib/partial/pool_in_threads.py']
stdout:
---
---
stderr:
---
Traceback (most recent call last):
  File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/test/test_importlib/partial/pool_in_threads.py", line 9, in t
  File
"/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/context.py", line 119, in Pool File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/pool.py", line 196, in __init__ File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/context.py", line 113, in SimpleQueue File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/queues.py", line 341, in __init__ self._reader, self._writer = connection.Pipe(duplex=False) File "/Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/connection.py", line 532, in Pipe fd1, fd2 = os.pipe() OSError: [Errno 24] Too many open files /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 110 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-56i6i4ap': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-0au5otkl': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-vcv0xwbi': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-vxfb4ks9': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: 
resource_tracker: '/mp-5e2_0z1f': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-6vsgax4k': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-mq51g4b_': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-eik0n2aq': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-g7oeb4aw': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-tiabsvgr': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-ykag01b2': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/buildbot/buildarea/3.x.billenstein-macos/build/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-nl2kdidn': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) --- ---------- components: Tests messages: 389301 nosy: vstinner priority: normal severity: normal status: open title: test_importlib: test_multiprocessing_pool_circular_import() fails with "Too many open files" error on 
os.pipe() versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 09:29:47 2021 From: report at bugs.python.org (ggardet) Date: Mon, 22 Mar 2021 13:29:47 +0000 Subject: [New-bugs-announce] [issue43593] pymalloc is not aware of Memory Tagging Extension (MTE) and crashes Message-ID: <1616419787.13.0.506382582675.issue43593@roundup.psfhosted.org> New submission from ggardet : When the Memory Tagging Extension (MTE) [0] is enabled on aarch64, pymalloc makes programs crash. I noticed it while trying to use GDB with MTE enabled in user-space [1], and gdb crashed on start-up. Rebuilding Python (3.8) with the '--without-pymalloc' option works around the problem. For glibc, you need version 2.33 or later and must build glibc with the '--enable-memory-tagging' option. I guess that patches similar to glibc's are required for pymalloc. [0]: https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/enhancing-memory-safety [1]: https://en.opensuse.org/ARM_architecture_support#User-space_support ---------- components: Library (Lib) messages: 389312 nosy: ggardet priority: normal severity: normal status: open title: pymalloc is not aware of Memory Tagging Extension (MTE) and crashes versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 09:42:00 2021 From: report at bugs.python.org (Erez Zinman) Date: Mon, 22 Mar 2021 13:42:00 +0000 Subject: [New-bugs-announce] [issue43594] A metaclass that inherits both `ABC` and `ABCMeta` breaks on `__subclasscheck__` Message-ID: <1616420520.96.0.331295909751.issue43594@roundup.psfhosted.org> New submission from Erez Zinman : Consider the following example: ``` from abc import ABCMeta, ABC class MetaclassMixin(ABC): pass class Meta(MetaclassMixin, ABCMeta): pass class A(metaclass=Meta): pass ``` Then the call
`isinstance(A, Meta)` returns `True` but `isinstance(1, Meta)` raises >>> TypeError: __subclasscheck__() missing 1 required positional argument: 'subclass' Checked on 3.6.9, 3.8.0 & 3.8.8 ---------- components: Library (Lib) messages: 389314 nosy: erezinman priority: normal severity: normal status: open title: A metaclass that inherits both `ABC` and `ABCMeta` breaks on `__subclasscheck__` versions: Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 09:53:50 2021 From: report at bugs.python.org (Erez Zinman) Date: Mon, 22 Mar 2021 13:53:50 +0000 Subject: [New-bugs-announce] [issue43595] Can not add a metaclass that inherits both ABCMeta & ABC to a Union Message-ID: <1616421230.45.0.492887326456.issue43595@roundup.psfhosted.org> New submission from Erez Zinman : Related to Issue #43594. When running the following code ``` from abc import ABCMeta, ABC from typing import Union class MetaclassMixin(ABC): pass class Meta(MetaclassMixin, ABCMeta): pass print(Union[str, Meta]) ``` An exception is raised >>> TypeError: descriptor '__subclasses__' of 'type' object needs an argument Tested on v3.6.9 ---------- messages: 389317 nosy: erezinman priority: normal severity: normal status: open title: Can not add a metaclass that inherits both ABCMeta & ABC to a Union versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 12:16:06 2021 From: report at bugs.python.org (R. Samuel Klatchko) Date: Mon, 22 Mar 2021 16:16:06 +0000 Subject: [New-bugs-announce] [issue43596] change assertRaises message when wrong exception is raised Message-ID: <1616429766.92.0.297524624601.issue43596@roundup.psfhosted.org> New submission from R. 
Samuel Klatchko : Right now, this code: class FooError(Exception): pass class BarError(Exception): pass def test_me(self): with self.assertRaises(FooError): raise BarError("something") will have the error "BarError: something" with no indication that an exception was expected but just that we got the wrong one. It would be helpful to change the message to something like: Expected exception of type FooError but exception BarError('something') was raised. ---------- messages: 389328 nosy: rsk2 priority: normal severity: normal status: open title: change assertRaises message when wrong exception is raised _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 19:08:07 2021 From: report at bugs.python.org (Tarun Chinmai Sekar) Date: Mon, 22 Mar 2021 23:08:07 +0000 Subject: [New-bugs-announce] [issue43597] robotparser should support specifying SSL context Message-ID: <1616454487.19.0.339190443454.issue43597@roundup.psfhosted.org> Change by Tarun Chinmai Sekar : ---------- nosy: Tchinmai7 priority: normal severity: normal status: open title: robotparser should support specifying SSL context type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 20:41:55 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 23 Mar 2021 00:41:55 +0000 Subject: [New-bugs-announce] [issue43598] test_subprocess logs multiple ResourceWarning warnings Message-ID: <1616460115.42.0.31373302099.issue43598@roundup.psfhosted.org> New submission from STINNER Victor : $ ./python -m test test_subprocess -v (...) test_send_signal_race2 (test.test_subprocess.POSIXProcessTestCase) ... 
/home/vstinner/python/master/Lib/subprocess.py:1062: ResourceWarning: subprocess 137193 is still running _warn("subprocess %s is still running" % self.pid, ResourceWarning: Enable tracemalloc to get the object allocation traceback ok test_pipesize_default (test.test_subprocess.ProcessTestCase) ... /home/vstinner/python/master/Lib/unittest/case.py:549: ResourceWarning: unclosed file <_io.BufferedReader name=8> method() ResourceWarning: Enable tracemalloc to get the object allocation traceback /home/vstinner/python/master/Lib/unittest/case.py:549: ResourceWarning: unclosed file <_io.BufferedReader name=10> method() ResourceWarning: Enable tracemalloc to get the object allocation traceback ok test_pipesize_default (test.test_subprocess.ProcessTestCaseNoPoll) ... /home/vstinner/python/master/Lib/unittest/case.py:549: ResourceWarning: unclosed file <_io.BufferedReader name=8> method() ResourceWarning: Enable tracemalloc to get the object allocation traceback /home/vstinner/python/master/Lib/unittest/case.py:549: ResourceWarning: unclosed file <_io.BufferedReader name=10> method() ResourceWarning: Enable tracemalloc to get the object allocation traceback ok (...) ---------- components: Tests messages: 389359 nosy: vstinner priority: normal severity: normal status: open title: test_subprocess logs multiple ResourceWarning warnings versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 21:54:58 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 23 Mar 2021 01:54:58 +0000 Subject: [New-bugs-announce] [issue43599] Setting long domain of locale.dgettext() crashes Python interpreter Message-ID: <1616464498.74.0.582898049043.issue43599@roundup.psfhosted.org> New submission from Xinmeng Xia : Setting the first argument of locale.dgettext() long string, Python interpreter crashes. 
====================================================== Python 3.10.0a6 (default, Mar 19 2021, 11:45:56) [GCC 7.5.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import locale;locale.dgettext('abs'*10000000,'') Segmentation fault (core dumped) ====================================================== System: Ubuntu 16.04 BTW, the locale module's API seems to be inconsistent between Ubuntu and Mac OS. E.g. there is no dgettext() for Python on Mac OS. ---------- components: Library (Lib) messages: 389363 nosy: xxm priority: normal severity: normal status: open title: Setting long domain of locale.dgettext() crashes Python interpreter type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 22 22:03:51 2021 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 23 Mar 2021 02:03:51 +0000 Subject: [New-bugs-announce] [issue43600] IDLE: fix highlight location for f-string field errors Message-ID: <1616465031.1.0.790398051902.issue43600@roundup.psfhosted.org> New submission from Terry J. Reedy : Spinoff from #41064. In current Python, the f'{*x}' traceback ends with (*x) ^ SyntaxError: f-string: can't use starred expression here. For f'{**x}', the message is "f-string: invalid syntax" and the ^ is also under the 2nd character in the replacement expression actually parsed (with a restricted grammar). The Python error handler must special-case a syntax error message beginning with 'f-string:', search the input line for {...}, and add its offset. IDLE currently highlights "'", the 2nd char of the original code, instead of '*', the 2nd char of the e.text replacement. It needs to also adjust the offset. 
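The offset adjustment described above could look roughly like the following sketch. The helper name and the recovery strategy are hypothetical, not IDLE's actual code; it assumes `e.text` holds the synthesized parenthesized expression that the f-string parser saw, as in the `(*x)` example.

```python
def adjust_fstring_caret(source_line, err_text, err_offset):
    # Hypothetical helper: map a caret position inside the synthesized
    # f-string replacement expression (e.g. "(*x)") back to a 0-based
    # column in the original source line (e.g. "f'{*x}'").
    fragment = err_text.strip()[1:-1]  # drop the synthesized parens
    pos = source_line.find(fragment)
    if pos == -1:
        return err_offset - 1  # fall back to the unadjusted caret
    # err_offset is 1-based within err_text; subtract one more for the
    # synthesized opening paren that is not present in the source.
    return pos + (err_offset - 2)

# For f'{*x}' the parser reports "(*x)" with the caret on column 2,
# which maps back to the '*' in the original source line.
col = adjust_fstring_caret("f'{*x}'", "(*x)", 2)
print(col, "f'{*x}'"[col])
```

This only handles the single-field case; a real fix would also need to pick the right `{...}` field when a line contains several, which is part of what makes the issue non-trivial.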
---------- assignee: terry.reedy components: IDLE messages: 389365 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: fix highlight location for f-string field errors type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 00:04:11 2021 From: report at bugs.python.org (junyixie) Date: Tue, 23 Mar 2021 04:04:11 +0000 Subject: [New-bugs-announce] [issue43601] Tools/c-analyzer/check-c-globals.py run throws an exception Message-ID: <1616472251.02.0.323968573256.issue43601@roundup.psfhosted.org> New submission from junyixie : How to use Tools/c-analyzer/check-c-globals.py? Following the readme: python3 Tools/c-analyzer/check-c-globals.py /Users/xiejunyi/cpython/Tools/c-analyzer/c_common/tables.py:236: FutureWarning: Possible nested set at position 12 _COLSPEC_RE = re.compile(textwrap.dedent(r''' Traceback (most recent call last): File "Tools/c-analyzer/check-c-globals.py", line 33, in (cmd, cmd_kwargs, verbosity, traceback_cm) = parse_args() File "Tools/c-analyzer/check-c-globals.py", line 16, in parse_args _cli_check(parser, checks=''), File "/Users/xiejunyi/cpython/Tools/c-analyzer/cpython/__main__.py", line 119, in _cli_check return c_analyzer._cli_check(parser, CHECKS, **kwargs, **FILES_KWARGS) TypeError: _cli_check() got multiple values for argument 'checks' ---------- components: Demos and Tools messages: 389370 nosy: JunyiXie priority: normal severity: normal status: open title: Tools/c-analyzer/check-c-globals.py run throws an exception type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 02:30:40 2021 From: report at bugs.python.org (Sergey B Kirpichev) Date: Tue, 23 Mar 2021 06:30:40 +0000 Subject: [New-bugs-announce] [issue43602] Include Decimal's in numbers.Real 
Message-ID: <1616481040.42.0.772549255299.issue43602@roundup.psfhosted.org> New submission from Sergey B Kirpichev : Commit 82417ca9b2 includes Decimal's in the numbers tower, but only as an implementation of the abstract numbers.Number. The mentioned reason is "Decimals are not interoperable with floats" (see comments in the numbers.py as well), i.e. there is no lossless conversion (in general, in both directions). While this seems to be reasonable, there are arguments against: 1) The numbers module docs doesn't assert there should be a lossless conversion for implementations of same abstract type. (Perhaps, it should.) This obviously may be assumed for cases, where does exist an exact representation (integers, rationals and so on) - but not for real numbers (or complex), where representations are inexact (unless we consider some subsets of real numbers, e.g. some real finite extension of rationals - I doubt such class can represent numbers.Real). (Unfortunately, the Scheme distinction of exact/inexact was lost in PEP 3141.) 2) By same reason, I think, neither binary-based multiprecision arithmetics package can represent numbers.Real: i.e. gmpy2.mpfr, mpmath.mpf and so on. (In general, there is no lossless conversion float's, in both directions.) 3) That might confuse users (why 10-th base arbitrary precision floating point arithmetic can't represent real numbers?). 4) Last, but not least, even some parts of stdlib uses both types in an interoperable way, e.g. 
Fraction constructor: elif isinstance(numerator, (float, Decimal)): # Exact conversion self._numerator, self._denominator = numerator.as_integer_ratio() return self ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 389372 nosy: Sergey.Kirpichev, docs at python priority: normal severity: normal status: open title: Include Decimal's in numbers.Real versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 10:39:51 2021 From: report at bugs.python.org (aaa dsghsu) Date: Tue, 23 Mar 2021 14:39:51 +0000 Subject: [New-bugs-announce] [issue43603] safgf Message-ID: <1616510391.68.0.379076746309.issue43603@roundup.psfhosted.org> Change by aaa dsghsu : ---------- components: C API files: 442724 nosy: aaadsghsu priority: normal severity: normal status: open title: safgf type: performance versions: Python 3.6 Added file: https://bugs.python.org/file49902/442724 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 11:26:30 2021 From: report at bugs.python.org (=?utf-8?q?David_Luke=C5=A1?=) Date: Tue, 23 Mar 2021 15:26:30 +0000 Subject: [New-bugs-announce] [issue43604] Fix tempfile.mktemp() Message-ID: <1616513190.72.0.841940564849.issue43604@roundup.psfhosted.org> New submission from David Luke? 
: I recently came across a non-testing use case for `tempfile.mktemp()` where I struggle to find a viable alternative -- temporary named pipes (FIFOs): ``` import os import tempfile import subprocess as sp fifo_path = tempfile.mktemp() os.mkfifo(fifo_path, 0o600) try: proc = sp.Popen(["cat", fifo_path], stdout=sp.PIPE, text=True) with open(fifo_path, "w") as fifo: for c in "Kočka leze dírou, pes oknem.": print(c, file=fifo) proc.wait() finally: os.unlink(fifo_path) for l in proc.stdout: print(l.strip()) ``` (`cat` is obviously just a stand-in for some useful program which needs to read from a file, but you want to send it input from Python.) `os.mkfifo()` needs a path which doesn't point to an existing file, so it's not possible to use a `tempfile.NamedTemporaryFile(delete=False)`, close it, and pass its `.name` attribute to `mkfifo()`. I know there has been some discussion regarding `mktemp()` in the relatively recent past (see the Python-Dev thread starting with ). There has also been some confusion as to what actually makes it unsafe (see ). Before the discussion petered out, it looked like people were reaching a consensus "that mktemp() could be made secure by using a longer name generated by a secure random generator" (quoting from the previous link). A secure `mktemp` could be as simple as (see ): ``` def mktemp(suffix='', prefix='tmp', dir=None): if dir is None: dir = gettempdir() return _os.path.join(dir, prefix + secrets.token_urlsafe(ENTROPY_BYTES) + suffix) ``` There's been some discussion as to what `ENTROPY_BYTES` should be. I like Steven D'Aprano's suggestion (see ) of having an overkill default just to be on the safe side, which can be overridden if needed. Of course, the security implications of lowering it should be clearly documented. Fixing `mktemp` would make it possible to get rid of its hybrid deprecated (in the docs) / not deprecated (in code) status, which is somewhat confusing for users. 
Speaking from experience -- when I realized I needed it, the deprecation notice led me down this rabbit hole of reading mailing list threads and submitting issues :) People could stop losing time worrying about `mktemp` and trying to weed it out whenever they come across it (see e.g. https://bugs.python.org/issue42278). So I'm wondering whether there would be interest in: 1. A PR which would modify `mktemp` along the lines sketched above, to make it safe in practice. Along with that, it would probably make sense to undeprecate it in the docs, or at least indicate that while users should prefer `mkstemp` when they're fine with the file being created for them, `mktemp` is alright in cases where this is not acceptable. 2. Following that, possibly a PR which would encapsulate the new `mktemp` + `mkfifo` into a `TemporaryNamedPipe` or `TemporaryFifo`: ``` import os import tempfile import subprocess as sp with tempfile.TemporaryNamedPipe() as fifo: proc = sp.Popen(["cat", fifo.name], stdout=sp.PIPE, text=True) for c in "Kočka leze dírou, pes oknem.": print(c, file=fifo) proc.wait() for l in proc.stdout: print(l.strip()) ``` (Caveat: opening the FIFO for writing cannot happen in `__enter__`, it would have to be delayed until the first call to `fifo.write()` because it hangs if no one is reading from it.) ---------- components: Library (Lib) messages: 389393 nosy: David Lukeš 
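Proposal 2 could be prototyped outside the stdlib in a few lines. The following is a hypothetical sketch (the class name and the `secrets`-based naming are assumptions taken from the discussion above, not an existing tempfile API), and it is POSIX-only since it relies on `os.mkfifo()`:

```python
import os
import secrets
import tempfile

class TemporaryNamedPipe:
    """Create a FIFO at an unpredictable path; remove it on exit."""

    def __init__(self, suffix="", prefix="tmp", dir=None):
        dir = tempfile.gettempdir() if dir is None else dir
        # 32 random bytes plays the role of the "overkill default"
        # entropy discussed above.
        self.name = os.path.join(
            dir, prefix + secrets.token_urlsafe(32) + suffix)

    def __enter__(self):
        os.mkfifo(self.name, 0o600)
        return self

    def __exit__(self, *exc_info):
        os.unlink(self.name)
```

As the caveat notes, opening the FIFO for writing still has to wait until a reader exists, so this sketch only creates the node in `__enter__` and hands back its path via `.name`.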
priority: normal severity: normal status: open title: Fix tempfile.mktemp() type: security versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 13:26:04 2021 From: report at bugs.python.org (Bruno Loff) Date: Tue, 23 Mar 2021 17:26:04 +0000 Subject: [New-bugs-announce] [issue43605] Issue of scopes unclear in documentation, or wrongly implemented Message-ID: <1616520364.91.0.0936665126175.issue43605@roundup.psfhosted.org> New submission from Bruno Loff : Python 3.9.2 seems to be giving me some unexpected difficulty evaluating generators inside evals. Here is the example: ```python def func(l): def get(i): return l[i] print(sum(get(i) for i in range(len(l)))) # works as expected, prints 10 print(eval("get(0) + get(1) + get(2) + get(3)")) # works just fine, prints 10 # if __globals is set to locals(), it still works, prints 10 print(eval("sum(get(i) for i in range(len(l)))", locals())) # This will complain print(eval("sum(get(i) for i in range(len(l)))")) func([1,2,3,4]) ``` The last line gives the following error ``` Traceback (most recent call last): File "/something/test_eval.py", line 28, in func([1,2,3,4]) File "/something/test_eval.py", line 10, in func print(eval("sum(get(i) for i in range(len(l)))")) # this does not work... bug? File "", line 1, in File "", line 1, in NameError: name 'get' is not defined ``` Any kind of generator-based code wont work. The following lines would give the same an error: ``` print(eval("sum(get(i) for i in range(len(l)))"), globals(), locals()) print(eval("[get(i) for i in range(len(l))]")) print(eval("{i:get(i) for i in range(len(l))}")) ``` Any clue what is happening? The documentation on eval seems to give no insight on why this behavior is as is. This really feels like an issue, at the very least, it's an issue in the documentation. 
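What is happening in the report above: code compiled by `eval()` treats every name as a global, and the generator expression runs in a fresh frame whose globals are whatever mapping `eval()` received, so `func`'s locals (`get`, `l`) are invisible to the genexpr unless they are passed in as the globals. A sketch of the failure and of the usual workaround (merging `locals()` into the globals mapping):

```python
def func(l):
    def get(i):
        return l[i]
    # Fails: the generator expression's frame looks `get` and `l` up in
    # the module globals, where neither is defined.
    try:
        eval("sum(get(i) for i in range(len(l)))")
    except NameError as exc:
        print("without namespace:", exc)
    # Works: hand eval() a globals mapping that includes the locals.
    namespace = {**globals(), **locals()}
    return eval("sum(get(i) for i in range(len(l)))", namespace)

result = func([1, 2, 3, 4])
print(result)
```

This is behavior-as-designed rather than a bug, which is why the report ends up being (at least) a documentation issue: the `eval()` docs do not spell out the interaction with the implicit function scope that comprehensions and generator expressions create.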
---------- messages: 389397 nosy: bruno.loff priority: normal severity: normal status: open title: Issue of scopes unclear in documentation, or wrongly implemented type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 15:06:15 2021 From: report at bugs.python.org (FRANK BENNETT) Date: Tue, 23 Mar 2021 19:06:15 +0000 Subject: [New-bugs-announce] [issue43606] initial huge window && no widgets visible Message-ID: <1616526375.72.0.973402567023.issue43606@roundup.psfhosted.org> New submission from FRANK BENNETT : With any PySimpleGUI/tkinter/tk *.py script, the initial window is huge, and at that size no widgets are visible. fwb at fw:/s/opt/cpython$ uname -r 5.4.0-67-generic fwb at fw:/s/opt/cpython$ cat /etc/issue Ubuntu 20.04.2 LTS \n \l fwb at fw:/s/opt/cpython$ ./python -V Python 3.10.0a6+ fwb at fw:/s/opt/cpython$ cat .git/config [core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true [remote "origin"] url = https://github.com/bennett78/cpython.git fetch = +refs/heads/*:refs/remotes/origin/* [branch "master"] remote = origin merge = refs/heads/master What sets initial window configuration ? 
---------- components: Tkinter, Windows files: t4.py messages: 389403 nosy: bennett78, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: initial huge window && no widgets visible type: behavior versions: Python 3.10, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49906/t4.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 18:56:49 2021 From: report at bugs.python.org (D Levine) Date: Tue, 23 Mar 2021 22:56:49 +0000 Subject: [New-bugs-announce] [issue43607] urllib's request.pathname2url not compatible with extended-length Windows file paths Message-ID: <1616540209.08.0.34606490658.issue43607@roundup.psfhosted.org> New submission from D Levine : Windows file paths are limited to 256 characters, and one of Windows's prescribed methods to address this is to prepend "\\?\" before a Windows absolute path (see: https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation) urllib.request.pathname2url raises an error on such paths as this function calls nturl2path.py's pathname2url function which explicitly checks that the number of characters before the ":" in a Windows path is precisely one, which is, of course, not the case if you are using an extended-length path (e.g. "\\?\C:\Python39"). As a result, urllib cannot handle pathname2url conversion for some valid Windows paths. 
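The failing check lives in `nturl2path.pathname2url()`, which is importable on any platform, so the behavior can be sketched portably. Exact output and exception text are version-dependent; the point is only that a one-character drive is accepted while the `\\?\` prefix was, as of this report, rejected:

```python
import nturl2path

# A conventional absolute Windows path converts fine.
url = nturl2path.pathname2url(r'C:\Python39')
print(url)  # historically ///C:/Python39

# As of the report (3.9), an extended-length path trips the
# "exactly one character before the colon" check in nturl2path.
try:
    nturl2path.pathname2url(r'\\?\C:\Python39')
    extended_ok = True
except OSError as exc:
    extended_ok = False
    print("rejected:", exc)
```

A fix would presumably need to strip or special-case the `\\?\` (and `\\?\UNC\`) prefixes before the drive-letter check.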
---------- components: Windows messages: 389415 nosy: levineds, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: urllib's request.pathname2url not compatible with extended-length Windows file paths type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 21:48:29 2021 From: report at bugs.python.org (Sebastian Berg) Date: Wed, 24 Mar 2021 01:48:29 +0000 Subject: [New-bugs-announce] [issue43608] `bytes_concat` and Buffer cleanup Message-ID: <1616550509.19.0.257186689995.issue43608@roundup.psfhosted.org> New submission from Sebastian Berg : `pybytes_concate` currently uses the following code to get the data: va.len = -1; vb.len = -1; if (PyObject_GetBuffer(a, &va, PyBUF_SIMPLE) != 0 || PyObject_GetBuffer(b, &vb, PyBUF_SIMPLE) != 0) { PyErr_Format(PyExc_TypeError, "can't concat %.100s to %.100s", Py_TYPE(b)->tp_name, Py_TYPE(a)->tp_name); goto done; } I don't actually know if it is realistically possible to trigger issues here (I ended up here by chasing the wrong thing). But this (and the identical code in `bytearray`) strictly rely on `view->len` not being modified on error (or else may not clean `va`)! That seems wrong to me? Although, I now saw that `PyBuffer_GetBuffer` says: If the exporter cannot provide a buffer of the exact type, it MUST raise PyExc_BufferError, set view->obj to NULL and return -1. Pretty much all code in NumPy (and cpython as far as I can tell), will guarantee that `obj` (and `len` probably) is untouched on error, but it will not set it to NULL! I can see some wisdom in NULL'ing `view->obj` since it means the caller can call `PyBuffer_Release` unconditionally (but then we have to explicitly do that!). But realistically, it seems to me the realistic thing is to say that a caller must never release an unexported buffer and make no assumption about its content? 
(Which doesn't mean I won't ensure NumPy will keep `len` and `obj` unmodified or NULL `obj` on error.) ---------- components: C API messages: 389428 nosy: seberg priority: normal severity: normal status: open title: `bytes_concat` and Buffer cleanup versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 22:14:47 2021 From: report at bugs.python.org (midori) Date: Wed, 24 Mar 2021 02:14:47 +0000 Subject: [New-bugs-announce] [issue43609] ast.unparse-ing a FunctionType gives ambiguous result Message-ID: <1616552087.93.0.261462178392.issue43609@roundup.psfhosted.org> New submission from midori : Hi all, this is probably my first issue here, so don't blame me if I do something wrong lol The ast.FunctionType gives syntax like (a, b) -> c for function types, this is ok, and also since Python 3.10 we can use X | Y to denote unions, this is ok. So Given the following two trees: fun1 = ast.FunctionType( argtypes=[], returns=ast.BinOp( left=ast.Name(id='int'), op=ast.BitOr(), right=ast.Name(id='float'), ) ) fun2 = ast.BinOp( left=ast.FunctionType( argtypes=[], returns=ast.Name(id='int'), ), op=ast.BitOr(), right=ast.Name(id='float'), ) Calling: print(ast.unparse(fun1)) print(ast.unparse(fun2)) The results are these: () -> int | float () -> int | float So there is some ambiguity. By feeding this string to ast.parse(mode='func_type'), I know that it means "returning a union". Don't know if there is any impact to simply add a pair of parens, or does this problem even matters at all. I tested it using Python 3.10 a6 and Python 3.9.2. 
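The ambiguity is easy to reproduce directly from the report's two trees; this sketch just assembles them and prints both unparsings (on the versions the reporter tested, 3.9/3.10, they come out identical):

```python
import ast

# Tree 1: a function type returning the union int | float.
fun1 = ast.FunctionType(
    argtypes=[],
    returns=ast.BinOp(left=ast.Name(id='int'), op=ast.BitOr(),
                      right=ast.Name(id='float')))

# Tree 2: the union of (a function type returning int) and float.
fun2 = ast.BinOp(
    left=ast.FunctionType(argtypes=[], returns=ast.Name(id='int')),
    op=ast.BitOr(),
    right=ast.Name(id='float'))

print(ast.unparse(fun1))
print(ast.unparse(fun2))
```

Since `ast.parse(..., mode='func_type')` resolves the string in favor of tree 1, emitting parentheses around the `FunctionType` inside a larger expression (tree 2) would be enough to make the round trip unambiguous.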
---------- components: Library (Lib) messages: 389429 nosy: Batuhan Taskaya, cleoold priority: normal severity: normal status: open title: ast.unparse-ing a FunctionType gives ambiguous result type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 23 23:04:29 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Wed, 24 Mar 2021 03:04:29 +0000 Subject: [New-bugs-announce] [issue43610] Ctrl C makes interpreter exit Message-ID: <1616555069.73.0.540309240427.issue43610@roundup.psfhosted.org> New submission from Xinmeng Xia : Python interpreter will exit when using Ctrl C to interrupt some Python module functions with read operations. e.g. sndhdr.what(0), pdb.find_function('abs/'*100000,False), mimetypes.read_mime_types(0). This is not the expected behavior. Ctrl C is to raise a KeyboardInterrupt, it should not crash Python and make interpreter exit. Reproduce: 1. type 'python3' in command console; 2. type 'import sndhdr;sndhdr.what(0)' 3. type ctrl C Expected behavior: type ctrl c, raise a KeyboardInterrupt, Python does not exit. ======================================== xxm at xxm-System-Product-Name:~$ python Python 3.9.2 (default, Mar 12 2021, 15:08:35) [GCC 7.5.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> KeyboardInterrupt >>> ======================================== Unexpected behavior: type ctrl c, raise a KeyboardInterrupt, Python exits. =========================================================== xxm at xxm-System-Product-Name:~$ python Python 3.9.2 (default, Mar 12 2021, 15:08:35) [GCC 7.5.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import sndhdr;sndhdr.what(0) ^CTraceback (most recent call last): File "", line 1, in File "/home/xxm/Desktop/apifuzz/Python-3.9.2/Lib/sndhdr.py", line 54, in what res = whathdr(filename) File "/home/xxm/Desktop/apifuzz/Python-3.9.2/Lib/sndhdr.py", line 61, in whathdr h = f.read(512) KeyboardInterrupt >>> xxm at xxm-System-Product-Name:~$ =========================================================== System: Ubuntu 16.04 ---------- components: Library (Lib) messages: 389431 nosy: xxm priority: normal severity: normal status: open title: Ctrl C makes interpreter exit type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 01:36:13 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Wed, 24 Mar 2021 05:36:13 +0000 Subject: [New-bugs-announce] [issue43611] Function tcflow() in module termios can not be interrupted when the second argument is 0 Message-ID: <1616564173.74.0.426077496185.issue43611@roundup.psfhosted.org> New submission from Xinmeng Xia : In Ubuntu 16.04, termios.tcflow(1, 0) cannot be interrupted by Ctrl C, Ctrl D, Ctrl Z. It works well on Mac OS (Ctrl C can interrupt it on Mac OS). Reproduce: 1. type 'python3' in command console; 2. type 'import termios; termios.tcflow(1, 0)' 3. try 'Ctrl C', 'Ctrl D', 'Ctrl Z' ========================================================================= xxm at xxm-System-Product-Name:~$ '/home/xxm/Desktop/apifuzz/Python-3.10.0a6/python' Python 3.10.0a6 (default, Mar 19 2021, 11:45:56) [GCC 7.5.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import termios >>> termios.tcflow(1, 0) ========================================================================= Expected result: this function can be interrupted or stopped by Ctrl C, Ctrl D, Ctrl Z. 
Actual result: No response for Ctrl C, Ctrl D, Ctrl Z System: Ubuntu 16.04 ---------- components: Library (Lib) messages: 389436 nosy: xxm priority: normal severity: normal status: open title: Function tcflow() in module termios can not be interrupted when the second argument is 0 type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 02:04:20 2021 From: report at bugs.python.org (Ruben Vorderman) Date: Wed, 24 Mar 2021 06:04:20 +0000 Subject: [New-bugs-announce] [issue43612] zlib.compress should have a wbits argument Message-ID: <1616565860.52.0.251244097489.issue43612@roundup.psfhosted.org> New submission from Ruben Vorderman : zlib.compress can currently only be used to output zlib blocks. Arguably `zlib.compress(my_data, level, wbits=-15)` is even more useful, as it gives you a raw deflate block. That is quite interesting if you are writing your own file format and want to use compression, but would like to use a different hash. Also, gzip.compress(data, level, mtime) is extremely slow due to it instantiating a GzipFile object which then streams a bytes object, explicitly not taking advantage of the fact that the bytes object is entirely in memory already (I will create another bug for this). zlib.compress(my_data, level, wbits=31) should be faster in all possible circumstances, but that option is not available now. ---------- components: Library (Lib) messages: 389437 nosy: rhpvorderman priority: normal severity: normal status: open title: zlib.compress should have a wbits argument versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 02:27:19 2021 From: report at bugs.python.org (Ruben Vorderman) Date: Wed, 24 Mar 2021 06:27:19 +0000 Subject: [New-bugs-announce] [issue43613] gzip.compress and gzip.decompress are sub-optimally implemented. 
Message-ID: <1616567239.03.0.950381696666.issue43613@roundup.psfhosted.org> New submission from Ruben Vorderman : When working on python-isal which aims to provide faster drop-in replacements for the zlib and gzip modules I found that the gzip.compress and gzip.decompress are suboptimally implemented which hurts performance. gzip.compress and gzip.decompress both do the following things: - Instantiate a BytesIO object to mimick a file - Instantiate a GzipFile object to compress or read the file. That means there is way more Python code involved than strictly necessary. Also the 'data' is already fully in memory, but the data is streamed anyway. That is quite a waste. I propose the following: - The documentation should make it clear that zlib.decompress(... ,wbits=31) and zlib.compress(..., wbits=31) (after 43612 has been addressed), are both quicker but come with caveats. zlib.compress can not set mtime. zlib.decompress does not take multimember gzip into account. - For gzip.compress -> The GzipFile._write_gzip_header function should be moved to a module wide _gzip_header function that returns a bytes object. GzipFile._write_gzip_header can call this function. gzip.compress can also call this function to create a header. gzip.compress than calls zlib.compress(data, wbits=-15) (after 43612 has been fixed) to create a raw deflate block. A gzip trailer can be easily created by calling zlib.crc32(data) and len(data) & 0xffffffff and packing those into a struct. See for an example implementation here: https://github.com/pycompression/python-isal/blob/v0.8.0/src/isal/igzip.py#L242 -> For gzip.decompress it becomes quite more involved. A read_gzip_header function can be created, but the current implementation returns EOFErrors if the header is incomplete due to a truncated file instead of BadGzipFile errors. This makes it harder to implement something that is not a major break from current gzip.decompress. Apart from the header, the implementation is straightforward. 
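For the compress side, the whole proposal fits in a short sketch. This is a hypothetical stand-alone helper, not the stdlib implementation; it writes a minimal 10-byte gzip header, a raw deflate stream, and the CRC32/length trailer (until the wbits argument from issue 43612 exists, `zlib.compressobj` with negative wbits stands in for `zlib.compress(..., wbits=-15)`):

```python
import struct
import time
import zlib

def gzip_compress(data, compresslevel=9, mtime=None):
    # Minimal gzip header: magic, CM=deflate, no flags, mtime, XFL, OS=unknown.
    if mtime is None:
        mtime = time.time()
    xfl = 2 if compresslevel >= 9 else (4 if compresslevel == 1 else 0)
    header = struct.pack("<BBBBLBB", 0x1f, 0x8b, 8, 0,
                         int(mtime) & 0xffffffff, xfl, 255)
    # Raw deflate block (no zlib wrapper), compressed in one shot.
    compobj = zlib.compressobj(compresslevel, zlib.DEFLATED, -zlib.MAX_WBITS)
    body = compobj.compress(data) + compobj.flush()
    # Trailer: CRC32 and uncompressed length, both modulo 2**32.
    trailer = struct.pack("<LL", zlib.crc32(data), len(data) & 0xffffffff)
    return header + body + trailer
```

The result round-trips through `gzip.decompress()`, which is a convenient sanity check for the header and trailer layout.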
Run a while-True loop; all operations are performed inside the loop. Validate the header and report the end of the header. Create a zlib.decompressobj(wbits=-15). Decompress all the data from the end of the header. Flush. Extract the CRC and length from the first 8 bytes of the unused data. data = decompobj.unused_data[8:]. if not data: break. For a reference implementation check here: https://github.com/pycompression/python-isal/blob/v0.8.0/src/isal/igzip.py#L300. Note that the decompress function is quite straightforward. Checking the header, however, while maintaining backwards compatibility with gzip.decompress is not so simple. And that brings up another point. Should non-descriptive EOFErrors be raised when reading the gzip header? Or should informative BadGzipFile errors be thrown when the header is parsed? I tend towards the latter. For example, BadGzipFile("Truncated header") instead of EOFError. Or at least EOFError("Truncated gzip header"). I am aware that this confounds this issue with another issue, but these things are coupled in the implementation, so both need to be solved at the same time. Given the headaches that gzip.decompress gives, it might be easier to solve gzip.compress first in a first PR and do gzip.decompress later. ---------- messages: 389438 nosy: rhpvorderman priority: normal severity: normal status: open title: gzip.compress and gzip.decompress are sub-optimally implemented. _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 07:12:48 2021 From: report at bugs.python.org (Anthony Flury) Date: Wed, 24 Mar 2021 11:12:48 +0000 Subject: [New-bugs-announce] [issue43614] Search is not beginner friendly Message-ID: <1616584368.72.0.108883750732.issue43614@roundup.psfhosted.org> New submission from Anthony Flury : A commonly asked question on Quora is 'What do *args and **kwargs mean?'
While it is relatively easy for the community to answer these questions, the search tool on the standard documentation doesn't make it easy. I understand that 'args' and 'kwargs' are both naming conventions that are very common across the documentation, but searching on '*args' or '**kwargs' doesn't actually find anything useful - it certainly doesn't place 'https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-lists' at or close to the top of the list. It is my view that the documentation should be beginner friendly, but in this case (and many others, I guess) you have to know what to search for to find something useful. I note that even common phrases in computing (such as 'variable arguments' or 'variable parameters') don't find anything useful. The term 'variadic' does find the relevant page, but the link displayed in the search results lands on the page (but not the relevant section) - and many beginners won't search for 'variadic'. The index and search need to be improved to help beginners - specifically in this case: * The search index should include common conventional names (such as args, kwargs) * The search index should include common computing terms ('variable arguments' for example - even if the documentation doesn't actually use that terminology). * Search should link to the relevant section (and not just the page). ---------- assignee: docs at python components: Documentation messages: 389442 nosy: anthony-flury, docs at python priority: normal severity: normal status: open title: Search is not beginner friendly versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 09:59:19 2021 From: report at bugs.python.org (Cong Ma) Date: Wed, 24 Mar 2021 13:59:19 +0000 Subject: [New-bugs-announce] [issue43615] [PATCH] Properly implement Py_UNREACHABLE macro using autoconf.
Message-ID: <1616594359.79.0.245413708193.issue43615@roundup.psfhosted.org> New submission from Cong Ma : (This is a summarized form of the commit message in the attached patch. I'm submitting a patch instead of a PR over GitHub, because it seems that the ``autoreconf`` output files are part of the repository. In order for the changes to take effect in the repo, I may have to run ``autoreconf`` and add the clobbered output files to the repo, which I don't think is a good idea. Also on my system the ``autoreconf`` can only work correctly if I add a missing M4 file "ax_check_compile_flag.m4" from the Autoconf Archive for the ``AX_CHECK_COMPILE_FLAG`` macro used in the existing ``configure.ac``. I don't think it's wise for me to introduce so many changes at once if most developers don't need to run ``autoreconf`` often.) The problem ----------- Definition of the ``Py_UNREACHABLE()`` macro relied on testing compiler versions in preprocessor directives. This is unreliable chiefly because compilers masquerade as each other. The current implementation tests the ``__GNUC__`` and ``__GNUC_MINOR__`` macros as the logic (GCC version >= 4.5) for determining whether the compiler intrinsic ``__builtin_unreachable()`` is present (see commits eebaa9bf, 24ba3b0d). However, Clang defines these macros too and can cause confusion. Clang 11 pretends to be GCC 4.2.1 in its predefined macros. As a result, Clang won't use the intrinsic even if it's supported. This doesn't seem to match the intent behind the original implementation. The solution ------------ Test the presence of the compiler-builtin ``__builtin_unreachable()`` at configure-time using Autoconf, and conditionally define the ``Py_UNREACHABLE()`` macro depending on the configuration. The idea is based on the ``ax_gcc_builtin.m4`` code [0] by Gabriele Svelto. Alternative ideas ----------------- Recent versions of Clang and GCC support the ``__has_builtin()`` macro. 
However, this may be unreliable before Clang 10 [1], while GCC support is only available as of GCC 10 and its semantics may not be the same as Clang's [2]. Therefore ``__has_builtin()`` may not be as useful as it seems. We may attempt to improve the accuracy of version checking in ``#if`` directives, but this could be brittle and difficult to explain, verify, or maintain. Links ----- [0] https://www.gnu.org/software/autoconf-archive/ax_gcc_builtin.html [1] https://clang.llvm.org/docs/LanguageExtensions.html#has-builtin [2] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66970#c24 ---------- components: Build files: 0001-Properly-implement-Py_UNREACHABLE-macro-using-autoco.patch keywords: patch messages: 389454 nosy: congma priority: normal severity: normal status: open title: [PATCH] Properly implement Py_UNREACHABLE macro using autoconf. type: enhancement Added file: https://bugs.python.org/file49910/0001-Properly-implement-Py_UNREACHABLE-macro-using-autoco.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 10:16:11 2021 From: report at bugs.python.org (Rowan Sylvester-Bradley) Date: Wed, 24 Mar 2021 14:16:11 +0000 Subject: [New-bugs-announce] [issue43616] random.shuffle() crashes with Unhandled exception Message-ID: <1616595371.88.0.389934384043.issue43616@roundup.psfhosted.org> New submission from Rowan Sylvester-Bradley : When I do random.shuffle(questions_element) (where questions_element is an lxml element obtained via questions_element = exams.find("questions")), I get a crash: Unhandled exception at 0x00007FFD7AE8EF89 (ntdll.dll) in python.exe: 0xC0000374: A heap has been corrupted (parameters: 0x00007FFD7AEF77F0). Is there a way to work around this?
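One likely workaround is to shuffle a plain Python list of the children and then write the new order back, instead of calling random.shuffle directly on the element proxy. The sketch below uses the stdlib's xml.etree.ElementTree so it is self-contained; the same list-then-assign pattern should apply to an lxml element, though that is untested here.

```python
import random
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<exams><questions>"
    "<question id='1'/><question id='2'/><question id='3'/>"
    "</questions></exams>"
)
questions_element = doc.find("questions")

children = list(questions_element)   # snapshot the children into a real list
random.shuffle(children)             # shuffle the list, not the element itself
questions_element[:] = children      # assign the shuffled order back in one step

# All three questions are still present, just reordered
assert len(questions_element.findall("question")) == 3
```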
Thanks - Rowan ---------- messages: 389456 nosy: rowan.bradley priority: normal severity: normal status: open title: random.shuffle() crashes with Unhandled exception type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 10:52:18 2021 From: report at bugs.python.org (Cong Ma) Date: Wed, 24 Mar 2021 14:52:18 +0000 Subject: [New-bugs-announce] [issue43617] Missing definition in configure.ac causing autoreconf to create damaged configure script Message-ID: <1616597538.33.0.0386305243834.issue43617@roundup.psfhosted.org> New submission from Cong Ma : The problem ----------- In the repository, the definition for ``AX_CHECK_COMPILE_FLAG`` in Python's ``configure.ac`` file is missing. If ``autoreconf`` is run, an invalid ``configure`` script is generated. The following is the behaviour of running ``autoreconf`` followed by ``configure``: ``` # In cpython repository top-level directory $ autoreconf $ mkdir build $ cd build $ ../configure # <- using newly generated configure script [... omitted ...] checking for --enable-optimizations... no ../configure: line 6498: syntax error near unexpected token `-fno-semantic-interposition,' ../configure: line 6498: ` AX_CHECK_COMPILE_FLAG(-fno-semantic-interposition,' ``` The solution ------------ It appears a file was missing in the m4/ directory. The file matches this one from the Autoconf Archive: https://www.gnu.org/software/autoconf-archive/ax_check_compile_flag.html Simply adding the correct m4 file to m4/ should make ``autoreconf`` work. 
---------- components: Build messages: 389463 nosy: congma priority: normal severity: normal status: open title: Missing definition in configure.ac causing autoreconf to create damaged configure script _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 14:30:46 2021 From: report at bugs.python.org (Rowan Sylvester-Bradley) Date: Wed, 24 Mar 2021 18:30:46 +0000 Subject: [New-bugs-announce] [issue43618] random.shuffle loses most of the elements Message-ID: <1616610646.13.0.165363579885.issue43618@roundup.psfhosted.org> New submission from Rowan Sylvester-Bradley : This issue is probably related to issue ??? but I have created it as a separate issue. When shuffle doesn't crash, it sometimes (or maybe always - I haven't fully analysed this yet) loses most of the elements in the list that it is supposed to be shuffling. Here is an extract of the code that I'm using: import io from io import StringIO from lxml import etree import random filename_xml = 'MockExam5.xml' with io.open(filename_xml, mode="r", encoding="utf-8") as xml_file: xml_to_check = xml_file.read() doc = etree.parse(StringIO(xml_to_check)) exams = doc.getroot() questions_element = exams.find("questions") logmsg(L_TRACE, "There are now " + str(len(questions_element.findall("question"))) + " questions") logmsg(L_TRACE, "Randomising order of questions in this exam") random.shuffle(questions_element) logmsg(L_TRACE, "Finished randomise") logmsg(L_TRACE, "There are now " + str(len(questions_element.findall("question"))) + " questions") And here is the log produced by this code: 21-03-24 18:10:11.989 line: 2057 file: D:\XPS_8700 Extended Files\Users\RowanB\Documents\My_Scripts NEW\mockexam\put_exam.py 2 There are now 79 questions 21-03-24 18:10:11.991 line: 2065 file: D:\XPS_8700 Extended Files\Users\RowanB\Documents\My_Scripts NEW\mockexam\put_exam.py 2 Randomising order of questions in this exam 21-03-24 18:10:11.992 line:
2067 file: D:\XPS_8700 Extended Files\Users\RowanB\Documents\My_Scripts NEW\mockexam\put_exam.py 2 Finished randomise 21-03-24 18:10:11.993 line: 2068 file: D:\XPS_8700 Extended Files\Users\RowanB\Documents\My_Scripts NEW\mockexam\put_exam.py 2 There are now 6 questions How come the shuffle starts off with 79 elements and finishes with 6? Thanks - Rowan ---------- messages: 389482 nosy: rowan.bradley priority: normal severity: normal status: open title: random.shuffle loses most of the elements type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 16:18:16 2021 From: report at bugs.python.org (Roman Valov) Date: Wed, 24 Mar 2021 20:18:16 +0000 Subject: [New-bugs-announce] [issue43619] convenience of using create_datagram_endpoint (and friends) Message-ID: <1616617096.39.0.431079880564.issue43619@roundup.psfhosted.org> New submission from Roman Valov : Please check the attached source code. I have to implement a UDP server listening on all interfaces and able to detect which local address is used to communicate with a remote address. In order to do this, I'm using a temporary socket connected to the exact remote endpoint to retrieve its sock name. When implementing the solution in a pure `asyncio` fashion, I faced a pair of inconveniences: ISSUE-1: there is no idiomatic way to sleep forever inside an async function. The example of using `create_datagram_endpoint` in the documentation uses `sleep(3600)`, which is not so useful. I've used `loop.create_future()`, but it feels like a bit of a hack. Also, I can't use `loop.run_forever` in this context. Possible solutions: - `serve_forever` for a transport object - `asyncio.setup_and_run_forever(main())` -- a function to set up file descriptors for an event loop and run forever. - `asyncio.sleep(None)` or `asyncio.pause()` -- a special argument for sleep, or a dedicated `pause` function.
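The loop.create_future() workaround mentioned under ISSUE-1 can at least be wrapped in a small helper. This sketches the status quo, not the proposed asyncio.pause() API; the short timeout is only there to demonstrate that the helper really blocks.

```python
import asyncio

async def sleep_forever():
    # Await a Future that nobody ever resolves; the coroutine stays
    # parked until it is cancelled (e.g. when the loop shuts down).
    await asyncio.get_running_loop().create_future()

async def main():
    try:
        # Demonstrate that it really blocks: give it 0.05 s, then cancel.
        await asyncio.wait_for(sleep_forever(), timeout=0.05)
        return "returned"
    except asyncio.TimeoutError:
        return "still pending"

print(asyncio.run(main()))  # -> still pending
```

In a real server, the transport and protocol would be set up before the `await sleep_forever()` line, which then keeps the coroutine alive for the lifetime of the loop.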
ISSUE-2: callbacks of the `Protocol` class are assumed to be sync `def`s, even though the class is designed to be used as part of `asyncio`. So, in order to invoke async code from a sync callback, I have to add some boilerplate code. Compare with `start_server`: its `client_connected_cb` argument may be a plain callable or a coroutine function. So it's proposed to let Protocol callbacks be `async def`s. ---------- components: asyncio files: async.py messages: 389488 nosy: Roman.Valov, asvetlov, yselivanov priority: normal severity: normal status: open title: convenience of using create_datagram_endpoint (and friends) type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file49912/async.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 24 16:35:52 2021 From: report at bugs.python.org (Jared Sutton) Date: Wed, 24 Mar 2021 20:35:52 +0000 Subject: [New-bugs-announce] [issue43620] os.path.join does not use os.sep as documentation claims Message-ID: <1616618152.23.0.688812989277.issue43620@roundup.psfhosted.org> New submission from Jared Sutton : The behavior of os.path.join() does not match the documentation with regard to the use of os.sep. From the docs: """ The return value is the concatenation of path and any members of *paths with exactly one directory separator (os.sep) following each non-empty part except the last, meaning that the result will only end in a separator if the last part is empty. """ The documentation clearly states that the function uses the value of os.sep (which differs based on platform). However, if you review the two implementations (ntpath.py and posixpath.py), the separator character used is clearly hard-coded and doesn't reference os.sep at all. One could say that this is either a doc bug or an implementation bug, depending on what the intended behavior is.
I submit that this is an implementation bug, as one might want to use os.path.join() to construct a path to be used on a platform other than the one currently running the application. For example, a person might be running Python on Windows, but calling a web API and constructing a path for use on a remote POSIX system. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 389489 nosy: docs at python, jpsutton priority: normal severity: normal status: open title: os.path.join does not use os.sep as documentation claims type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 04:24:44 2021 From: report at bugs.python.org (Ruben Vorderman) Date: Thu, 25 Mar 2021 08:24:44 +0000 Subject: [New-bugs-announce] [issue43621] gzip._GzipReader should only throw BadGzipFile errors Message-ID: <1616660684.49.0.0404853053479.issue43621@roundup.psfhosted.org> New submission from Ruben Vorderman : This is properly documented: https://docs.python.org/3/library/gzip.html#gzip.BadGzipFile . It currently throws EOFErrors when a stream is truncated. But this means that upstream, both BadGzipFile and EOFError need to be caught in the exception handling when opening a gzip file for reading. When a gzip file is truncated, it is also a "bad gzip file" in my opinion, so there is no reason to have an extra class of errors. Also, it throws zlib.error when zlib crashes for some reason. This means there is some corruption in the raw deflate block. Well, that means it is a "bad gzip file" as well, and the error message should reflect that. This won't break people's code. If they are already catching EOFError, zlib.error, and BadGzipFile, it changes nothing. If they only catch BadGzipFile, they will have fewer annoying errors that pop through. I can make the PR, but of course not without any feedback.
I am curious what other people think. ---------- components: Library (Lib) messages: 389494 nosy: rhpvorderman priority: normal severity: normal status: open title: gzip._GzipReader should only throw BadGzipFile errors versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 04:54:27 2021 From: report at bugs.python.org (gchauvel) Date: Thu, 25 Mar 2021 08:54:27 +0000 Subject: [New-bugs-announce] [issue43622] TLS 1.3, client polling returns event without data Message-ID: <1616662467.59.0.863199354761.issue43622@roundup.psfhosted.org> New submission from gchauvel : A simple test in test_ssl.py [1][2] with the following context: - client connects and listens for data without sending any first - traces to make sure no data is written at test level from server or client - TLSv1.3 is allowed or not using "context.options |= ssl.OP_NO_TLSv1_3" produces this result: TLSv1.2: - no event on FD, no issue TLSv1.3: - event on FD without any write at test level - recv() blocks, even with setblocking(False) [1] master: https://github.com/g-chauvel/cpython/commit/8c95c4f67367ea43c508ea62a0cdbe120a3fed9b [2] 3.6: https://github.com/g-chauvel/cpython/commit/7cd4b4ac22efea7c61a9f8f57b6f2315567f5742 ---------- assignee: christian.heimes components: SSL messages: 389496 nosy: christian.heimes, gchauvel priority: normal severity: normal status: open title: TLS 1.3, client polling returns event without data type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 10:53:35 2021 From: report at bugs.python.org (Emmanuel Miranda) Date: Thu, 25 Mar 2021 14:53:35 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue43623=5D_nouveaut=C3=A9_20?= =?utf-8?q?21?= Message-ID:
<1616684015.38.0.262463023278.issue43623@roundup.psfhosted.org> New submission from Emmanuel Miranda : With the new annual release cycle, the Python community keeps moving forward; eager to see what they have in store for us in 2021 after version 3.10! Hoping that applications follow suit in support. https://webscre.com/ ---------- components: Library (Lib) messages: 389503 nosy: Yanis77 priority: normal severity: normal status: open title: nouveauté 2021 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 13:19:07 2021 From: report at bugs.python.org (Terry Davis) Date: Thu, 25 Mar 2021 17:19:07 +0000 Subject: [New-bugs-announce] [issue43624] Add underscore as a decimal separator for string formatting Message-ID: <1616692747.49.0.677529428018.issue43624@roundup.psfhosted.org> New submission from Terry Davis : Proposal: Enable this >>> format(12_34_56.12_34_56, '_._f') '123_456.123_456' Whereas currently only this is possible >>> format(12_34_56.12_34_56, '_.f') '123_456.123456' Based on the discussion in the Ideas forum, three core devs support this addition. https://discuss.python.org/t/add-underscore-as-a-thousandths-separator-for-string-formatting/7407 I'm willing to give this a try if someone points me to where to add tests and where the float formatting code is. This would be my first CPython contribution. The feature freeze for 3.10 is 2021-05-03.
https://www.python.org/dev/peps/pep-0619/#id5 ---------- components: Interpreter Core messages: 389508 nosy: Terry Davis priority: normal severity: normal status: open title: Add underscore as a decimal separator for string formatting type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 14:18:35 2021 From: report at bugs.python.org (ejacq) Date: Thu, 25 Mar 2021 18:18:35 +0000 Subject: [New-bugs-announce] [issue43625] CSV has_headers heuristic could be improved Message-ID: <1616696315.22.0.477162760848.issue43625@roundup.psfhosted.org> New submission from ejacq <0python3 at jesuislibre.net>: Here is a sample of CSV input: "time","forces" 0,0 0.5,0.9 When calling has_header() from csv.py on this sample, it returns False. Why? Because 0 and 0.5 don't belong to the same type, and thus the column is discarded by the heuristic. I think the heuristic would work better if, rather than just comparing number types, it also considered casting the values in this order: int -> float -> complex. If the values are compatible, then consider this upgraded type as the type of the column. In the end, this file would be considered to have float columns with headers. ---------- components: Library (Lib) messages: 389515 nosy: ejacq priority: normal severity: normal status: open title: CSV has_headers heuristic could be improved versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 15:16:51 2021 From: report at bugs.python.org (Abraham Macias) Date: Thu, 25 Mar 2021 19:16:51 +0000 Subject: [New-bugs-announce] [issue43626] SIGSEGV in PyErr_SetObject Message-ID: <1616699811.01.0.0992968553141.issue43626@roundup.psfhosted.org> New submission from Abraham Macias : Hi, I'm dealing with random crashes when using pymongo in Python 3.7.3 on Debian Buster.
This is the python backtrace: (gdb) thread apply all py-bt Thread 2 (Thread 0x7f9817d95700 (LWP 221)): Traceback (most recent call first): File "/usr/local/lib/python3.7/dist-packages/gevent/_threading.py", line 80, in wait waiter.acquire() # Block on the native lock File "/usr/local/lib/python3.7/dist-packages/gevent/_threading.py", line 162, in get self._not_empty.wait() File "/usr/local/lib/python3.7/dist-packages/gevent/threadpool.py", line 270, in _worker task = task_queue.get() File "/usr/local/lib/python3.7/dist-packages/gevent/threadpool.py", line 254, in __trampoline g.switch() Thread 1 (Thread 0x7f981fdfd740 (LWP 216)): Traceback (most recent call first): File "/usr/local/lib/python3.7/dist-packages/bson/__init__.py", line 1089, in _decode_all_selective return decode_all(data, codec_options) File "/usr/local/lib/python3.7/dist-packages/pymongo/message.py", line 1616, in unpack_response self.payload_document, codec_options, user_fields) File "/usr/local/lib/python3.7/dist-packages/pymongo/cursor.py", line 1080, in _unpack_response legacy_response) File "/usr/local/lib/python3.7/dist-packages/pymongo/server.py", line 131, in run_operation_with_response user_fields=user_fields) File "/usr/local/lib/python3.7/dist-packages/pymongo/mongo_client.py", line 1366, in _cmd unpack_res) File "/usr/local/lib/python3.7/dist-packages/pymongo/mongo_client.py", line 1471, in _retryable_read return func(session, server, sock_info, slave_ok) File "/usr/local/lib/python3.7/dist-packages/pymongo/mongo_client.py", line 1372, in _run_operation_with_response exhaust=exhaust) File "/usr/local/lib/python3.7/dist-packages/pymongo/cursor.py", line 1001, in __send_message address=self.__address) File "/usr/local/lib/python3.7/dist-packages/pymongo/cursor.py", line 1124, in _refresh self.__send_message(q) File "/usr/local/lib/python3.7/dist-packages/pymongo/cursor.py", line 1207, in next if len(self.__data) or self._refresh(): File 
"/usr/local/lib/python3.7/dist-packages/pymongo/collection.py", line 1319, in find_one for result in cursor.limit(-1): File "/usr/local/lib/python3.7/dist-packages/gecoscc/userdb.py", line 119, in create_user user = self.collection.find_one({'email': email}) File "/usr/local/lib/python3.7/dist-packages/gecoscc/commands/create_adminuser.py", line 95, in command {'is_superuser': self.options.is_superuser} File "/usr/local/lib/python3.7/dist-packages/gecoscc/management.py", line 90, in __call__ self.command() File "/usr/local/lib/python3.7/dist-packages/gecoscc/management.py", line 48, in main command() File "/usr/local/bin/pmanage", line 10, in sys.exit(main()) (gdb) And this is the builtin-code backtrace: Core was generated by `/usr/bin/python3 /usr/local/bin/pmanage /opt/gecosccui/gecoscc.ini create_admin'. Program terminated with signal SIGSEGV, Segmentation fault. #0 PyErr_SetObject (exception=0x7ff9c0 <_PyExc_AttributeError.lto_priv.2311>, value=0x7f1ab49cb098) at ../Python/errors.c:101 101 Py_INCREF(exc_value); [Current thread is 1 (Thread 0x7f1abc823740 (LWP 370))] (gdb) bt #0 PyErr_SetObject (exception=, value="type object 'dict' has no attribute '_type_marker'") at ../Python/errors.c:101 #1 0x000000000052c23b in PyErr_FormatV (vargs=0x7ffedff77c40, format=, exception=) at ../Python/errors.c:852 #2 PyErr_Format (exception=, format=) at ../Python/errors.c:852 #3 0x000000000058717d in type_getattro (type=, name=) at ../Objects/typeobject.c:3223 #4 0x000000000054baae in _PyObject_LookupAttr (result=, name=, v=) at ../Objects/object.c:949 #5 builtin_getattr (self=, args=, nargs=) at ../Python/bltinmodule.c:1121 #6 0x00000000005cccc3 in _PyMethodDef_RawFastCallKeywords (method=0x89d160 , self=, args=0x1237208, nargs=, kwnames=) at ../Objects/call.c:651 #7 0x00000000005463e3 in _PyCFunction_FastCallKeywords (kwnames=0x0, nargs=3, args=0x1237208, func=) at ../Objects/call.c:730 #8 call_function (kwnames=0x0, oparg=3, pp_stack=) at ../Python/ceval.c:4568 #9 
_PyEval_EvalFrameDefault (f=, throwflag=) at ../Python/ceval.c:3124 #10 0x00000000005cd68c in PyEval_EvalFrameEx (throwflag=0, f=Frame 0x1237088, for file /usr/local/lib/python3.7/dist-packages/bson/codec_options.py, line 35, in _raw_document_class (document_class=)) at ../Python/ceval.c:547 #11 function_code_fastcall (globals=, nargs=, args=, co=) at ../Objects/call.c:283 #12 _PyFunction_FastCallKeywords (func=, stack=, nargs=, kwnames=) at ../Objects/call.c:408 #13 0x000000000054207c in call_function (kwnames=0x0, oparg=, pp_stack=) at ../Python/ceval.c:4616 #14 _PyEval_EvalFrameDefault (f=, throwflag=) at ../Python/ceval.c:3124 #15 0x000000000053f732 in PyEval_EvalFrameEx (throwflag=0, f=Frame 0x17882e8, for file /usr/local/lib/python3.7/dist-packages/bson/__init__.py, line 1013, in decode_all (data=b'V\x00\x00\x00\x03cursor\x00=\x00\x00\x00\x04firstBatch\x00\x05\x00\x00\x00\x00\x12id\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02ns\x00\x13\x00\x00\x00gecoscc.adminusers\x00\x00\x01ok\x00\x00\x00\x00\x00\x00\x00\xf0?\x00', codec_options=, view=, data_len=86, docs=[], position=0, end=85)) at ../Python/ceval.c:547 #16 _PyEval_EvalCodeWithName (_co=, globals=, locals=, args=, argcount=, kwnames=0x0, kwargs=0x7f1ab49de978, kwcount=, kwstep=1, defs=0x7f1ab9e42220, defcount=1, kwdefs=0x0, closure=0x0, name='decode_all', qualname='decode_all') at ../Python/ceval.c:3930 #17 0x00000000005cd982 in _PyFunction_FastCallKeywords (func=, stack=0x7f1ab49de968, nargs=2, kwnames=) at ../Objects/call.c:433 #18 0x000000000054207c in call_function (kwnames=0x0, oparg=, pp_stack=) at ../Python/ceval.c:4616 #19 _PyEval_EvalFrameDefault (f=, throwflag=) at ../Python/ceval.c:3124 #20 0x00000000005cd68c in PyEval_EvalFrameEx (throwflag=0, f=Frame 0x7f1ab49de7c8, for file /usr/local/lib/python3.7/dist-packages/bson/__init__.py, line 1089, in _decode_all_selective 
(data=b'V\x00\x00\x00\x03cursor\x00=\x00\x00\x00\x04firstBatch\x00\x05\x00\x00\x00\x00\x12id\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02ns\x00\x13\x00\x00\x00gecoscc.adminusers\x00\x00\x01ok\x00\x00\x00\x00\x00\x00\x00\xf0?\x00', codec_options=, fields={'cursor': {'firstBatch': 1, 'nextBatch': 1}})) at ../Python/ceval.c:547 As I understand it, the code is using "getattr" to check if a dict contains an attribute called "_type_marker", and somehow, while Python is formatting the exception, it finds that the stack has been corrupted. What can be happening? How can I help to debug this? Best regards! ---------- components: Interpreter Core messages: 389520 nosy: amacias priority: normal severity: normal status: open title: SIGSEGV in PyErr_SetObject type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 16:35:30 2021 From: report at bugs.python.org (Skip Montanaro) Date: Thu, 25 Mar 2021 20:35:30 +0000 Subject: [New-bugs-announce] [issue43627] What are the requirements for a test_sundry-testable script in Tools/scripts? Message-ID: <1616704530.06.0.307458133003.issue43627@roundup.psfhosted.org> New submission from Skip Montanaro : In my fork of python/cpython I recently created a simple script to help me with my work (I am messing around in the internals and sometimes get blindsided by opcode changes). I stuck the script in Tools/scripts, which caused test_tools.test_sundry to hang. (I suspect it's because my script reads from sys.stdin, but I'm not certain. The old Unix pipeline ways die hard.) Looking around to see how I could modify my script to make it acceptable to test_sundry, I saw nothing about requirements. I tossed it in the TestSundryScripts.other list and now that test completes. Still, it seems there should be a bit written about what it takes for a script to be amenable to the minimal testing test_sundry.py performs.
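For context, the property that usually makes a script survive an import-based smoke test like test_sundry is that importing it has no side effects: all real work, especially reading sys.stdin, happens behind a main() guard. A hedged sketch of that shape (the line-counting body is only a placeholder, not the reporter's actual script):

```python
import sys

def main(stream=sys.stdin):
    # Placeholder for the script's real work; taking the stream as a
    # parameter keeps the function testable without touching stdin.
    return sum(1 for _ in stream)

if __name__ == "__main__":
    # Only runs when executed directly -- a bare import (which is what an
    # import-based smoke test effectively does) never reaches this branch.
    print(main(["one line\n", "two lines\n"]))  # -> 2
```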
---------- components: Tests messages: 389526 nosy: skip.montanaro priority: normal severity: normal status: open title: What are the requirements for a test_sundry-testable script in Tools/scripts? versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 21:58:53 2021 From: report at bugs.python.org (Yang Feng) Date: Fri, 26 Mar 2021 01:58:53 +0000 Subject: [New-bugs-announce] [issue43628] Incorrect argument errors for random.getstate() Message-ID: <1616723933.11.0.615757976736.issue43628@roundup.psfhosted.org> New submission from Yang Feng : In the documentation of random.getstate(), it says: "random.getstate() Return an object capturing the current internal state of the generator. This object can be passed to setstate() to restore the state." So random.getstate() takes no arguments and returns an object capturing the current internal state of the generator. However, when I give one argument to random.getstate(), the interpreter reports the following error: ---------------------------------------------- >>> import random >>> random.getstate(1) Traceback (most recent call last): File "", line 1, in TypeError: getstate() takes 1 positional argument but 2 were given ---------------------------------------------- Here I have two doubts about the reported errors: 1. Is the TypeError correct? This should be an inconsistent-argument-number error; it has nothing to do with types. 2. Is the detailed error correct? The doc says random.getstate() takes no arguments, but the reported error says getstate() takes 1 positional argument, which is inconsistent. Besides, I passed one argument to random.getstate(), but the reported error says 2 were given.
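On the second doubt, the off-by-one comes from getstate() being a bound method: the Random instance itself is passed as the implicit first positional argument, so the count in the C-level error message includes self. A small illustration with a hypothetical class (not the actual Random implementation):

```python
class Demo:
    def getstate(self):
        # Takes only the implicit 'self', mirroring random.getstate()
        return "state"

d = Demo()
print(d.getstate())  # -> state

try:
    # self plus our explicit 1 is what produces "2 were given"
    d.getstate(1)
except TypeError as exc:
    print(exc)
```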
Environment: Python 3.10, Ubuntu 16.04 ---------- assignee: docs at python components: Documentation messages: 389535 nosy: CharlesFengY, docs at python priority: normal severity: normal status: open title: Incorrect argument errors for random.getstate() versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 25 23:57:11 2021 From: report at bugs.python.org (junyixie) Date: Fri, 26 Mar 2021 03:57:11 +0000 Subject: [New-bugs-announce] [issue43629] fix _PyRun_SimpleFileObject create __main__ module and cache. Call this function multiple times, the attributes stored in the module dict will affect each other. Message-ID: <1616731031.99.0.476866477615.issue43629@roundup.psfhosted.org> New submission from junyixie : Fix _PyRun_SimpleFileObject creating and caching the __main__ module. If this function is called multiple times, the attributes stored in the module dict will affect each other. It creates the __main__ module and caches it. For example: if we run fileA, calling _PyRun_SimpleFileObject will create the __main__ module, and fileA adds some attributes to the __main__ module dict. Now we run fileB: calling _PyRun_SimpleFileObject will load the cached __main__ module, so in the __main__ module dict we can still get fileA's attributes. dir(module) gives unexpected results: ``` for name in dir(module): ... ``` In unittest, this happens if we execute a test and don't exit (unittest main.py TestProgram), with exit=False.
if we use _PyRun_SimpleFileObject to run unittest, it will repeatedly load test cases:

```
for name in dir(module):
    obj = getattr(module, name)
    if isinstance(obj, type) and issubclass(obj, case.TestCase):
        tests.append(self.loadTestsFromTestCase(obj))
```

```
int
_PyRun_SimpleFileObject(FILE *fp, PyObject *filename, int closeit,
                        PyCompilerFlags *flags)
{
    PyObject *m, *d, *v;
    int set_file_name = 0, ret = -1;

    m = PyImport_AddModule("__main__");
    if (m == NULL)
        return -1;
    Py_INCREF(m);
    d = PyModule_GetDict(m);
```

---------- components: C API messages: 389538 nosy: JunyiXie priority: normal severity: normal status: open title: fix _PyRun_SimpleFileObject create __main__ module and cache. Call this function multiple times, the attributes stored in the module dict will affect each other. type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 26 03:44:05 2021
From: report at bugs.python.org (junyixie)
Date: Fri, 26 Mar 2021 07:44:05 +0000
Subject: [New-bugs-announce] [issue43630] unittest use dir(module) to load test cases. Run unittest file by _PyRun_SimpleFileObject, have bug
Message-ID: <1616744645.89.0.654590841041.issue43630@roundup.psfhosted.org>

New submission from junyixie :

_PyRun_SimpleFileObject loads the __main__ module from a cache. If this function is called multiple times, the attributes stored in the module dict affect each other.

For example: if we run test_A.py, calling _PyRun_SimpleFileObject will create the __main__ module, and test_A.py adds some attributes to the __main__ module dict. If we then run test_B.py, calling _PyRun_SimpleFileObject will load the cached __main__ module, so in the __main__ module dict we can still get test_A's attributes.

In unittest, this happens if we execute tests and don't exit (unittest main.py TestProgram, with exit=False):
```
def __init__(self, module='__main__', defaultTest=None, argv=None,
             testRunner=None, testLoader=loader.defaultTestLoader,
             exit=True, verbosity=1, failfast=None, catchbreak=None,
             buffer=None, warnings=None, *, tb_locals=False):
```

dir(module) gives unexpected results:

```
for name in dir(module):
    ...
```

Then, when unittest loads tests, if we use _PyRun_SimpleFileObject to run unittest, it will repeatedly load test cases:

```
for name in dir(module):
    obj = getattr(module, name)
    if isinstance(obj, type) and issubclass(obj, case.TestCase):
        tests.append(self.loadTestsFromTestCase(obj))
```

```
int
_PyRun_SimpleFileObject(FILE *fp, PyObject *filename, int closeit,
                        PyCompilerFlags *flags)
{
    PyObject *m, *d, *v;
    int set_file_name = 0, ret = -1;

    m = PyImport_AddModule("__main__");
    if (m == NULL)
        return -1;
    Py_INCREF(m);
    d = PyModule_GetDict(m);
```

---------- messages: 389540 nosy: JunyiXie priority: normal severity: normal status: open title: unittest use dir(module) to load test cases. Run unittest file by _PyRun_SimpleFileObject, have bug _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 26 04:15:38 2021
From: report at bugs.python.org (Christian Heimes)
Date: Fri, 26 Mar 2021 08:15:38 +0000
Subject: [New-bugs-announce] [issue43631] Update to OpenSSL 1.1.1k
Message-ID: <1616746538.63.0.13329268116.issue43631@roundup.psfhosted.org>

New submission from Christian Heimes :

OpenSSL 1.1.1k contains fixes for two high severity CVEs:

https://www.openssl.org/news/vulnerabilities.html
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3450
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3449

---------- assignee: christian.heimes components: SSL, Windows, macOS messages: 389541 nosy: christian.heimes, ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: patch review status: open title: Update to OpenSSL 1.1.1k type: security
versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 26 07:50:20 2021
From: report at bugs.python.org (=?utf-8?b?5rGf5p2J?=)
Date: Fri, 26 Mar 2021 11:50:20 +0000
Subject: [New-bugs-announce] [issue43632] an error in the documentation of descriptor
Message-ID: <1616759420.96.0.930885949934.issue43632@roundup.psfhosted.org>

New submission from 江杉 <2353381a at gmail.com>:

In this section https://docs.python.org/3.8/howto/descriptor.html#functions-and-methods , there is an error: the output of the command 'd.f.__func__' should be the same as the output of 'D.__dict__['f']'. Here the former should be '', but not ''

---------- assignee: docs at python components: Documentation messages: 389545 nosy: 2353381a, docs at python priority: normal severity: normal status: open title: an error in the documentation of descriptor versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 26 13:34:40 2021
From: report at bugs.python.org (Maxime Mouchet)
Date: Fri, 26 Mar 2021 17:34:40 +0000
Subject: [New-bugs-announce] [issue43633] Improve the textual representation of IPv4-mapped IPv6 addresses
Message-ID: <1616780080.54.0.443105360022.issue43633@roundup.psfhosted.org>

New submission from Maxime Mouchet :

Python supports IPv4-mapped IPv6 addresses as defined by RFC 4038: "the IPv6 address ::FFFF:x.y.z.w represents the IPv4 address x.y.z.w."

The current behavior is as follows:

from ipaddress import ip_address
addr = ip_address('::ffff:8.8.4.4')  # IPv6Address('::ffff:808:404')
addr.ipv4_mapped                     # IPv4Address('8.8.4.4')

Note that the textual representation of the IPv6Address is *not* in IPv4-mapped format. It prints ::ffff:808:404 instead of ::ffff:8.8.4.4.
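To make the two spellings concrete, here is a quick check with the stdlib ipaddress module; `mapped_str` is a hypothetical helper sketching the representation being discussed, not an existing API:

```python
from ipaddress import IPv6Address, ip_address

addr = ip_address("::ffff:8.8.4.4")

# Current textual form compresses the hex groups, while the embedded
# IPv4 address is already exposed as a property:
assert str(addr) == "::ffff:808:404"
assert str(addr.ipv4_mapped) == "8.8.4.4"

def mapped_str(a):
    # Hypothetical helper approximating the mapped representation.
    v4 = a.ipv4_mapped if isinstance(a, IPv6Address) else None
    return f"::ffff:{v4}" if v4 is not None else str(a)

assert mapped_str(addr) == "::ffff:8.8.4.4"
assert mapped_str(ip_address("2001:db8::1")) == "2001:db8::1"
```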
This is technically correct, but it's somewhat frustrating as it makes it harder to read IPv4s embedded in IPv6 addresses. My proposal would be to check, in __str__, if an IPv6 address is IPv4-mapped, and to return the appropriate representation:

from ipaddress import ip_address
addr = ip_address('::ffff:8.8.4.4')

# Current behavior
str(addr)   # '::ffff:808:404'
repr(addr)  # IPv6Address('::ffff:808:404')

# Proposed behavior
str(addr)   # '::ffff:8.8.4.4'
repr(addr)  # IPv6Address('::ffff:8.8.4.4')

A few data points:
- Julia prints ::ffff:808:404 (current behavior)
- C (glibc) and ClickHouse print ::ffff:8.8.4.4 (proposed behavior)

---------- components: Library (Lib) messages: 389556 nosy: maxmouchet priority: normal severity: normal status: open title: Improve the textual representation of IPv4-mapped IPv6 addresses type: enhancement _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 26 14:58:30 2021
From: report at bugs.python.org (JustAnotherArchivist)
Date: Fri, 26 Mar 2021 18:58:30 +0000
Subject: [New-bugs-announce] [issue43634] Extensions build does not respect --jobs setting
Message-ID: <1616785110.97.0.460566550998.issue43634@roundup.psfhosted.org>

New submission from JustAnotherArchivist :

The extension building does not respect the --jobs option passed to make. Specifically, in that step, `python setup.py build` always spawns as many gcc processes as there are CPU cores available, regardless of that option. This caused problems for me because I have a VM that sees all host machine CPU cores but only has a limited amount of RAM. Despite running `make -j 4`, many more gcc processes are spawned, and this immediately causes memory starvation and a system freeze after a few seconds. The reason for this is that setup.py blindly enables parallelism in the extension compilation if '-j' appears in the MAKEFLAGS at 3.9.2/setup.py:355.
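The fix hinted at here would be to parse the *value* of -j out of MAKEFLAGS rather than merely testing for its presence. A sketch under that assumption (`jobs_from_makeflags` is a hypothetical helper, not CPython code, and real MAKEFLAGS strings vary across make versions):

```python
import re

def jobs_from_makeflags(makeflags):
    # Hypothetical helper: return the numeric -j value (e.g. from
    # "-j4" or "-j 4"); None means a bare -j (unlimited) or no -j.
    match = re.search(r"-j\s*(\d+)", makeflags)
    return int(match.group(1)) if match else None

assert jobs_from_makeflags("-j4") == 4
assert jobs_from_makeflags("-j 4") == 4
assert jobs_from_makeflags("-j") is None     # unlimited
assert jobs_from_makeflags("kw") is None     # no -j at all
```

The current check in setup.py only does the presence test, and the worker count is later taken from os.cpu_count(), which is exactly the mismatch described above.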
Later on, distutils uses os.cpu_count to set the worker count, i.e. the '-j' *value* is ignored. This behaviour was first introduced with #5309 as far as I can see, though I haven't tested anything other than version 3.9.2. Hacky workaround: patching the above setup.py line to `self.parallel = 4`. Cf. https://github.com/pyenv/pyenv/issues/1857

---------- components: Build messages: 389558 nosy: JustAnotherArchivist priority: normal severity: normal status: open title: Extensions build does not respect --jobs setting type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 26 15:47:45 2021
From: report at bugs.python.org (Jennie)
Date: Fri, 26 Mar 2021 19:47:45 +0000
Subject: [New-bugs-announce] [issue43635] Documentation needs to declare CalledProcessError as potentially resulting from subprocess.run()
Message-ID: <1616788065.3.0.472312141922.issue43635@roundup.psfhosted.org>

New submission from Jennie :

The documentation for subprocess says that run() can raise CalledProcessError...

https://docs.python.org/3/library/subprocess.html#subprocess.run

...but when you click on the link (5th paragraph down) for CalledProcessError, it only lists check_call() and check_output() as functions that can raise it. My understanding is that check_call(), at least, is (becoming?) deprecated.
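For reference, run() raises CalledProcessError only when check=True is passed; a minimal, self-contained demonstration:

```python
import subprocess
import sys

# A child process that exits with status 3.
cmd = [sys.executable, "-c", "raise SystemExit(3)"]

# Without check=True, run() just records the exit status...
result = subprocess.run(cmd)
assert result.returncode == 3

# ...with check=True, it raises CalledProcessError instead.
try:
    subprocess.run(cmd, check=True)
except subprocess.CalledProcessError as exc:
    assert exc.returncode == 3
```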
So this section should definitely mention run():

https://docs.python.org/3/library/subprocess.html#subprocess.CalledProcessError

---------- assignee: docs at python components: Documentation messages: 389564 nosy: docs at python, jennievh priority: normal severity: normal status: open title: Documentation needs to declare CalledProcessError as potentially resulting from subprocess.run() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 26 19:53:55 2021
From: report at bugs.python.org (Pablo Galindo Salgado)
Date: Fri, 26 Mar 2021 23:53:55 +0000
Subject: [New-bugs-announce] [issue43636] test_descr fails randomly when executed with -R :
Message-ID: <1616802835.56.0.699111570038.issue43636@roundup.psfhosted.org>

New submission from Pablo Galindo Salgado :

$ ./python -m test test_descr -m test_slots -R 3:3
0:00:00 load avg: 0.26 Run tests sequentially
0:00:00 load avg: 0.26 [1/1] test_descr
beginning 6 repetitions
123456
test test_descr failed -- Traceback (most recent call last):
  File "/home/pablogsal/github/cpython/Lib/test/test_descr.py", line 1201, in test_slots
    c.abc = 5
AttributeError: 'C' object has no attribute 'abc'

test_descr failed

== Tests result: FAILURE ==

1 test failed:
    test_descr

Total duration: 72 ms
Tests result: FAILURE

---------- components: Tests messages: 389575 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_descr fails randomly when executed with -R : versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 04:16:31 2021
From: report at bugs.python.org (=?utf-8?q?Ondrej_Baranovi=C4=8D?=)
Date: Sat, 27 Mar 2021 08:16:31 +0000
Subject: [New-bugs-announce] [issue43637] winreg: SetValueEx leaks memory if PySys_Audit fails
Message-ID:
<1616832991.07.0.159084299457.issue43637@roundup.psfhosted.org>

New submission from Ondrej Baranovič :

The function `winreg_SetValueEx_impl` in `winreg.c`:

1) allocates memory by calling `Py2Reg`,
2) calls `PySys_Audit` and immediately returns if it indicates an error,
3) calls `RegSetValueExW`,
4) frees the memory allocated in (1) and returns.

The if-block in (2) should free the memory allocated in (1) if an audit hook raises. Introduced in PR17541.

---------- components: Windows messages: 389591 nosy: nulano, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: winreg: SetValueEx leaks memory if PySys_Audit fails type: resource usage versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 06:36:42 2021
From: report at bugs.python.org (Sander)
Date: Sat, 27 Mar 2021 10:36:42 +0000
Subject: [New-bugs-announce] [issue43638] MacOS os.statvfs() has rollover for >4TB disks at each 4TB (32bit counter overflow?)
Message-ID: <1616841402.66.0.531178179964.issue43638@roundup.psfhosted.org>

New submission from Sander :

macOS Big Sur (and older), Python 3.9.2 (and older).

For disks >4TB, os.statvfs() shows a wrong value for available space: too low, and always a rollover at each 4TB. As 4TB = 2^42 bytes, the hypothesis is a rollover in a 32-bit counter (with a 10-bit block size).

Example: "df -m" does show the correct available space:

df -m /Volumes/Frank/
Filesystem                                    1M-blocks    Used  Available  Capacity       iused        ifree  %iused  Mounted on
//frank at SynologyBlabla._smb._tcp.local/Frank   21963360 2527744   19435615       12%  2588410474  19902070164     12%  /Volumes/Frank

So the available space is 19435615 MB, about 18.5 TB. Good. Now Python's os.statvfs():

>>> s = os.statvfs("/Volumes/Frank")
>>> s.f_bavail * s.f_frsize / 1024**2
2658399.39453125

So 2.5 TB, and thus wrong. The difference is 16777216 MB, which is exactly 4 times 4TB.
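The reported numbers are consistent with that hypothesis. Assuming a 1024-byte fragment size, a 32-bit block counter wraps every 4 TiB, and the shortfall above is (up to df's megabyte rounding) exactly four wraps:

```python
# Assumption: f_frsize is 1024 bytes, so a 32-bit block counter
# wraps every 2**32 * 1024 bytes = 4 TiB.
frsize = 1024
wrap_bytes = 2**32 * frsize
assert wrap_bytes == 4 * 1024**4  # 4 TiB per wrap

real_mb = 19435615   # "Available" column of df -m above
seen_mb = 2658399    # what os.statvfs() reported (truncated to int)
diff_mb = real_mb - seen_mb

# The missing space is four 4 TiB wraps, i.e. 16 TiB:
assert diff_mb == 16777216
assert diff_mb * 1024**2 == 4 * wrap_bytes
```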
The problem seems to be in macOS statvfs() itself; it is reproducible with a few lines of C code. We have implemented a workaround in our Python program SABnzbd to directly use macOS' libc statfs() call (not statvfs()). A solution / workaround in Python itself would be much nicer. There is no problem with Python on Linux with >4TB drives.

---------- components: macOS messages: 389596 nosy: ned.deily, ronaldoussoren, sanderjo priority: normal severity: normal status: open title: MacOS os.statvfs() has rollover for >4TB disks at each 4TB (32bit counter overflow?) type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 08:40:32 2021
From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=)
Date: Sat, 27 Mar 2021 12:40:32 +0000
Subject: [New-bugs-announce] [issue43639] Do not raise AttributeError on instance attribute update/deletion if data descriptor with missing __set__/__delete__ method found on its type
Message-ID: <1616848832.19.0.971517857631.issue43639@roundup.psfhosted.org>

New submission from Géry :

Currently, the `object.__setattr__` and `type.__setattr__` methods raise an `AttributeError` during attribute *update* on an instance if its type has an attribute which is a *data* descriptor without a `__set__` method. Likewise, the `object.__delattr__` and `type.__delattr__` methods raise an `AttributeError` during attribute *deletion* on an instance if its type has an attribute which is a *data* descriptor without a `__delete__` method.

This should not be the case. When update/deletion is impossible through a data descriptor found on the type, update/deletion should carry on with the instance, like when there is no data descriptor found on the type.
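The update case described here is easy to reproduce in isolation on current CPython: a descriptor that qualifies as a data descriptor only because it defines __delete__ still blocks instance-dict assignment. A self-contained reproduction of that behavior:

```python
class DeleteOnly:
    # A data descriptor by virtue of __delete__ alone; no __set__.
    def __get__(self, instance, owner=None):
        return "from descriptor"
    def __delete__(self, instance):
        pass

class A:
    y = DeleteOnly()

a = A()
assert a.y == "from descriptor"   # lookup works fine

raised = False
try:
    a.y = "qux"                   # update raises AttributeError
except AttributeError:            # instead of falling back to
    raised = True                 # the instance dict
assert raised
assert "y" not in vars(a)
```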
And this is what the `object.__getattribute__` and `type.__getattribute__` methods already do: they do *not* raise an `AttributeError` during attribute *lookup* on an instance if its type has an attribute which is a *data* descriptor without a `__get__` method. See [the discussion on Python Discuss](https://discuss.python.org/t/why-do-setattr-and-delattr-raise-an-attributeerror-in-this-case/7836?u=maggyero).

Here is a simple program illustrating the differences between attribute lookup by `object.__getattribute__` on the one hand (`AttributeError` is not raised), and attribute update by `object.__setattr__` and attribute deletion by `object.__delattr__` on the other hand (`AttributeError` is raised):

```python
class DataDescriptor1:  # missing __get__
    def __set__(self, instance, value): pass
    def __delete__(self, instance): pass

class DataDescriptor2:  # missing __set__
    def __get__(self, instance, owner=None): pass
    def __delete__(self, instance): pass

class DataDescriptor3:  # missing __delete__
    def __get__(self, instance, owner=None): pass
    def __set__(self, instance, value): pass

class A:
    x = DataDescriptor1()
    y = DataDescriptor2()
    z = DataDescriptor3()

a = A()
vars(a).update({'x': 'foo', 'y': 'bar', 'z': 'baz'})

a.x
# actual:   returns 'foo'
# expected: returns 'foo'

a.y = 'qux'
# actual:   raises AttributeError: __set__
# expected: vars(a)['y'] == 'qux'

del a.z
# actual:   raises AttributeError: __delete__
# expected: 'z' not in vars(a)
```

Here is another simple program illustrating the differences between attribute lookup by `type.__getattribute__` on the one hand (`AttributeError` is not raised), and attribute update by `type.__setattr__` and attribute deletion by `type.__delattr__` on the other hand (`AttributeError` is raised):

```python
class DataDescriptor1:  # missing __get__
    def __set__(self, instance, value): pass
    def __delete__(self, instance): pass

class DataDescriptor2:  # missing __set__
    def __get__(self, instance, owner=None): pass
    def __delete__(self, instance): pass

class DataDescriptor3:  # missing __delete__
    def __get__(self, instance, owner=None): pass
    def __set__(self, instance, value): pass

class M(type):
    x = DataDescriptor1()
    y = DataDescriptor2()
    z = DataDescriptor3()

class A(metaclass=M):
    x = 'foo'
    y = 'bar'
    z = 'baz'

A.x
# actual:   returns 'foo'
# expected: returns 'foo'

A.y = 'qux'
# actual:   raises AttributeError: __set__
# expected: vars(A)['y'] == 'qux'

del A.z
# actual:   raises AttributeError: __delete__
# expected: 'z' not in vars(A)
```

---------- components: Interpreter Core messages: 389598 nosy: maggyero priority: normal severity: normal status: open title: Do not raise AttributeError on instance attribute update/deletion if data descriptor with missing __set__/__delete__ method found on its type type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 11:40:09 2021
From: report at bugs.python.org (Illia Volochii)
Date: Sat, 27 Mar 2021 15:40:09 +0000
Subject: [New-bugs-announce] [issue43640] Add warnings to ssl.PROTOCOL_TLSv1 and ssl.PROTOCOL_TLSv1_1 docs
Message-ID: <1616859609.98.0.745339520317.issue43640@roundup.psfhosted.org>

New submission from Illia Volochii :

TLS versions 1.0 and 1.1 have recently been deprecated. [1] ssl.PROTOCOL_SSLv2 and ssl.PROTOCOL_SSLv3 have such warnings: "SSL version 2 is insecure. Its use is highly discouraged." [2] We have to add such warnings to ssl.PROTOCOL_TLSv1 and ssl.PROTOCOL_TLSv1_1 too.
[1] https://datatracker.ietf.org/doc/rfc8996/
[2] https://docs.python.org/3.10/library/ssl.html#ssl.PROTOCOL_SSLv2

---------- assignee: docs at python components: Documentation messages: 389606 nosy: docs at python, illia-v priority: normal severity: normal status: open title: Add warnings to ssl.PROTOCOL_TLSv1 and ssl.PROTOCOL_TLSv1_1 docs type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 11:50:13 2021
From: report at bugs.python.org (Illia Volochii)
Date: Sat, 27 Mar 2021 15:50:13 +0000
Subject: [New-bugs-announce] [issue43641] Update `ssl.PROTOCOL_TLSv1_2` docs since it is not the newest TLS version
Message-ID: <1616860213.53.0.619119120608.issue43641@roundup.psfhosted.org>

New submission from Illia Volochii :

The docs say that TLS 1.2 is the most modern version, and probably the best choice for maximum protection, but TLS 1.3 exists.

https://docs.python.org/3.10/library/ssl.html#ssl.PROTOCOL_TLSv1_2

---------- assignee: docs at python components: Documentation messages: 389608 nosy: docs at python, illia-v priority: normal severity: normal status: open title: Update `ssl.PROTOCOL_TLSv1_2` docs since it is not the newest TLS version type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 11:54:09 2021
From: report at bugs.python.org (An1c0de)
Date: Sat, 27 Mar 2021 15:54:09 +0000
Subject: [New-bugs-announce] [issue43642] ctypes.util.find_library can't find the lib on Alpine
Message-ID: <1616860449.08.0.544658772631.issue43642@roundup.psfhosted.org>

New submission from An1c0de :

Hi! ctypes.util.find_library can't find the lib on Alpine. I think this is because the logic does not cover numeric suffixes.
Docker example:

```
docker run --rm -it python:3.9-alpine sh
apk add libuuid build-base

ls /lib/libuuid*
# /lib/libuuid.so.1
# /lib/libuuid.so.1.3.0

python -c 'from ctypes.util import find_library; print(find_library("uuid"))'
# None

## Workaround:
cd /lib && ln -s libuuid.so.1 libuuid.so
python -c 'from ctypes.util import find_library; print(find_library("uuid"))'
# libuuid.so.1
```

Thanks!

---------- components: ctypes messages: 389609 nosy: An1c0de priority: normal severity: normal status: open title: ctypes.util.find_library can't find the lib on Alpine type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 17:21:46 2021
From: report at bugs.python.org (Andreas Poehlmann)
Date: Sat, 27 Mar 2021 21:21:46 +0000
Subject: [New-bugs-announce] [issue43643] importlib.readers.MultiplexedPath.name is not a property
Message-ID: <1616880106.57.0.50019170707.issue43643@roundup.psfhosted.org>

New submission from Andreas Poehlmann :

Hello, I was using the `importlib_resources` backport and encountered this issue, which is also present in CPython: `importlib.readers.MultiplexedPath.name` is not a property as required by `importlib.abc.Traversable`. I can prepare a pull request if it helps.

Cheers, Andreas

---------- components: Library (Lib) messages: 389615 nosy: ap--, jaraco priority: normal severity: normal status: open title: importlib.readers.MultiplexedPath.name is not a property type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 27 18:10:40 2021
From: report at bugs.python.org (Jason R. Coombs)
Date: Sat, 27 Mar 2021 22:10:40 +0000
Subject: [New-bugs-announce] [issue43644] importlib.resources.as_file undocumented
Message-ID: <1616883040.71.0.873625132004.issue43644@roundup.psfhosted.org>

New submission from Jason R.
Coombs :

As reported in https://github.com/python/importlib_resources/issues/210, the `as_file` function of importlib.resources is undocumented in CPython.

---------- messages: 389624 nosy: jaraco priority: normal severity: normal status: open title: importlib.resources.as_file undocumented _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 28 05:21:24 2021
From: report at bugs.python.org (frathgeber)
Date: Sun, 28 Mar 2021 09:21:24 +0000
Subject: [New-bugs-announce] [issue43645] xmlrpc.client.ServerProxy silently drops query string from URI
Message-ID: <1616923284.46.0.0991006696827.issue43645@roundup.psfhosted.org>

New submission from frathgeber :

The change introduced in https://github.com/python/cpython/pull/15703 (bpo-38038) caused an (I presume unintended) behavior change that breaks some xmlrpc users: previously, the XMLRPC handler was everything after the host part of the URI (https://github.com/python/cpython/blame/32f825393e5836ab71de843596379fa3f9e23c7a/Lib/xmlrpc/client.py#L1428), but now the query string is *discarded* (https://github.com/python/cpython/blame/63298930fb531ba2bb4f23bc3b915dbf1e17e9e1/Lib/xmlrpc/client.py#L1424).
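The difference can be sketched with urllib.parse; the URI and its auth parameters below are made-up illustration values, and the exact code in xmlrpc.client differs in detail:

```python
from urllib.parse import urlsplit

uri = "https://wiki.example.org/lib/exe/xmlrpc.php?u=user&p=token"
parts = urlsplit(uri)

# Old behavior: the handler was everything after the host part,
# so the query string survived.
old_handler = uri[uri.index(parts.netloc) + len(parts.netloc):]
assert old_handler == "/lib/exe/xmlrpc.php?u=user&p=token"

# New behavior: building the handler from .path alone silently
# drops the query string a server may need for authentication.
assert parts.path == "/lib/exe/xmlrpc.php"
assert parts.query == "u=user&p=token"
```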
This is known to break the XMLRPC interface of DokuWiki (https://www.dokuwiki.org/devel:xmlrpc), which uses query parameters for authentication: https://github.com/kynan/dokuwikixmlrpc/issues/8

---------- components: Library (Lib) messages: 389632 nosy: christian.heimes, kynan priority: normal severity: normal status: open title: xmlrpc.client.ServerProxy silently drops query string from URI type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 28 05:52:49 2021
From: report at bugs.python.org (Christodoulos Tsoulloftas)
Date: Sun, 28 Mar 2021 09:52:49 +0000
Subject: [New-bugs-announce] [issue43646] ForwardRef name conflict during evaluation
Message-ID: <1616925169.4.0.788953716536.issue43646@roundup.psfhosted.org>

New submission from Christodoulos Tsoulloftas :

Consider two modules containing forward references with the same name and the same type construct:

./a.py
```
from typing import Optional

class Root:
    a: Optional["Person"]

class Person:
    value: str
```

./b.py
```
from typing import Optional

class Root:
    b: Optional["Person"]

class Person:
    value: str
```

There is a naming conflict, I think due to caching, and the type hint of the second property points to the first one.
```
>>> from typing import get_type_hints, Optional
>>> from a import Root as RootA, Person as PersonA
>>> from b import Root as RootB, Person as PersonB
>>>
>>> roota_hints = get_type_hints(RootA)
>>> rootb_hints = get_type_hints(RootB)
>>>
>>> print(roota_hints)
{'a': typing.Optional[a.Person]}
>>> print(rootb_hints)
{'b': typing.Optional[a.Person]}
>>>
>>> assert roota_hints["a"] == Optional[PersonA]
>>> assert rootb_hints["b"] == Optional[PersonB]  # fails, points to PersonA
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError
>>>
```

The behavior started in Python 3.10; I am not sure which alpha version. I am using 3.10.0a6+.

---------- components: Library (Lib) messages: 389634 nosy: tefra priority: normal severity: normal status: open title: ForwardRef name conflict during evaluation type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 28 11:26:40 2021
From: report at bugs.python.org (Mikhail)
Date: Sun, 28 Mar 2021 15:26:40 +0000
Subject: [New-bugs-announce] [issue43647] Sudden crash on print() of some characters
Message-ID: <1616945200.88.0.69202252104.issue43647@roundup.psfhosted.org>

New submission from Mikhail :

Hi! I'm not sure if it's an IDLE, library, X server or font error, but either way, IDLE is behaving incorrectly. I have installed the 3.8 and 3.9 versions, and the error occurs on both; I do not know about the others, but I suspect this error also occurs there. The error occurs in the following way: if you type any of the characters '\u270(5-f)' in IDLE, its work will stop unexpectedly. I suspect that for many other symbols there will be the same error, but so far I have noticed it only on these.
This is what the output to the terminal says:

```
X Error of failed request:  BadLength (poly request too large or internal Xlib length error)
  Major opcode of failed request:  139 (RENDER)
  Minor opcode of failed request:  20 (RenderAddGlyphs)
  Serial number of failed request:  2131
  Current serial number in output stream:  2131
```

I think the error is somewhere in the system error handler or in the incorrect behavior of the font display (or both :) ); it would be more correct to catch the error and report it in the output, rather than terminate suddenly. My system is Ubuntu 20.04, KDE 5.

---------- assignee: terry.reedy components: IDLE messages: 389635 nosy: terry.reedy, tetelevm priority: normal severity: normal status: open title: Sudden crash on print() of some characters versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 28 12:47:52 2021
From: report at bugs.python.org (Harry)
Date: Sun, 28 Mar 2021 16:47:52 +0000
Subject: [New-bugs-announce] [issue43648] Remove redundant datefmt option in logging file config
Message-ID: <1616950072.91.0.715870512234.issue43648@roundup.psfhosted.org>

New submission from Harry :

In the logging.conf section of the docs, there is a redundant datefmt option: https://docs.python.org/3.10/howto/logging.html#configuring-logging

---------- assignee: docs at python components: Documentation messages: 389637 nosy: Harry-Lees, docs at python priority: normal severity: normal status: open title: Remove redundant datefmt option in logging file config versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 28 14:23:17 2021
From: report at bugs.python.org (Patrick Storz)
Date: Sun, 28 Mar 2021 18:23:17 +0000
Subject: [New-bugs-announce] [issue43649] time.strftime('%z') doesn't return UTC offset in the form ±HHMM
Message-ID: <1616955797.13.0.901128004161.issue43649@roundup.psfhosted.org>

New submission from Patrick Storz :

This is a follow-up to https://bugs.python.org/issue20010

I'm seeing this very issue in a recent gcc build of Python 3.8 (mingw-w64-x86_64-python 3.8.8-2 from the MSYS2 project):

Python 3.8.8 (default, Feb 20 2021, 07:16:03) [GCC 10.2.0 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.strftime('%z', time.localtime(time.time()))
'Mitteleuropäische Sommerzeit'
>>> time.strftime('%Z', time.localtime(time.time()))
'Mitteleuropäische Sommerzeit'

If this is indeed fixed in MSVCRT, it seems the behavior is still not guaranteed when compiling with mingw-w64 gcc.

---------- components: Library (Lib), Windows messages: 389641 nosy: Aaron.Meurer, Ede123, Václav Dvořák, civalin, docs at python, eryksun, ezio.melotti, kepkin, martin-t, paul.moore, r.david.murray, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: time.strftime('%z') doesn't return UTC offset in the form ±HHMM type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 28 17:21:23 2021
From: report at bugs.python.org (igor voltaic)
Date: Sun, 28 Mar 2021 21:21:23 +0000
Subject: [New-bugs-announce] [issue43650] MemoryError on zip.read in shutil._unpack_zipfile
Message-ID: <1616966483.79.0.479664444023.issue43650@roundup.psfhosted.org>

New submission from igor voltaic :

MemoryError: null
...
  File "....", line 13, in repack__file
    shutil.unpack_archive(local_file_path, local_dir)
  File "python3.6/shutil.py", line 983, in unpack_archive
    func(filename, extract_dir, **kwargs)
  File "python3.6/shutil.py", line 901, in _unpack_zipfile
    data = zip.read(info.filename)
  File "python3.6/zipfile.py", line 1338, in read
    return fp.read()
  File "python3.6/zipfile.py", line 858, in read
    buf += self._read1(self.MAX_N)
  File "python3.6/zipfile.py", line 948, in _read1
    data = self._decompressor.decompress(data, n)

shutil.unpack_archive tries to read the whole file into memory, without making use of any buffering at all. Python crashes for really large files. In my case: archive ~1.7G, unpacked ~10G. Interestingly, zipfile.ZipFile.extractall handles this case more efficiently.

---------- components: Library (Lib) messages: 389652 nosy: igorvoltaic priority: normal severity: normal status: open title: MemoryError on zip.read in shutil._unpack_zipfile type: crash versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 29 00:30:27 2021
From: report at bugs.python.org (Inada Naoki)
Date: Mon, 29 Mar 2021 04:30:27 +0000
Subject: [New-bugs-announce] [issue43651] PEP 597: Fix EncodingError
Message-ID: <1616992227.89.0.174128182891.issue43651@roundup.psfhosted.org>

Change by Inada Naoki :

---------- nosy: methane priority: normal severity: normal status: open title: PEP 597: Fix EncodingError versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 29 03:07:11 2021
From: report at bugs.python.org (Terry J. Reedy)
Date: Mon, 29 Mar 2021 07:07:11 +0000
Subject: [New-bugs-announce] [issue43652] Upgrade Windows tcl/tk to 8.6.11
Message-ID: <1617001631.71.0.115330422511.issue43652@roundup.psfhosted.org>

New submission from Terry J.
Reedy : #39017, PR 22405 was too late for 3.9, but the new Mac installer is already using 8.6.11. Serhiy, do you know any reason not to upgrade the Windows installer to 8.6.11 also? Steve, should a new PR with '10' replaced with '11', where '9' was replaced with '10' before, be sufficient? I presume the 'v14' for VC13 should be left alone. To test, would 8.6.11 be built on my system, or is it fetched externally? ---------- components: Tkinter, Windows messages: 389662 nosy: paul.moore, serhiy.storchaka, steve.dower, terry.reedy, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Upgrade Windows tcl/tk to 8.6.11 type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 03:57:52 2021 From: report at bugs.python.org (Matteo Bertucci) Date: Mon, 29 Mar 2021 07:57:52 +0000 Subject: [New-bugs-announce] [issue43653] Typo in the random.shuffle docs Message-ID: <1617004672.48.0.32476010858.issue43653@roundup.psfhosted.org> New submission from Matteo Bertucci : Hello! The current documentation for random.shuffle reads: > The optional argument random is a 0-argument function returning a random float in [0.0, 1.0); by default, this is the function random(). I believe the range here should use matching symbols, unless I am missing something. ---------- assignee: docs at python components: Documentation messages: 389669 nosy: Akarys, docs at python priority: normal severity: normal status: open title: Typo in the random.shuffle docs type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 04:40:35 2021 From: report at bugs.python.org (Terry J. 
Reedy) Date: Mon, 29 Mar 2021 08:40:35 +0000 Subject: [New-bugs-announce] [issue43654] IDLE: Applying settings disables tab completion Message-ID: <1617007235.89.0.0831829954955.issue43654@roundup.psfhosted.org> New submission from Terry J. Reedy : (Original report by Mikhail on #43647, running 3.9 on Linux; verified and extended by me running 3.10 on Windows.) Normally, "i" followed by Tab brings up a completion window with 'id', 'if', 'import', etc. Opening a Settings window with Options => Configure IDLE and closing with Apply and Cancel, or OK (which also applies), disables tab completion. Other completions (forced with ^space or auto with '.' or '/' and a wait) seem not affected. The only way to restore is to close and reopen each window.

Tab completions are enabled in editor.py with these two lines:

    text.event_add('<<autocomplete>>', '<Key-Tab>')
    text.bind("<<autocomplete>>", autocomplete.autocomplete_event)

Attribute and path completions, not affected, are enabled with these:

    text.event_add('<<try-open-completions>>', '<KeyRelease-period>', '<KeyRelease-slash>', '<KeyRelease-backslash>')
    text.bind("<<try-open-completions>>", autocomplete.try_open_completions_event)

Similarly for some other things. In configdialog, the relevant method is (179) def apply, whose relevant calls are (219) deactivate_current_config and (230) activate_current_config. The former removes key bindings and the latter rebinds and makes other changes. What is different about Tab versus '.' is that Tab is also used for indents, and the indent space is reset by 'activate...'. I will later add some debug prints to console based on the clues above. 
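[Editor's note] The two-step virtual-event wiring described above can be sketched with plain tkinter (the handler and function names here are stand-ins for illustration, not IDLE's actual code):

```python
import tkinter as tk


def wire_tab_completion(text, handler):
    # Step 1: attach the physical Tab key to the <<autocomplete>>
    # virtual event; step 2: bind a handler to the virtual event.
    # This mirrors the pattern in the editor.py lines quoted above.
    text.event_add('<<autocomplete>>', '<Key-Tab>')
    text.bind('<<autocomplete>>', handler)


def unwire_tab(text):
    # If reconfiguration removes the keysym behind the virtual event
    # and never re-adds '<Key-Tab>', the bind() above survives but the
    # event can no longer fire -- one way the reported symptom could
    # arise (an assumption, not a confirmed diagnosis).
    text.event_delete('<<autocomplete>>', '<Key-Tab>')


if __name__ == '__main__':
    root = tk.Tk()
    text = tk.Text(root)
    text.pack()
    # Returning 'break' stops Tab from also inserting an indent.
    wire_tab_completion(text, lambda event: 'break')
    root.mainloop()
```

Running the guarded block requires a display; the functions themselves only exercise the standard event_add/bind/event_delete API.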
---------- assignee: terry.reedy components: IDLE messages: 389673 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: Applying settings disables tab completion type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 04:53:43 2021 From: report at bugs.python.org (Patrick Melix) Date: Mon, 29 Mar 2021 08:53:43 +0000 Subject: [New-bugs-announce] [issue43655] Tkinter: Not setting _NET_WM_WINDOW_TYPE on FileDialog Message-ID: <1617008023.65.0.620967294098.issue43655@roundup.psfhosted.org> New submission from Patrick Melix : While trying to fix window behaviour in a python project (ASE: https://wiki.fysik.dtu.dk/ase/), I came across this problem: Tkinter does not set the _NET_WM_WINDOW_TYPE when using the FileDialog class or its derivatives. I could not find a reason for this and it leads to my window manager (i3) not automatically recognising the window as a dialogue (and thus not enabling floating). I think the window types are there exactly for that purpose, so I don't see why not to set this as the default for the FileDialog class. I was able to change this by adding the line ```self.top.wm_attributes('-type', 'dialog')``` to the initialization of the FileDialog class. See also MR on GitHub. Since I am an absolute beginner at this, please do forgive if I missed something. 
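[Editor's note] A minimal illustration of the proposed one-line fix on a plain Toplevel (a sketch; '-type' is an X11-only attribute, and tkinter raises TclError for it on other platforms):

```python
import tkinter as tk


def make_dialog_toplevel(parent):
    top = tk.Toplevel(parent)
    try:
        # EWMH hint: mark the window as a dialog
        # (_NET_WM_WINDOW_TYPE_DIALOG) so tiling window managers
        # such as i3 float it automatically.
        top.wm_attributes('-type', 'dialog')
    except tk.TclError:
        # Not on X11 (e.g. Windows/macOS); the hint does not apply.
        pass
    return top


if __name__ == '__main__':
    root = tk.Tk()
    make_dialog_toplevel(root)
    root.mainloop()
```

The try/except keeps the sketch portable, since the '-type' attribute only exists under the X11 windowing system.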
---------- components: Tkinter messages: 389676 nosy: patrickmelix priority: normal severity: normal status: open title: Tkinter: Not setting _NET_WM_WINDOW_TYPE on FileDialog type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 05:36:49 2021 From: report at bugs.python.org (Martin) Date: Mon, 29 Mar 2021 09:36:49 +0000 Subject: [New-bugs-announce] [issue43656] StackSummary.format fails if str(value) fails Message-ID: <1617010609.77.0.749976985588.issue43656@roundup.psfhosted.org> New submission from Martin : With `capture_locals=True`, `StackSummary.format` prints the local variables for every frame: https://github.com/python/cpython/blob/4827483f47906fecee6b5d9097df2a69a293a85c/Lib/traceback.py#L440 This will fail, however, if string conversion fails. StackSummary.format should be robust towards such possibilities. An easy fix would be a utility function:

```
def try_str(x):
    try:
        return str(x)
    except:
        return ""
```

---------- messages: 389679 nosy: moi90 priority: normal severity: normal status: open title: StackSummary.format fails if str(value) fails type: enhancement versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 08:52:29 2021 From: report at bugs.python.org (Walter White) Date: Mon, 29 Mar 2021 12:52:29 +0000 Subject: [New-bugs-announce] [issue43657] shutil.rmtree fails on readonly files in Windows, onerror not called Message-ID: <1617022349.59.0.0395025567771.issue43657@roundup.psfhosted.org> New submission from Walter White : shutil.rmtree fails on readonly files in Windows. Usually people are using the onerror callback to handle file permissions and retry, but that is not possible in this case because it is not triggered. 
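[Editor's note] The onerror-based retry mentioned above is usually written along the following lines (a self-contained sketch that builds its own read-only file; whether it helps in this report depends on why the PermissionError is raised):

```python
import os
import shutil
import stat
import tempfile


def force_remove(func, path, exc_info):
    # Called by rmtree when a removal fails: clear the read-only
    # attribute and retry the original operation (os.unlink/os.rmdir).
    os.chmod(path, stat.S_IWRITE)
    func(path)


# Build a small tree containing a read-only file.
tree = tempfile.mkdtemp()
readonly = os.path.join(tree, "somefile.txt")
open(readonly, "w").close()
os.chmod(readonly, stat.S_IREAD)

shutil.rmtree(tree, onerror=force_remove)
print(os.path.exists(tree))  # False
```

On POSIX the read-only bit on the file does not block unlinking, so force_remove may never be invoked there; on Windows it is the usual recovery path.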
onerror is only triggered if an OSError is raised. In my case the unlink throws a PermissionError.

Code from shutil.rmtree() internals:

    try:
        os.unlink(fullname)
    except OSError:
        onerror(os.unlink, fullname, sys.exc_info())

Traceback:

Traceback (most recent call last):
  File "c:\Users\user\test.py", line 121, in <module>
    shutil.rmtree(shutil.rmtree(working_dir),
  File "C:\python-3.9.1.amd64\lib\shutil.py", line 740, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "C:\python-3.9.1.amd64\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "C:\python-3.9.1.amd64\lib\shutil.py", line 613, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "C:\python-3.9.1.amd64\lib\shutil.py", line 618, in _rmtree_unsafe
    onerror(os.unlink, fullname, sys.exc_info())
  File "C:\python-3.9.1.amd64\lib\shutil.py", line 616, in _rmtree_unsafe
    os.unlink(fullname)
PermissionError: [WinError 5] Access denied: 'C:\\Users\\user\\somefile.txt'

os.stat: st_mode=33060 st_ino=34621422136837665 st_dev=3929268297 st_nlink=1 st_uid=0 st_gid=0 ---------- components: Library (Lib) messages: 389697 nosy: homerun4711 priority: normal severity: normal status: open title: shutil.rmtree fails on readonly files in Windows, onerror not called type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 12:53:01 2021 From: report at bugs.python.org (kale-smoothie) Date: Mon, 29 Mar 2021 16:53:01 +0000 Subject: [New-bugs-announce] [issue43658] implementations of the deprecated load_module import loader API, as prescribed by the documentation, are not thread safe Message-ID: <1617036781.07.0.980162704543.issue43658@roundup.psfhosted.org> New submission from kale-smoothie : Unless I've misread or misunderstood, the documentation at https://docs.python.org/3/reference/import.html#loaders for the deprecated `load_module` method doesn't indicate any requirements or caveats for thread-safe importing. 
As it stands, I think it is not thread-safe, since the module is not protected against concurrent imports by the internal implementation marker `__spec__._initializing = True`. Additionally, the deprecated function decorator `importlib.util.module_for_loader` seems to implement the marker incorrectly (sets `__initializing__` directly on the module). I think this behaviour should either be documented as a major caveat, or internal details exposed to allow thread-safe implementations, or the old API removed entirely. ---------- assignee: docs at python components: Documentation files: thread_unsafe_import.py messages: 389713 nosy: docs at python, kale-smoothie priority: normal severity: normal status: open title: implementations of the deprecated load_module import loader API, as prescribed by the documentation, are not thread safe type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49916/thread_unsafe_import.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 13:51:13 2021 From: report at bugs.python.org (Michael Felt) Date: Mon, 29 Mar 2021 17:51:13 +0000 Subject: [New-bugs-announce] [issue43659] AIX: test_curses crashes buildbot Message-ID: <1617040273.92.0.487968447995.issue43659@roundup.psfhosted.org> New submission from Michael Felt : Since issue42789 the AIX bots have crashed - to the extent that the bots did not even return results. Part of this has been resolved, for now, by using:

$ export TERM=unknown
$ buildbot start buildarea

However, the tests still crash because AIX default libcurses.a does not include support for update_lines_cols(). This patch should allow test_curses.py to pass in the buildbot. 
When run from command-line as:

$ TERM=unknown ./python Lib/test/test_curses.py
.ss......ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss
----------------------------------------------------------------------
Ran 71 tests in 0.121s

OK (skipped=64)
aixtools at cpython2:[/home/aixtools/py3a-10.0]

(When TERM is defined - a core dump still occurs - that will be a new issue and a new PR). ---------- components: Tests messages: 389716 nosy: Michael.Felt priority: normal severity: normal status: open title: AIX: test_curses crashes buildbot versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 14:42:46 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 29 Mar 2021 18:42:46 +0000 Subject: [New-bugs-announce] [issue43660] Segmentation fault when overriding sys.stderr Message-ID: <1617043366.54.0.823715047816.issue43660@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : This code crashes (reported by the one and only Matt Wozniski):

import sys

class MyStderr:
    def write(self, s):
        sys.stderr = None

sys.stderr = MyStderr()
1/0

[1] 34112 segmentation fault ./python.exe lel.py

---------- components: Interpreter Core messages: 389722 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: Segmentation fault when overriding sys.stderr versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 15:08:04 2021 From: report at bugs.python.org (Tom Kacvinsky) Date: Mon, 29 Mar 2021 19:08:04 +0000 Subject: [New-bugs-announce] [issue43661] api-ms-win-core-path-l1-1.0.dll, redux of 40740 (which has since been closed) Message-ID: <1617044884.9.0.569070921495.issue43661@roundup.psfhosted.org> New submission from Tom Kacvinsky : Even though bpo#40740 has been closed, I 
wanted to re-raise the issue as this affects me. There are only two functions that come from this missing DLL:

- PathCchCombineEx
- PathCchCanonicalizeEx

Would there be a way of rewriting join/canonicalize in getpathp.c (each of which uses one of the above functions) to _not_ rely on functions from a missing DLL on Windows 7 SP1? Or has the ship truly sailed on this matter? ---------- components: C API messages: 389727 nosy: tkacvinsky priority: normal severity: normal status: open title: api-ms-win-core-path-l1-1.0.dll, redux of 40740 (which has since been closed) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 29 16:32:53 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 29 Mar 2021 20:32:53 +0000 Subject: [New-bugs-announce] [issue43662] test_tools: test_reindent_file_with_bad_encoding() fails on s390x RHEL7 LTO + PGO 3.x Message-ID: <1617049973.14.0.713223722429.issue43662@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/244/builds/931 At commit 9b999479c0022edfc9835a8a1f06e046f3881048 (...)

test_reindent_file_with_bad_encoding (test.test_tools.test_reindent.ReindentTests) ... FAIL

(...) 
======================================================================
FAIL: test_reindent_file_with_bad_encoding (test.test_tools.test_reindent.ReindentTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.lto-pgo/build/Lib/test/test_tools/test_reindent.py", line 29, in test_reindent_file_with_bad_encoding
    rc, out, err = assert_python_ok(self.script, '-r', bad_coding_path)
  File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.lto-pgo/build/Lib/test/support/script_helper.py", line 160, in assert_python_ok
    return _assert_python(True, *args, **env_vars)
  File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.lto-pgo/build/Lib/test/support/script_helper.py", line 145, in _assert_python
    res.fail(cmd_line)
  File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.lto-pgo/build/Lib/test/support/script_helper.py", line 72, in fail
    raise AssertionError("Process return code is %d\n"
AssertionError: Process return code is 1
command line: ['/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.lto-pgo/build/python', '-X', 'faulthandler', '-I', '/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.lto-pgo/build/Tools/scripts/reindent.py', '-r', '/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z.lto-pgo/build/Lib/test/bad_coding.py']
stdout:
---
---
stderr:
---
SyntaxError: encoding problem: encoding
---

Can it be related to the following change? 
commit 261a452a1300eeeae1428ffd6e6623329c085e2c
Author: Pablo Galindo
Date:   Sun Mar 28 23:48:05 2021 +0100

    bpo-25643: Refactor the C tokenizer into smaller, logical units (GH-25050)

---------- components: Tests messages: 389739 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_tools: test_reindent_file_with_bad_encoding() fails on s390x RHEL7 LTO + PGO 3.x versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 00:34:43 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 30 Mar 2021 04:34:43 +0000 Subject: [New-bugs-announce] [issue43663] Python interpreter works abnormally after interrupting logging.config.fileConfig() Message-ID: <1617078883.29.0.318397860885.issue43663@roundup.psfhosted.org> New submission from Xinmeng Xia : The Python interpreter does not work correctly and fails to report errors after interrupting logging.config.fileConfig().

Steps to reproduce:
1. Type python3 in a console
2. Type import logging.config; logging.config.fileConfig({2,2,'sdf'},'')
3. Press Ctrl-C
4. Type 1/0

-------------------------------------------------------------------------------
Python 3.9.2 (default, Mar 12 2021, 15:08:35) [GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information. 
>>> 1/0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>> import logging.config
>>> logging.config.fileConfig({2,2,'sdf'},'')
^C>>> 1/0
>>>
--------------------------------------------------------------------------------

Expected result: 1/0 will raise a ZeroDivisionError after interrupting "logging.config.fileConfig({2,2,'sdf'},'')"

Actual result: Nothing is output

Python 3.9.2, Ubuntu 16.04 ---------- components: Library (Lib) messages: 389788 nosy: xxm priority: normal severity: normal status: open title: Python interpreter works abnormally after interrupting logging.config.fileConfig() type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 00:35:44 2021 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 30 Mar 2021 04:35:44 +0000 Subject: [New-bugs-announce] [issue43664] Long computations in pdb.run() lead to segfault Message-ID: <1617078944.41.0.741182615703.issue43664@roundup.psfhosted.org> New submission from Xinmeng Xia : Long computations in pdb.run() lead to interpreter crashes.

Crash example:
=======================================================
Python 3.9.2 (default, Mar 12 2021, 15:08:35) [GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information. 
>>> import pdb
>>> pdb.run("1+2"*1000000)
Segmentation fault (core dumped)
=======================================================

Environment: Ubuntu 16.04, Python 3.9.2, Python 3.10.0a6; Mac OS Big Sur 11.2.3, Python 3.9.1, Python 3.10.0a2 ---------- components: Library (Lib) messages: 389789 nosy: xxm priority: normal severity: normal status: open title: Long computations in pdb.run() lead to segfault type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 03:30:31 2021 From: report at bugs.python.org (Michael Felt) Date: Tue, 30 Mar 2021 07:30:31 +0000 Subject: [New-bugs-announce] [issue43665] AIX: test_importlib regression (ENV change) Message-ID: <1617089431.51.0.0013168061581.issue43665@roundup.psfhosted.org> New submission from Michael Felt : Since issue43517 test_importlib 'fails' (bot status) with ENV_CHANGED. The core dump is caused by SIGTRAP. I need help to learn how to stop the core dump from being cleaned up so I can load it into dbx and hopefully understand/learn which sub-test is actually having issues. e.g., see https://buildbot.python.org/all/#/builders/438/builds/1031/steps/5/logs/stdio for current bot exit status Thx for assistance. 
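[Editor's note] This does not answer the buildbot-cleanup question, but a common first step when chasing a missing core file is making sure the process is allowed to write one at all - the Python equivalent of `ulimit -c unlimited` (POSIX-only sketch):

```python
import resource

# Raise the soft core-file size limit to the hard limit so a crashing
# child process can leave a core dump behind for dbx/gdb.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

print(resource.getrlimit(resource.RLIMIT_CORE)[0] == hard)  # True
```

The limit is inherited by child processes started afterwards, so it should be set before the test run is launched; whether the test harness later deletes the core is a separate matter.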
---------- components: Tests messages: 389797 nosy: Michael.Felt priority: normal severity: normal status: open title: AIX: test_importlib regression (ENV change) versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 04:37:28 2021 From: report at bugs.python.org (Michael Felt) Date: Tue, 30 Mar 2021 08:37:28 +0000 Subject: [New-bugs-announce] [issue43666] AIX: Lib/_aix_support.py may break in a WPAR environment Message-ID: <1617093448.54.0.678139047472.issue43666@roundup.psfhosted.org> New submission from Michael Felt : When working in a WPAR (workload partition) the routines supporting aix_platform() may fail if there is no related builddate for bos.mp64.

a) the fileset queried is changed to `bos.rte`
b) an extreme value (9988) is returned for any similar (unexpected) situations - so that, in any case, the build of Python can proceed.

---------- components: Library (Lib) messages: 389804 nosy: Michael.Felt priority: normal severity: normal status: open title: AIX: Lib/_aix_support.py may break in a WPAR environment versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 06:11:34 2021 From: report at bugs.python.org (Jakub Kulik) Date: Tue, 30 Mar 2021 10:11:34 +0000 Subject: [New-bugs-announce] [issue43667] Solaris: Fix broken Unicode encoding in non-UTF locales Message-ID: <1617099094.43.0.768783655635.issue43667@roundup.psfhosted.org> New submission from Jakub Kulik : On Linux, wchar_t values are mapped to their UTF-8 counterparts; however, that does not have to be the case as the standard allows any arbitrary representation to be used, and this is the case for Solaris. 
In Oracle Solaris, the internal form of wchar_t is specific to a locale; in the Unicode locales, wchar_t has the UTF-32 Unicode encoding form, and other locales have different representations [1]. This is an issue because Python expects wchar_t to correspond with Unicode, which on Oracle Solaris with a non-UTF locale results either in errors (values are outside the Unicode range) or in output with different symbols. Unicode locales work as expected, but they are not an acceptable workaround for some Oracle Solaris users that cannot use Unicode encoding for various reasons. Because of that, we fixed it a few months ago with a patch to `PyUnicode_FromWideChar`, which handles conversion to Unicode (attached in PR). It was tested over the last half a year, and we didn't see any related issues since. Is something like this acceptable, or should it be fixed in a different place or in a different way? All comments are appreciated. [1] https://docs.oracle.com/cd/E36784_01/html/E39536/gmwkm.html ---------- components: Unicode messages: 389813 nosy: ezio.melotti, kulikjak, vstinner priority: normal severity: normal status: open title: Solaris: Fix broken Unicode encoding in non-UTF locales versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 07:05:31 2021 From: report at bugs.python.org (axel) Date: Tue, 30 Mar 2021 11:05:31 +0000 Subject: [New-bugs-announce] [issue43668] Segfault with for fresh ubuntu 20.04 install Message-ID: <1617102331.02.0.538346172049.issue43668@roundup.psfhosted.org> New submission from axel : The Python interpreter segfaults when running in a miniconda environment on a fresh install of Ubuntu 20.04.2. This seems to happen intermittently, both while running "pip" during the conda setup of an environment and during the execution of code like below. 
The issue has mostly been reproduced with conda, but seems to happen regardless, which is why I suspect it is a Python bug. It is very odd that I can't seem to find anyone else with the same issue. The segfault always occurs when running the following code, which reads texts from files and tokenizes the result. The segfault location changes from run to run. Also, the exact same code can run on another computer with the same conda environment on Ubuntu 18.04. The core dumps always point to some function in the unicodeobject.c file in Python, but the exact function changes from crash to crash. At least one crash has a clear dereferenced pointer 0x0 where the "unicode object" should be. My guess is that something causes the Python interpreter to throw away the pointed-to unicode object while it is still being worked on, causing a segfault. But any bug in the interpreter or NLTK should have been noticed by more users, and I cannot find anyone with similar issues.

Things tried that didn't fix the issue:
1. Reformatting and reinstalling Ubuntu
2. Switching to Ubuntu 18.04 (on this computer; another computer with 18.04 can run the code just fine)
3. Replacing hardware, to ensure that the RAM or SSD disk isn't broken
4. Changing to Python versions 3.8.6, 3.8.8, 3.9.2
5. Cloning the conda environment from a working computer to the broken one

Attached is one stacktrace of the fault handler along with its corresponding core dump stack trace from gdb.

```
(eo) axel at minimind:~/test$ python tokenizer_mini.py
2021-03-30 11:10:15.588399: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-03-30 11:10:15.588426: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 
Fatal Python error: Segmentation fault

Current thread 0x00007faa73bbe740 (most recent call first):
  File "tokenizer_mini.py", line 36 in preprocess_string
  File "tokenizer_mini.py", line 51 in <module>
Segmentation fault (core dumped)
```

```
#0  raise (sig=) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  <signal handler called>
#2  find_maxchar_surrogates (num_surrogates=, maxchar=, end=0x4, begin=0x0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/unicodeobject.c:1703
#3  _PyUnicode_Ready (unicode=0x7f7e4e04d7f0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/unicodeobject.c:1742
#4  0x000055cd65f6df6a in PyUnicode_RichCompare (left=0x7f7e4cf43fb0, right=, op=2) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/unicodeobject.c:11205
#5  0x000055cd6601712a in do_richcompare (op=2, w=0x7f7e4e04d7f0, v=0x7f7e4cf43fb0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/object.c:726
#6  PyObject_RichCompare (op=2, w=0x7f7e4e04d7f0, v=0x7f7e4cf43fb0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/object.c:774
#7  PyObject_RichCompareBool (op=2, w=0x7f7e4e04d7f0, v=0x7f7e4cf43fb0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/object.c:796
#8  list_contains (a=0x7f7e4e04b4c0, el=0x7f7e4cf43fb0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/listobject.c:455
#9  0x000055cd660be41b in PySequence_Contains (ob=0x7f7e4cf43fb0, seq=0x7f7e4e04b4c0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/abstract.c:2083
#10 cmp_outcome (w=0x7f7e4e04b4c0, v=0x7f7e4cf43fb0, op=, tstate=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:5082
#11 _PyEval_EvalFrameDefault (f=, throwflag=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:2977
#12 0x000055cd6609f706 in PyEval_EvalFrameEx (throwflag=0, f=0x7f7e4f4d3c40) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:738
#13 function_code_fastcall (globals=, nargs=, args=, co=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/call.c:284
#14 _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Objects/call.c:411
#15 0x000055cd660be54f in _PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0x7f7f391985b8, callable=0x7f7f39084160) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Include/cpython/abstract.h:115
#16 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=0x55cd66c2e880) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4963
#17 _PyEval_EvalFrameDefault (f=, throwflag=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:3500
#18 0x000055cd6609e503 in PyEval_EvalFrameEx (throwflag=0, f=0x7f7f39198440) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4298
#19 _PyEval_EvalCodeWithName (_co=, globals=, locals=, args=, argcount=, kwnames=, kwargs=, kwcount=, kwstep=, defs=, defcount=, kwdefs=, closure=, name=, qualname=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4298
#20 0x000055cd6609f559 in PyEval_EvalCodeEx (_co=, globals=, locals=, args=, argcount=, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:4327
#21 0x000055cd661429ab in PyEval_EvalCode (co=, globals=, locals=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ceval.c:718
#22 0x000055cd66142a43 in run_eval_code_obj (co=0x7f7f3910f240, globals=0x7f7f391fad80, locals=0x7f7f391fad80) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:1165
#23 0x000055cd6615c6b3 in run_mod (mod=, filename=, globals=0x7f7f391fad80, locals=0x7f7f391fad80, flags=, arena=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:1187
--Type <RET> for more, q to quit, c to continue without paging--
#24 0x000055cd661615b2 in pyrun_file (fp=0x55cd66c2cdf0, filename=0x7f7f391bbee0, start=, globals=0x7f7f391fad80, locals=0x7f7f391fad80, closeit=1, flags=0x7ffe3ee6f8e8) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:1084
#25 0x000055cd66161792 in pyrun_simple_file (flags=0x7ffe3ee6f8e8, closeit=1, filename=0x7f7f391bbee0, fp=0x55cd66c2cdf0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:439
#26 PyRun_SimpleFileExFlags (fp=0x55cd66c2cdf0, filename=, closeit=1, flags=0x7ffe3ee6f8e8) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/pythonrun.c:472
#27 0x000055cd66161d0d in pymain_run_file (cf=0x7ffe3ee6f8e8, config=0x55cd66c2da70) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:391
#28 pymain_run_python (exitcode=0x7ffe3ee6f8e0) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:616
#29 Py_RunMain () at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:695
#30 0x000055cd66161ec9 in Py_BytesMain (argc=, argv=) at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Modules/main.c:1127
#31 0x00007f7f3a3620b3 in __libc_start_main (main=0x55cd65fe3490
, argc=2, argv=0x7ffe3ee6fae8, init=, fini=, rtld_fini=, stack_end=0x7ffe3ee6fad8) at ../csu/libc-start.c:308 #32 0x000055cd660d7369 in _start () at /home/conda/feedstock_root/build_artifacts/python-split_1613835706476/work/Python/ast.c:937 ``` The conda environment used is below, using Miniconda3-py38_4.9.2-Linux-x86_64.sh (note that the segfault does sometimes occur during the setup of a conda environment so it's probably not related to the env) ``` name: eo channels: - conda-forge - defaults dependencies: - python=3.8.8 - pip=20.3.1 - pip: - transformers==4.3.2 - tensorflow_gpu==2.4.0 - scikit-learn==0.23.2 - nltk==3.5 - matplotlib==3.2.1 - seaborn==0.11.0 - tensorflow-addons==0.11.2 - tf-models-official==2.4.0 - gspread==3.6.0 - oauth2client==4.1.3 - ipykernel==5.4.2 - autopep8==1.5.4 - torch==1.7.1 ``` The code below consistently reproduces the problem, the files read are simple text files containing unicode text: ```python from nltk.tokenize import wordpunct_tokenize from tensorflow.keras.preprocessing.text import Tokenizer from nltk.stem.snowball import SnowballStemmer from nltk.corpus import stopwords import pickle from pathlib import Path import faulthandler faulthandler.enable() def load_data(root_path, feature, index): feature_root = root_path / feature dir1 = str(index // 10_000) base_path = feature_root / dir1 / str(index) full_path = base_path.with_suffix('.txt') data = None with open(full_path, 'r', encoding='utf-8') as f: data = f.read() return data def preprocess_string(text, stemmer, stop_words): word_tokens = wordpunct_tokenize(text.lower()) alpha_tokens = [] for w in word_tokens: try: if (w.isalpha() and w not in stop_words): alpha_tokens.append(w) except: print("Something went wrong when handling the word: ", w) clean_tokens = [] for w in alpha_tokens: try: word = stemmer.stem(w) clean_tokens.append(word) except: print("Something went wrong when stemming the word: ", w) clean_tokens.append(w) return clean_tokens stop_words = 
stopwords.words('english')
stemmer = SnowballStemmer(language='english')
tokenizer = Tokenizer()
root_path = '/srv/patent/EbbaOtto/E'

for idx in range(0, 57454):
    print(f'Processed {idx}/57454', end='\r')
    desc = str(load_data(Path(root_path), 'clean_description', idx))
    desc = preprocess_string(desc, stemmer, stop_words)
    tokenizer.fit_on_texts([desc])
```

For more readable formatting, read the stackoverflow post regarding the same issue: https://stackoverflow.com/questions/66868753/segfault-with-for-fresh-ubuntu-20-04-install-using-conda

---------- components: Interpreter Core messages: 389816 nosy: axel_1234 priority: normal severity: normal status: open title: Segfault with for fresh ubuntu 20.04 install type: crash versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 08:10:46 2021 From: report at bugs.python.org (Christian Heimes) Date: Tue, 30 Mar 2021 12:10:46 +0000 Subject: [New-bugs-announce] [issue43669] PEP 644: Require OpenSSL 1.1.1 or newer Message-ID: <1617106246.37.0.0210118077092.issue43669@roundup.psfhosted.org> New submission from Christian Heimes : Tracker ticket for PEP 644, https://www.python.org/dev/peps/pep-0644/ This PEP proposes for CPython's standard library to support only OpenSSL 1.1.1 LTS or newer. Support for OpenSSL versions past end-of-lifetime, incompatible forks, and other TLS libraries are dropped.
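A quick way to see what a given CPython build links against, using attributes the stdlib ssl module already exposes (a sketch to accompany the PEP, not part of it):

```python
import ssl

# Report the OpenSSL version this interpreter was built against.
print(ssl.OPENSSL_VERSION)        # human-readable version string
print(ssl.OPENSSL_VERSION_INFO)   # (major, minor, fix, patch, status) tuple

# Under PEP 644, builds against anything older than 1.1.1 become
# unsupported, so modern features such as TLS 1.3 can be assumed.
if ssl.OPENSSL_VERSION_INFO >= (1, 1, 1):
    print("PEP 644 baseline met; TLS 1.3 available:", ssl.HAS_TLSv1_3)
```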
---------- assignee: christian.heimes components: SSL messages: 389823 nosy: christian.heimes priority: normal severity: normal status: open title: PEP 644: Require OpenSSL 1.1.1 or newer type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 09:10:44 2021 From: report at bugs.python.org (Ilya Gruzinov) Date: Tue, 30 Mar 2021 13:10:44 +0000 Subject: [New-bugs-announce] [issue43670] Typo in 3.10 changelog Message-ID: <1617109844.58.0.922515260201.issue43670@roundup.psfhosted.org> New submission from Ilya Gruzinov : In the following lines there is a typo in the call to the function `load`:

    # BUG: "rb" mode or encoding="utf-8" should be used.
    with open("data.json") as f:
        data = json.laod(f)

---------- assignee: docs at python components: Documentation messages: 389825 nosy: docs at python, shagren priority: normal pull_requests: 23843 severity: normal status: open title: Typo in 3.10 changelog versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 10:34:47 2021 From: report at bugs.python.org (Samuel Kirwin) Date: Tue, 30 Mar 2021 14:34:47 +0000 Subject: [New-bugs-announce] [issue43671] segfault when using tkinter + pygame for ~5 minutes Message-ID: <1617114887.1.0.547947822619.issue43671@roundup.psfhosted.org> New submission from Samuel Kirwin : Per the attached file, Python segfaulted while testing an adapted version of pygame's alien script as part of research. This has occurred twice, about 5 minutes in each time. I had Console running with all messages at the time, if more logs are needed.
macOS Big Sur 11.2.3 on a MacBook Air (Retina, 13-inch, 2018) with a 1.6 GHz Dual-Core Intel Core i5 & 8 GB 2133 MHz LPDDR3 memory ---------- components: Tkinter files: segfault.rtf messages: 389827 nosy: Pycryptor10 priority: normal severity: normal status: open title: segfault when using tkinter + pygame for ~5 minutes type: crash versions: Python 3.9 Added file: https://bugs.python.org/file49919/segfault.rtf _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 11:44:53 2021 From: report at bugs.python.org (Brett Cannon) Date: Tue, 30 Mar 2021 15:44:53 +0000 Subject: [New-bugs-announce] [issue43672] Raise ImportWarning when calling find_loader() Message-ID: <1617119093.18.0.759506550026.issue43672@roundup.psfhosted.org> New submission from Brett Cannon : Using find_loader() in the import system should raise ImportWarning, to start transitioning people who haven't migrated since Python 3.4 over to find_spec().
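For reference, a sketch of the replacement API the message points to — finders expose find_spec(), which returns a ModuleSpec instead of a bare loader (the names below are from importlib):

```python
from importlib.machinery import PathFinder
from importlib.util import find_spec

# Modern hook: find_spec() returns a ModuleSpec carrying the loader,
# origin, and submodule search locations.
spec = PathFinder.find_spec("json")
print(spec.name, spec.loader)

# High-level entry point that drives the same machinery; the legacy
# find_loader()/find_module() hooks are what the warning would target.
print(find_spec("json").origin)
```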
---------- assignee: brett.cannon components: Interpreter Core messages: 389834 nosy: brett.cannon priority: normal severity: normal status: open title: Raise ImportWarning when calling find_loader() versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 13:42:28 2021 From: report at bugs.python.org (William (David) Wilcox) Date: Tue, 30 Mar 2021 17:42:28 +0000 Subject: [New-bugs-announce] [issue43673] Missing stub for logging attribute manager Message-ID: <1617126148.6.0.590806825017.issue43673@roundup.psfhosted.org> Change by William (David) Wilcox : ---------- components: Library (Lib) nosy: wdwilcox priority: normal severity: normal status: open title: Missing stub for logging attribute manager type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 15:50:29 2021 From: report at bugs.python.org (TW) Date: Tue, 30 Mar 2021 19:50:29 +0000 Subject: [New-bugs-announce] [issue43674] strange effect at recursion limit Message-ID: <1617133829.55.0.126108093558.issue43674@roundup.psfhosted.org> New submission from TW :

user at development:~$ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

def recurse(n):
    print(n)
    try:
        recurse(n+1)
    except RecursionError:
        print("recursion error")
    print(n)

Please note that there are two print(n) calls at the same level.

>>> recurse(0)
0
1
2
...
994
995
recursion error
994
993
...
2
1
0

Why is there no second 995 after the recursion error? The same happens for Python 3.8.8 and 3.9.2. Related question: I also tried to set sys.setrecursionlimit(100000) and ran recurse(0), but a bit beyond 21800 it just segfaulted. Is there a way to determine the practically working maximum it can do?
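On the last question: the Python-level ceiling can be probed safely from within the process (a sketch, not an official recipe). The hard crash well below a 100000 limit is different — it comes from exhausting the C stack, which CPython cannot detect portably, so sys.setrecursionlimit() offers no protection there.

```python
import sys

def reachable_depth():
    """Count how many nested Python calls succeed before RecursionError."""
    depth = 0
    def recurse():
        nonlocal depth
        depth += 1
        recurse()
    try:
        recurse()
    except RecursionError:
        pass
    return depth

# The reachable depth is a little below sys.getrecursionlimit(), because
# some frames are already on the stack before recurse() is entered.
print(sys.getrecursionlimit(), reachable_depth())
```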
---------- components: Interpreter Core messages: 389848 nosy: ThomasWaldmann2 priority: normal severity: normal status: open title: strange effect at recursion limit type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 30 17:20:31 2021 From: report at bugs.python.org (Aysal Marandian) Date: Tue, 30 Mar 2021 21:20:31 +0000 Subject: [New-bugs-announce] [issue43675] test Message-ID: <1617139231.76.0.428039980824.issue43675@roundup.psfhosted.org> Change by Aysal Marandian : ---------- nosy: aysal.marandian priority: normal severity: normal status: open title: test _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 01:31:48 2021 From: report at bugs.python.org (Tim Hatch) Date: Wed, 31 Mar 2021 05:31:48 +0000 Subject: [New-bugs-announce] [issue43676] Doctest ELLIPSIS explanation hard to follow when they're missing Message-ID: <1617168708.15.0.0628240300897.issue43676@roundup.psfhosted.org> New submission from Tim Hatch : The doctest docs try to explain directives like ELLIPSIS but those directives are absent from the rendered html. Where? Most of the code blocks in the Directives section, and https://docs.python.org/3/library/doctest.html#directives and the one introduced by "nice approach" subsequently are missing their directive comments. The docs today say they go with Python 3.9.2 generated by Sphinx 2.4.4. I haven't tried generating them manually, but it appears the `pycon` lexer for Pygments handles these fine, so I assume it's a problem in reST. 
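To make the report concrete, here is what one of those examples behaves like when the directive comment survives into the source (an illustration, not copied from the docs build):

```python
import doctest

# With the "# doctest: +ELLIPSIS" comment present, the "..." in the
# expected output is treated as a wildcard and the example passes.
# If the rendered HTML drops the comment, the same example fails.
docstring = '''
>>> print(list(range(20)))  # doctest: +ELLIPSIS
[0, 1, ..., 18, 19]
'''

parser = doctest.DocTestParser()
test = parser.get_doctest(docstring, {}, "ellipsis_demo", None, 0)
result = doctest.DocTestRunner(verbose=False).run(test)
print(result)  # TestResults(failed=0, attempted=1)
```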
---------- assignee: docs at python components: Documentation messages: 389874 nosy: Tim.Hatch, docs at python priority: normal severity: normal status: open title: Doctest ELLIPSIS explanation hard to follow when they're missing type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 04:58:05 2021 From: report at bugs.python.org (Daniel Torres) Date: Wed, 31 Mar 2021 08:58:05 +0000 Subject: [New-bugs-announce] [issue43677] Documentation Message-ID: <1617181085.74.0.392624279534.issue43677@roundup.psfhosted.org> New submission from Daniel Torres : https://github.com/python/cpython/blob/master/Doc/howto/descriptor.rst Section 'Functions and methods': the provided example contains the comment 'Emulate Py_MethodType in Objects/classobject.c', but Py_MethodType is nowhere to be found under 'Objects/classobject.c'. ---------- assignee: docs at python components: Documentation messages: 389878 nosy: danielcft, docs at python priority: normal severity: normal status: open title: Documentation type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 07:05:45 2021 From: report at bugs.python.org (Muzahid Hussain) Date: Wed, 31 Mar 2021 11:05:45 +0000 Subject: [New-bugs-announce] [issue43678] TypeError: get() got an unexpected keyword argument 'vars' Message-ID: <1617188745.77.0.411349429246.issue43678@roundup.psfhosted.org> Change by Muzahid Hussain : ---------- components: 2to3 (2.x to 3.x conversion tool) nosy: cis-muzahid priority: normal severity: normal status: open title: TypeError: get() got an unexpected keyword argument 'vars' versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 08:26:06 2021 From: report at bugs.python.org (MikeS)
Date: Wed, 31 Mar 2021 12:26:06 +0000 Subject: [New-bugs-announce] [issue43679] ttk.Sizegrip disappears under Windows 10 UI Scaling, with dpiAware set true and >1 scaling Message-ID: <1617193566.44.0.678288943132.issue43679@roundup.psfhosted.org> New submission from MikeS : When using tkinter on Windows (10) with a >1 HiDPI screen, the sizegrip disappears when DPI awareness is on. A minimal example is as follows:

import tkinter as tk
import tkinter.ttk as ttk
from ctypes import windll, pointer, wintypes

windll.shcore.SetProcessDpiAwareness(1)

root = tk.Tk()
btn1 = tk.Button(root, text='btn1').pack(side=tk.LEFT)
sg = ttk.Sizegrip(root).pack(side=tk.LEFT)
btn2 = tk.Button(root, text='btn2').pack(side=tk.LEFT, fill=tk.BOTH, expand=1)
root.mainloop()

It works fine with "SetProcessDpiAwareness" commented out, but not when using it. This might be related to the tk issues with HiDPI and small radio/checkboxes: https://bugs.python.org/issue41969

---------- components: Tkinter messages: 389893 nosy: msmith priority: normal severity: normal status: open title: ttk.Sizegrip disappears under Windows 10 UI Scaling, with dpiAware set true and >1 scaling type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 08:59:21 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 31 Mar 2021 12:59:21 +0000 Subject: [New-bugs-announce] [issue43680] Remove undocumented io.OpenWrapper and _pyio.OpenWrapper Message-ID: <1617195561.83.0.0407283597649.issue43680@roundup.psfhosted.org> New submission from STINNER Victor : The OpenWrapper function of io and _pyio is an undocumented hack allowing the builtin open() function to be used as a method:

class MyClass:
    method = open

MyClass.method(...)    # class method
MyClass().method(...)
# instance method

It is only needed by the _pyio module, the pure Python implementation of the io module:

---
class DocDescriptor:
    """Helper for builtins.open.__doc__
    """
    def __get__(self, obj, typ=None):
        return (
            "open(file, mode='r', buffering=-1, encoding=None, "
            "errors=None, newline=None, closefd=True)\n\n" +
            open.__doc__)

class OpenWrapper:
    """Wrapper for builtins.open

    Trick so that open won't become a bound method when stored
    as a class variable (as dbm.dumb does).

    See initstdio() in Python/pylifecycle.c.
    """
    __doc__ = DocDescriptor()

    def __new__(cls, *args, **kwargs):
        return open(*args, **kwargs)
---

The io module simply uses an alias to open:

---
OpenWrapper = _io.open  # for compatibility with _pyio
---

No wrapper is needed since built-in functions can be used directly as methods. Example:

---
class MyClass:
    method = len  # built-in function

print(MyClass.method("abc"))
print(MyClass().method("abc"))
---

This example works as expected: it displays "3" two times. I propose to simply remove io.OpenWrapper and force developers to explicitly use staticmethod:

class MyClass:
    method = staticmethod(open)

io.OpenWrapper is not documented. I don't understand the remark about dbm.dumb: I fail to see where the built-in open() function is used as a method.

---------- components: Library (Lib) messages: 389896 nosy: vstinner priority: normal severity: normal status: open title: Remove undocumented io.OpenWrapper and _pyio.OpenWrapper versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 10:25:05 2021 From: report at bugs.python.org (Ethan Furman) Date: Wed, 31 Mar 2021 14:25:05 +0000 Subject: [New-bugs-announce] [issue43681] doctest forgets previous imports Message-ID: <1617200705.39.0.678751580159.issue43681@roundup.psfhosted.org> New submission from Ethan Furman : In the Python 3.10 Doc/library/enum.rst file was the following:

.. class:: FlagBoundary

   *FlagBoundary* controls how out-of-range values are handled in *Flag* and its subclasses.

   .. attribute:: STRICT

      Out-of-range values cause a :exc:`ValueError` to be raised. This is the default for :class:`Flag`::

         >>> from enum import STRICT
         >>> class StrictFlag(Flag, boundary=STRICT):
         ...     RED = auto()
         ...     GREEN = auto()
         ...     BLUE = auto()
         >>> StrictFlag(2**2 + 2**4)
         Traceback (most recent call last):
         ...
         ValueError: StrictFlag: invalid value: 20
             given 0b0 10100
           allowed 0b0 00111

   .. attribute:: CONFORM

      Out-of-range values have invalid values removed, leaving a valid *Flag* value::

         >>> from enum import CONFORM
         >>> class ConformFlag(Flag, boundary=CONFORM):
         ...     RED = auto()
         ...     GREEN = auto()
         ...     BLUE = auto()
         >>> ConformFlag(2**2 + 2**4)
         ConformFlag.BLUE

   .. attribute:: EJECT

      Out-of-range values lose their *Flag* membership and revert to :class:`int`. This is the default for :class:`IntFlag`::

         >>> from enum import EJECT
         >>> class EjectFlag(Flag, boundary=EJECT):
         ...     RED = auto()
         ...     GREEN = auto()
         ...     BLUE = auto()
         >>> EjectFlag(2**2 + 2**4)
         20

   .. attribute:: KEEP

      Out-of-range values are kept, and the *Flag* membership is kept. This is used for some stdlib flags:

         >>> from enum import KEEP
         >>> class KeepFlag(Flag, boundary=KEEP):
         ...     RED = auto()
         ...     GREEN = auto()
         ...     BLUE = auto()
         >>> KeepFlag(2**2 + 2**4)
         KeepFlag.BLUE|0x10

All four tests rely on a previous `from enum import Flag`, but only the first three tests pass -- the fourth raises:

    Traceback (most recent call last):
      File "/home/runner/work/cpython/cpython/Lib/doctest.py", line 1337, in __run
        exec(compile(example.source, filename, "single",
      File "", line 1, in
        class KeepFlag(Flag, boundary=KEEP):
    NameError: name 'Flag' is not defined

---------- components: Library (Lib) messages: 389903 nosy: ethan.furman priority: normal severity: normal stage: test needed status: open title: doctest forgets previous imports type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 10:31:21 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 31 Mar 2021 14:31:21 +0000 Subject: [New-bugs-announce] [issue43682] Make function wrapped by staticmethod callable Message-ID: <1617201081.91.0.895015962683.issue43682@roundup.psfhosted.org> New submission from STINNER Victor : Currently, static methods created by the @staticmethod decorator are not callable as regular functions. Example:

---
@staticmethod
def func():
    print("my func")

class MyClass:
    method = func

func()               # A: regular function
MyClass.method()     # B: class method
MyClass().method()   # C: instance method
---

The func() call raises a TypeError('staticmethod' object is not callable) exception. I propose to make staticmethod objects callable, to get behavior similar to a built-in function:

---
func = len

class MyClass:
    method = func

func("abc")               # A: regular function
MyClass.method("abc")     # B: class method
MyClass().method("abc")   # C: instance method
---

The 3 variants (A, B, C) to call the built-in len() function work just as expected. If static method objects become callable, the 3 variants (A, B, C) will just work.
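The current behavior is easy to demonstrate (a sketch; this proposal targeted 3.10, where it landed, so on 3.10+ the plain call succeeds while 3.9 and earlier raise TypeError):

```python
@staticmethod
def func():
    return "my func"

class MyClass:
    method = func

# B and C work on every version: attribute access triggers the descriptor
# protocol, which unwraps the staticmethod into the underlying function.
print(MyClass.method())    # my func
print(MyClass().method())  # my func

# A: calling the staticmethod object directly.
try:
    print(func())          # succeeds on 3.10+
except TypeError as exc:
    print("pre-3.10:", exc)
```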
It would avoid the hack like _pyio.OpenWrapper:

---
class DocDescriptor:
    """Helper for builtins.open.__doc__
    """
    def __get__(self, obj, typ=None):
        return (
            "open(file, mode='r', buffering=-1, encoding=None, "
            "errors=None, newline=None, closefd=True)\n\n" +
            open.__doc__)

class OpenWrapper:
    """Wrapper for builtins.open

    Trick so that open won't become a bound method when stored
    as a class variable (as dbm.dumb does).

    See initstdio() in Python/pylifecycle.c.
    """
    __doc__ = DocDescriptor()

    def __new__(cls, *args, **kwargs):
        return open(*args, **kwargs)
---

Currently, it's not possible to use _pyio.open directly as a method:

---
class MyClass:
    method = _pyio.open
---

whereas "method = io.open" just works because io.open() is a built-in function. See also bpo-43680 "Remove undocumented io.OpenWrapper and _pyio.OpenWrapper" and my thread on python-dev: "Weird io.OpenWrapper hack to use a function as method" https://mail.python.org/archives/list/python-dev at python.org/thread/QZ7SFW3IW3S2C5RMRJZOOUFSHHUINNME/

---------- components: Library (Lib) messages: 389905 nosy: vstinner priority: normal severity: normal status: open title: Make function wrapped by staticmethod callable versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 12:45:03 2021 From: report at bugs.python.org (Mark Shannon) Date: Wed, 31 Mar 2021 16:45:03 +0000 Subject: [New-bugs-announce] [issue43683] Handle generator (and coroutine) state in the bytecode. Message-ID: <1617209103.34.0.926095267338.issue43683@roundup.psfhosted.org> New submission from Mark Shannon : Every time we send, or throw, to a generator, the C code in genobject.c needs to check what state the generator is in. This is inefficient and couples the generator code, which should just be a thin wrapper around the interpreter, to the internals of the interpreter. The state of the generator is known to the compiler.
It should emit appropriate bytecodes to handle the different behavior for the different states. While the main reason for this is robustness and maintainability, removing the complex C code between Python caller and Python callee also opens up the possibility of some worthwhile optimizations.

There are three changes I want to make:

1. Add a new bytecode to handle starting a generator. This `GEN_START` bytecode would pop TOS, raising an exception if it is not None. This adds some overhead for the first call to iter()/send() but speeds up all the others.

2. Handle the case of exhausted generators. This is a bit more fiddly, and involves putting an infinite loop at the end of the generator. Something like:

       CLEAR_FRAME
   label:
       GEN_RETURN (Like RETURN_VALUE None, but does not discard the frame)
       JUMP label

   This removes a lot of special case code for corner cases of exhausted generators and coroutines.

3. Handle throw() on `YIELD_FROM`. The problem here is that we need to differentiate between exceptions triggered by throw, which must call throw() on sub-generators, and exceptions propagating out of sub-generators, which should be passed up the stack. By splitting the opcode into two (or more), it is clear which case is being handled in the interpreter without complicated logic in genobject.c.

---------- messages: 389919 nosy: Mark.Shannon priority: normal severity: normal status: open title: Handle generator (and coroutine) state in the bytecode.
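The just-started check that the proposed GEN_START opcode would move into the bytecode is observable from Python today (an illustration of the semantics, not of the opcode itself):

```python
def gen():
    yield 1
    yield 2

g = gen()
try:
    # The first send must be None (or use next()); this is the state check
    # currently performed in C by genobject.c.
    g.send(42)
except TypeError as exc:
    print(exc)  # can't send non-None value to a just-started generator

g = gen()
print(g.send(None))       # priming send starts the generator -> 1
print(g.send("ignored"))  # later sends may carry any value -> 2
```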
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 18:40:32 2021 From: report at bugs.python.org (Guido van Rossum) Date: Wed, 31 Mar 2021 22:40:32 +0000 Subject: [New-bugs-announce] [issue43684] Add combined opcodes Message-ID: <1617230432.54.0.49540913796.issue43684@roundup.psfhosted.org> New submission from Guido van Rossum : I'm lining up some PRs (inspired by some of Mark Shannon's ideas) that add new opcodes which are straightforward combinations of existing opcodes. For example, ADD_INT is equivalent to LOAD_CONST + BINARY_ADD, for certain small (common) integer constants. Each of these adds only a minor speedup, but after a dozen or so of these the speedup is (hopefully) significant enough to warrant the opcode churn. ---------- messages: 389939 nosy: Mark.Shannon, eric.snow, gvanrossum, pablogsal priority: normal severity: normal status: open title: Add combined opcodes type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 22:37:08 2021 From: report at bugs.python.org (Joël Larose) Date: Thu, 01 Apr 2021 02:37:08 +0000 Subject: [New-bugs-announce] [issue43685] __call__ not being called on metaclass Message-ID: <1617244628.3.0.903633823298.issue43685@roundup.psfhosted.org> New submission from Joël Larose : Hi, I'm trying to implement a metaclass for the singleton pattern, with the intent of creating type-appropriate sentinels. After trying several approaches, I've come up with what I thought would be an elegant solution. However, I've run into a bit of a snag. Whenever I "call" the class to get the instance, the machinery behind the scenes always calls __init__. To bypass this, I tried overriding type.__call__ in my metaclass. Contrary to all the documentation I've read, metaclass.__call__ is not being used.
The call sequence goes straight to class.__new__ and class.__init__.

=====================================================

M = TypeVar("M")

class SingletonMeta(type):
    """Metaclass for single value classes."""

    def __call__(cls: Type[M], *args: Any, **kwargs: Any) -> M:
        ### Never see this line of output
        print(f"{cls.__name__}.__call__({args=}, {kwargs=}")
        it: Optional[M] = cast(Optional[M], cls.__dict__.get("__it__"))
        if it is not None:
            return it
        try:
            it = cls.__new__(*args, **kwargs)
            it.__init__(*args, **kwargs)
        except TypeError:
            it = cls.__new__()
            it.__init__()
        # cls.__it__ = it
        return it

    def __new__(mcs, name: str, bases: th.Bases, namespace: th.DictStrAny, **kwargs: Any) -> SingletonMeta:
        print(f"{mcs.__name__}.__new__({name=}, {bases=}, {namespace=}, {kwargs=}")
        new_cls: SingletonMeta = cast(SingletonMeta, type(name, bases, namespace))
        print(f"{new_cls=}")
        print(f"{new_cls.__call__}")
        ### Both of these lines ignore the __call__ defined in this metaclass
        ### They produce TypeError if the class doesn't define __new__ or __init__ accepting arguments
        # new_cls.__it__ = new_cls(new_cls, **kwargs)
        # new_cls.__it__ = new_cls.__call__(new_cls, **kwargs)
        return new_cls

Here's the output I get after defining the metaclass and trying to use it:

>>> class S(metaclass=SingletonMeta):
...     pass
SingletonMeta.__new__(name='S', bases=(), namespace={'__module__': '__main__', '__qualname__': 'S'}, kwargs={}
new_cls=
>>> S()
<__main__.S object at 0x000002C128AE5940>
>>> S()
<__main__.S object at 0x000002C128AE56A0>

If SingletonMeta.__call__ was being used, I would see the output from that call, and consecutive calls to S() would yield the same object (with the same address). As you can see, that is not the case.

Environment: Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] on win32

Is this a bug? Or am I misunderstanding how/when __call__ gets called?
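The behavior above is explainable without an interpreter bug (a reading offered as a sketch): SingletonMeta.__new__ builds the class with plain type(name, bases, namespace), so the class it returns is an ordinary instance of type, not of SingletonMeta — and therefore type.__call__, not SingletonMeta.__call__, runs on S(). Using super().__new__ keeps the metaclass:

```python
class SingletonMeta(type):
    """Metaclass that caches a single instance per class."""

    def __call__(cls, *args, **kwargs):
        # Runs on S(...) because S is an instance of SingletonMeta.
        it = cls.__dict__.get("__it__")
        if it is None:
            it = super().__call__(*args, **kwargs)  # normal __new__/__init__
            cls.__it__ = it
        return it

    def __new__(mcs, name, bases, namespace, **kwargs):
        # Key point: super().__new__(mcs, ...) makes the new class an
        # instance of mcs; plain type(name, bases, namespace) does not.
        return super().__new__(mcs, name, bases, namespace)

class S(metaclass=SingletonMeta):
    pass

print(S() is S())  # True: SingletonMeta.__call__ returned the cached instance
```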
---------- components: Interpreter Core messages: 389947 nosy: joel.larose priority: normal severity: normal status: open title: __call__ not being called on metaclass type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 23:11:36 2021 From: report at bugs.python.org (Alexander Grigoriev) Date: Thu, 01 Apr 2021 03:11:36 +0000 Subject: [New-bugs-announce] [issue43686] re.match appears to hang with certain combinations of pattern and string Message-ID: <1617246696.22.0.947636964181.issue43686@roundup.psfhosted.org> New submission from Alexander Grigoriev : Certain patterns and input strings cause re.match to take exponentially longer time. This can be expected because of the recursive nature of matching some patterns, but it can take surprisingly long with some combinations of a pattern and input data of moderate length. For example:

import re
import time

pattern = rb'((?: |\t)*)((?:.*?(?:\\(?: |\t)+)?)*)(\s*)$'
for name in (b'EVENT_', b'EVENT_D', b'EVENT_DI', b'EVENT_DIS',
             b'EVENT_DISP', b'EVENT_DISPL', b'EVENT_DISPLA', b'EVENT_DISPLAY'):
    data = b'#define ' + name + b' {\t\\\r\n\tif {\t\\\r\n\t\tmnt_disp;\t\\\r\n\t} }\r\n'
    t1 = time.monotonic()
    re.match(pattern, data)
    t2 = time.monotonic()
    print('%s: %s s' % (name, t2 - t1))

Does (each extra character in the macro name roughly doubles the time):

EVENT_         0.2190000000409782 s
EVENT_D        0.4529999999795109 s
EVENT_DI       0.9529999999795109 s
EVENT_DIS      1.797000000020489 s
EVENT_DISP     3.6090000001713634 s
EVENT_DISPL    7.125 s
EVENT_DISPLA   14.890999999828637 s
EVENT_DISPLAY  29.688000000081956 s

Perhaps re.match needs to be optimized and/or rewritten in a low-level language (C).

---------- components: Library (Lib) messages: 389949 nosy: alegrigoriev priority: normal severity: normal status: open title: re.match appears to hang with certain combinations of pattern and string type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 31 23:39:21 2021 From: report at bugs.python.org (junyixie) Date: Thu, 01 Apr 2021 03:39:21 +0000 Subject: [New-bugs-announce] [issue43687] use unicode_state empty string before unicode_init. without define WITH_DOC_STRINGS Message-ID: <1617248361.55.0.452091252277.issue43687@roundup.psfhosted.org> New submission from junyixie : The unicode_state empty string is used before unicode_init when WITH_DOC_STRINGS is not defined. PyType_Ready calls PyUnicode_FromString; if the doc string is stripped, it crashes.
unicode_get_empty() must not be called before _PyUnicode_Init() or after _PyUnicode_Fini().

In PyType_Ready:

```
const char *old_doc = _PyType_DocWithoutSignature(type->tp_name, type->tp_doc);
PyObject *doc = PyUnicode_FromString(old_doc);
```

---------- messages: 389950 nosy: JunyiXie priority: normal severity: normal status: open title: use unicode_state empty string before unicode_init. without define WITH_DOC_STRINGS type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________