From report at bugs.python.org Tue Feb 1 03:48:19 2022 From: report at bugs.python.org (Nathan Shain) Date: Tue, 01 Feb 2022 08:48:19 +0000 Subject: [New-bugs-announce] [issue46596] PyLineTable_InitAddressRange isn't exported - causing C Extensions to fail at import Message-ID: <1643705299.86.0.148763425917.issue46596@roundup.psfhosted.org> New submission from Nathan Shain : I'm trying to develop a C++ extension that needs to access the new line table, so I have a call to PyLineTable_InitAddressRange in my extension. After compiling, the "_PyLineTable_InitAddressRange" symbol is undefined in the .so (which is ok so far). When importing the extension, it fails with this error: ImportError: /usr/foo/foo.cpython-310-x86_64-linux-gnu.so: undefined symbol: _PyLineTable_InitAddressRange Obviously Python isn't exporting this symbol, which makes the dlopen fail. I'd make a PR with a fix, but I'm not sure which approach is appropriate. ---------- components: C API messages: 412237 nosy: nathan3 priority: normal severity: normal status: open title: PyLineTable_InitAddressRange isn't exported - causing C Extensions to fail at import versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 05:44:23 2022 From: report at bugs.python.org (Kumar Aditya) Date: Tue, 01 Feb 2022 10:44:23 +0000 Subject: [New-bugs-announce] [issue46597] Remove Python 3.3 compatibility code from overlapped.c Message-ID: <1643712263.04.0.804521209967.issue46597@roundup.psfhosted.org> New submission from Kumar Aditya : Remove Python 3.3 compatibility code from overlapped.c.
https://github.com/python/cpython/blob/108e66b6d23efd0fc2966163ead9434b328c5f17/Modules/overlapped.c#L27 ---------- components: asyncio messages: 412245 nosy: asvetlov, kumaraditya303, yselivanov priority: normal severity: normal status: open title: Remove Python 3.3 compatibility code from overlapped.c versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 06:00:34 2022 From: report at bugs.python.org (Petr Prikryl) Date: Tue, 01 Feb 2022 11:00:34 +0000 Subject: [New-bugs-announce] [issue46598] ElementTree: wrong XML prolog for the utf-8-sig encoding Message-ID: <1643713234.05.0.62090035158.issue46598@roundup.psfhosted.org> New submission from Petr Prikryl : When an ElementTree object is to be written to a file, and when a BOM is needed, the 'utf-8-sig' encoding can be used for the purpose. However, the XML prolog then looks like `<?xml version='1.0' encoding='utf-8-sig'?>`, and that encoding name in the prolog makes no sense. Therefore, the utf-8-sig should be changed to utf-8 for the prolog. To fix the situation, the following two lines should be added to `cpython/Lib/xml/etree/ElementTree.py`: `elif enc_lower == "utf-8-sig": declared_encoding = "utf-8"` just above the line 741 that says `write("<?xml version='1.0' encoding='%s'?>\n" % (declared_encoding,))`. I have already cloned the main branch, added the lines to `https://github.com/pepr/cpython.git`, and sent a pull request.
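A minimal reproduction of the prolog issue described above (a sketch; the `<root/>` document is made up for illustration, and the encoding name written into the prolog differs before and after the fix, so only the stable parts are checked):

```python
import io
import xml.etree.ElementTree as ET

# Serialize a trivial document with the utf-8-sig codec.
tree = ET.ElementTree(ET.Element('root'))
buf = io.BytesIO()
tree.write(buf, encoding='utf-8-sig', xml_declaration=True)
out = buf.getvalue()

# The utf-8-sig codec emits the BOM first...
assert out[:3] == b'\xef\xbb\xbf'
# ...then the XML prolog, whose encoding pseudo-attribute is the value
# under discussion ('utf-8-sig' before the fix, 'utf-8' after it).
assert b"<?xml version='1.0' encoding=" in out
```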
I have tested the functionality locally with `Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32` ---------- components: Library (Lib) messages: 412247 nosy: prikryl priority: normal pull_requests: 29231 severity: normal status: open title: ElementTree: wrong XML prolog for the utf-8-sig encoding versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 07:38:41 2022 From: report at bugs.python.org (A-Shvedov) Date: Tue, 01 Feb 2022 12:38:41 +0000 Subject: [New-bugs-announce] [issue46599] Objects/object.c:767:24: runtime error: member access within null pointer of type 'PyObject' (aka 'struct _object') Message-ID: <1643719121.07.0.180413749782.issue46599@roundup.psfhosted.org> New submission from A-Shvedov : Hello. I got an error with AFLplusplus, with this crafted sample: https://github.com/a-shvedov/res/blob/master/fuzzing/python/crashes/id:000000%2Csig:11%2Csrc:009074%2Ctime:446401660%2Cexecs:16120011%2Cop:arith8%2Cpos:16%2Cval:-21 Compiled with: clang (version 6.0.0-3) ; Configure params: --enable-optimizations --prefix= . Package version: Python-3.9.9 ; Built binary info: python: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, not stripped ; Stderr when running the crafted sample: Segmentation fault ; AddressSanitizer run: Objects/object.c:767:24: runtime error: member access within null pointer of type 'PyObject' (aka 'struct _object') ; The AddressSanitizer log is attached as a logfile.
---------- components: Interpreter Core files: issue-file_asanlog.log messages: 412251 nosy: a-shvedov priority: normal severity: normal status: open title: Objects/object.c:767:24: runtime error: member access within null pointer of type 'PyObject' (aka 'struct _object') type: crash versions: Python 3.9 Added file: https://bugs.python.org/file50599/issue-file_asanlog.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 08:06:00 2022 From: report at bugs.python.org (STINNER Victor) Date: Tue, 01 Feb 2022 13:06:00 +0000 Subject: [New-bugs-announce] [issue46600] Python built with clang -O0 allocates 10x more stack memory than clang -O3 on a Python function call Message-ID: <1643720760.9.0.302596428689.issue46600@roundup.psfhosted.org> New submission from STINNER Victor : Measure using this script on the main branch (commit 108e66b6d23efd0fc2966163ead9434b328c5f17):
---
import _testcapi
def f():
    yield _testcapi.stack_pointer()
print(_testcapi.stack_pointer() - next(f()))
---
Stack usage depending on the compiler and compiler optimization level:
* clang -O0: 9,104 bytes
* clang -Og: 736 bytes
* gcc -O0: 6,784 bytes
* gcc -Og: 624 bytes
-O0 allocates around 10x more memory. Moreover, "./configure --with-pydebug CC=clang" uses -O0 in CFLAGS, because "clang --help" output doesn't contain "-Og". I'm working on a configure change to use -Og with clang, which supports it.
---------- components: Build messages: 412252 nosy: vstinner priority: normal severity: normal status: open title: Python built with clang -O0 allocates 10x more stack memory than clang -O3 on a Python function call type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 08:28:01 2022 From: report at bugs.python.org (Chris Drake) Date: Tue, 01 Feb 2022 13:28:01 +0000 Subject: [New-bugs-announce] [issue46601] Instructions do not work Message-ID: <1643722081.07.0.215509583519.issue46601@roundup.psfhosted.org> New submission from Chris Drake : See https://github.com/python/pythondotorg/issues/1774#issuecomment-1025250329 ---------- components: macOS messages: 412257 nosy: cryptophoto, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Instructions do not work versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 13:48:33 2022 From: report at bugs.python.org (Nathan Howard) Date: Tue, 01 Feb 2022 18:48:33 +0000 Subject: [New-bugs-announce] [issue46602] Subtle trouble with heredoc append in configure. Message-ID: <1643741313.11.0.141658321036.issue46602@roundup.psfhosted.org> New submission from Nathan Howard : TODO: (see PR) ---------- components: Installation messages: 412298 nosy: adanhawth priority: normal severity: normal status: open title: Subtle trouble with heredoc append in configure. 
type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 16:17:41 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Tue, 01 Feb 2022 21:17:41 +0000 Subject: [New-bugs-announce] [issue46603] `typing._strip_annotations` is not fully covered Message-ID: <1643750261.29.0.798402203083.issue46603@roundup.psfhosted.org> New submission from Nikita Sobolev : Right now `coverage` says that this line is not covered at all: https://github.com/python/cpython/blob/bebaa95fd0f44babf8b6bcffd8f2908c73ca259e/Lib/typing.py#L1882 Considering how hard all this `types.UnionType` / `typing.Union` stuff is, and that the logic with `reduce` and `operator.or_` is also quite complex, I think it is important to cover it. It actually took me some time to reach this line, but here's the test I came up with:
```
def test_get_type_hints_annotated_in_union(self):
    def with_union(x: int | list[Annotated[str, 'meta']]): ...

    self.assertEqual(get_type_hints(with_union), {'x': int | list[str]})
    self.assertEqual(
        get_type_hints(with_union, include_extras=True),
        {'x': int | list[Annotated[str, 'meta']]},
    )
```
Note that direct `|` with `Annotated` does not work, because it triggers `_AnnotatedType.__or__`, which returns `typing.Union` and not `types.UnionType`. I will send a PR with it in a minute :) Any feedback is welcome!
---------- components: Library (Lib) messages: 412308 nosy: AlexWaygood, gvanrossum, kj, sobolevn priority: normal severity: normal status: open title: `typing._strip_annotations` is not fully covered type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 16:32:18 2022 From: report at bugs.python.org (Kossi GLOKPOR) Date: Tue, 01 Feb 2022 21:32:18 +0000 Subject: [New-bugs-announce] [issue46604] Documentation fix in ssl module Message-ID: <1643751138.89.0.0198780201325.issue46604@roundup.psfhosted.org> Change by Kossi GLOKPOR : ---------- assignee: docs at python components: Documentation nosy: docs at python, glk0 priority: normal severity: normal status: open title: Documentation fix in ssl module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 16:49:08 2022 From: report at bugs.python.org (ov2k) Date: Tue, 01 Feb 2022 21:49:08 +0000 Subject: [New-bugs-announce] [issue46605] Py_XDECREF() module on fail in Py_mod_exec Message-ID: <1643752148.46.0.568852350737.issue46605@roundup.psfhosted.org> New submission from ov2k : In some of the xx modules, a Py_mod_exec function steals a reference to the module argument when an error occurs (Py_XDECREF(m) after goto fail). It's a bit pernicious given the modules' stated intent to be used as a template, although I'm not sure how often this has actually happened. At the very least, I haven't noticed this outside the xx modules. For Python <= 3.9, this affects xx_exec() in xxmodule.c and xx_modexec() in xxlimited.c. For Python >= 3.10, this affects xx_exec() in xxmodule.c and xx_modexec() in xxlimited_35.c. 
---------- components: Extension Modules messages: 412315 nosy: ov2k priority: normal severity: normal status: open title: Py_XDECREF() module on fail in Py_mod_exec type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 1 22:06:44 2022 From: report at bugs.python.org (Inada Naoki) Date: Wed, 02 Feb 2022 03:06:44 +0000 Subject: [New-bugs-announce] [issue46606] Large C stack usage of os.getgroups() and os.setgroups() Message-ID: <1643771204.31.0.0513908090604.issue46606@roundup.psfhosted.org> New submission from Inada Naoki : I checked stack usage for bpo-46600 and found these two functions use a lot of stack. os_setgroups: 262200 bytes os_getgroups_impl: 262184 bytes Both functions have a local variable like this: gid_t grouplist[MAX_GROUPS]; MAX_GROUPS is defined as:
```
#ifdef NGROUPS_MAX
#define MAX_GROUPS NGROUPS_MAX
#else
    /* defined to be 16 on Solaris7, so this should be a small number */
#define MAX_GROUPS 64
#endif
```
NGROUPS_MAX is 65536 and sizeof(gid_t) is 4 on Ubuntu 20.04, so grouplist is 262144 bytes. It seems this grouplist is just to avoid an allocation:
```
    } else if (n <= MAX_GROUPS) {
        /* groups will fit in existing array */
        alt_grouplist = grouplist;
    } else {
        alt_grouplist = PyMem_New(gid_t, n);
        if (alt_grouplist == NULL) {
            return PyErr_NoMemory();
        }
```
How about just using `#define MAX_GROUPS 64`? Or should we remove this grouplist because os.grouplist() is not called so frequently?
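From the Python side, the call whose C buffer is being discussed looks like this (a POSIX-only sketch; the typical result is a short list, far below the NGROUPS_MAX=65536 worst case the stack buffer is sized for):

```python
import os

# Supplementary group ids of the current process.
groups = os.getgroups()
assert isinstance(groups, list)
assert all(isinstance(g, int) for g in groups)
```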
---------- components: Library (Lib) messages: 412335 nosy: methane priority: normal severity: normal status: open title: Large C stack usage of os.getgroups() and os.setgroups() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 02:01:09 2022 From: report at bugs.python.org (Hugo van Kemenade) Date: Wed, 02 Feb 2022 07:01:09 +0000 Subject: [New-bugs-announce] [issue46607] Add DeprecationWarning to configparser's LegacyInterpolation Message-ID: <1643785269.24.0.226402381238.issue46607@roundup.psfhosted.org> New submission from Hugo van Kemenade : The LegacyInterpolation class of configparser has been deprecated in docs since 3.2, but without raising a DeprecationWarning. The 3.2 HISTORY file says: > - configparser: the SafeConfigParser class has been renamed to ConfigParser. > The legacy ConfigParser class has been removed but its interpolation mechanism is still available as LegacyInterpolation. Searching the top 5,000 PyPI sdists, there's very little (if any "real") use of LegacyInterpolation. Details: https://bugs.python.org/issue45173#msg409685 Other configparser deprecations were added in 3.2, but with DeprecationWarnings. Let's add a DeprecationWarning for a couple of releases before removal. 
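For context, the non-legacy interpolation that replaced the old ConfigParser behavior works like this (a sketch; the `[paths]` section and values are made up for illustration):

```python
import configparser

# BasicInterpolation is the modern default; %(name)s references expand
# from values in the same section (and the DEFAULT section).
cp = configparser.ConfigParser(interpolation=configparser.BasicInterpolation())
cp.read_string("[paths]\nhome = /home/user\nlogs = %(home)s/logs\n")
assert cp['paths']['logs'] == '/home/user/logs'
```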
---------- components: Library (Lib) messages: 412339 nosy: hugovk priority: normal severity: normal status: open title: Add DeprecationWarning to configparser's LegacyInterpolation versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 04:08:27 2022 From: report at bugs.python.org (Kumar Aditya) Date: Wed, 02 Feb 2022 09:08:27 +0000 Subject: [New-bugs-announce] [issue46608] Exclude marshalled-frozen data if deep-freezing to save 300 KB space Message-ID: <1643792907.85.0.286906427988.issue46608@roundup.psfhosted.org> New submission from Kumar Aditya : This reduces the size of the executable's data segment by 300 KB, because if the modules are deep-frozen then the marshalled frozen data just wastes space. This was inspired by a comment by @gvanrossum in #29118 (comment). Note: There is a new option `--deepfreeze-only` in freeze_modules.py to change this behavior; it is on by default to save disk space.
# du -s ./python before 27892 ./python # du -s ./python after 27524 ./python ---------- components: Build messages: 412346 nosy: gvanrossum, kumaraditya303 priority: normal severity: normal status: open title: Exclude marshalled-frozen data if deep-freezing to save 300 KB space versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 07:27:05 2022 From: report at bugs.python.org (Sebastian Rittau) Date: Wed, 02 Feb 2022 12:27:05 +0000 Subject: [New-bugs-announce] [issue46609] Generator-based coroutines in Python 3.10 docs Message-ID: <1643804825.5.0.310548487717.issue46609@roundup.psfhosted.org> New submission from Sebastian Rittau : Currently, the Python 3.10.2 documentation at https://docs.python.org/3/library/asyncio-task.html?highlight=coroutine#asyncio.coroutine says: "Note: Support for generator-based coroutines is deprecated and is scheduled for removal in Python 3.10." Python 3.10 still has support for those (although it emits a warning), so the note should be updated. 
---------- assignee: docs at python components: Documentation messages: 412352 nosy: docs at python, srittau priority: normal severity: normal status: open title: Generator-based coroutines in Python 3.10 docs versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 08:19:38 2022 From: report at bugs.python.org (Can) Date: Wed, 02 Feb 2022 13:19:38 +0000 Subject: [New-bugs-announce] [issue46610] assertCountEqual doesn't work as expected for dictionary elements Message-ID: <1643807978.22.0.824288315103.issue46610@roundup.psfhosted.org> Change by Can : ---------- nosy: cansarigol priority: normal severity: normal status: open title: assertCountEqual doesn't work as expected for dictionary elements type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 09:00:52 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Wed, 02 Feb 2022 14:00:52 +0000 Subject: [New-bugs-announce] [issue46611] Improve coverage of `__instancecheck__` and `__subclasscheck__` methods in `typing.py` Message-ID: <1643810452.7.0.438267754834.issue46611@roundup.psfhosted.org> New submission from Nikita Sobolev : There are several problems reported by coverage: 1. This line is never reached in `_SpecialGenericAlias.__subclasscheck__`: https://github.com/python/cpython/blob/08f8301b21648d58d053e1a513db8ed32fbf37dd/Lib/typing.py#L1140 2. `__instancecheck__` and `__subclasscheck__` for `_UnionGenericAlias` are not covered at all: https://github.com/python/cpython/blob/08f8301b21648d58d053e1a513db8ed32fbf37dd/Lib/typing.py#L1243-L1249 I suspect this happened because of the `types.UnionType` / `typing.Union` duality. I am going to add these today! By the way, this is the last coverage issue in typing!
---------- components: Library (Lib) messages: 412361 nosy: sobolevn priority: normal severity: normal status: open title: Improve coverage of `__instancecheck__` and `__subclasscheck__` methods in `typing.py` type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 10:06:11 2022 From: report at bugs.python.org (Marek Scholle) Date: Wed, 02 Feb 2022 15:06:11 +0000 Subject: [New-bugs-announce] [issue46612] Unclear behavior of += operator Message-ID: <1643814371.03.0.343657653727.issue46612@roundup.psfhosted.org> New submission from Marek Scholle : Hi, I ran into a discussion about scoping in Python (visibility of outer variables in nested functions, global, nonlocal) which led me to create some showcases for others. I realized there is a space for ambiguity, which I extracted to this REPL session:
----
>>> x = []
>>> def f(): x += [1]
...
>>> f()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in f
UnboundLocalError: local variable 'x' referenced before assignment
>>> x = []
>>> def f(): x.append(1)
...
>>> f()
>>> x
[1]
----
The documentation says that `x += [1]` is "translated" to `x.__iadd__([1])`. It would be interesting to know if Python actually documents that `x += [1]` will err with `UnboundLocalError`. I think there is a natural argument that `x += <rhs>` should behave as an in-place version of `x = x + <rhs>` (where `UnboundLocalError` makes perfect sense), but diving into the documentation it seems that `x += <rhs>` should be syntax sugar for `x.__iadd__(rhs)`, in which case `UnboundLocalError` should not happen and looks like a parser artifact.
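The two cases contrasted above can be captured outside the REPL as well; a sketch (the function names are made up for illustration):

```python
x = []

def f_augmented():
    try:
        x += [1]  # augmented assignment compiles 'x' as a local name
    except UnboundLocalError:
        return 'UnboundLocalError'

def f_method():
    x.append(1)  # a plain name load finds the global 'x'; no rebinding
    return x

aug_result = f_augmented()
meth_result = f_method()
assert aug_result == 'UnboundLocalError'
assert meth_result == [1]
```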
---------- components: Interpreter Core messages: 412365 nosy: mscholle priority: normal severity: normal status: open title: Unclear behavior of += operator type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 11:55:22 2022 From: report at bugs.python.org (Petr Viktorin) Date: Wed, 02 Feb 2022 16:55:22 +0000 Subject: [New-bugs-announce] [issue46613] Add PyType_GetModuleByDef to the public & limited API Message-ID: <1643820922.83.0.000906972876287.issue46613@roundup.psfhosted.org> New submission from Petr Viktorin : _PyType_GetModuleByDef (added in bpo-42100) allows module state access from slot methods (like tp_init or nb_add), the main thing missing from PEP 573 (Module State Access from C Extension Methods). It's time to make it public. The function itself can be implemented using only the limited API, though it's a bit tricky to do so correctly (and our implementation uses private speedups), so it's better if extension authors can use it as a pre-made building block.
Discussed in: https://mail.python.org/archives/list/capi-sig at python.org/thread/WMSDNMQ7A6LE6X4MQW4QAQUKDDL7MJ72/ Note that a bug was found in the CPython optimization recently: bpo-46433 ---------- components: C API messages: 412378 nosy: petr.viktorin priority: normal severity: normal status: open title: Add PyType_GetModuleByDef to the public & limited API versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 12:31:46 2022 From: report at bugs.python.org (Paul Ganssle) Date: Wed, 02 Feb 2022 17:31:46 +0000 Subject: [New-bugs-announce] [issue46614] Add option to output UTC datetimes as "Z" in `.isoformat()` Message-ID: <1643823106.57.0.535855742317.issue46614@roundup.psfhosted.org> New submission from Paul Ganssle : As part of bpo-35829, it was suggested that we add the ability to output the "Z" suffix in `isoformat()`, so that `fromisoformat()` can both be the exact functional inverse of `isoformat()` and parse datetimes with "Z" outputs. I think that that's not a particularly compelling motivation for this, but I also see plenty of examples of `datetime.utcnow().isoformat() + "Z"` out there, so it seems like this is a feature that we would want to have *anyway*, particularly if we want to deprecate and remove `utcnow`. I've spun this off into its own issue so that we can discuss how to implement the feature. The two obvious questions I see are: 1. What do we call the option? `use_utc_designator`, `allow_Z`, `utc_as_Z`? 2. What do we consider as "UTC"? Is it anything with +00:00? Just `timezone.utc`? Anything that seems like a fixed-offset zone with 0 offset? For example, do we want this? 
>>> LON = zoneinfo.ZoneInfo("Europe/London")
>>> datetime(2022, 3, 1, tzinfo=LON).isoformat(utc_as_z=True)
2022-03-01T00:00:00Z
>>> datetime(2022, 6, 1, tzinfo=LON).isoformat(utc_as_z=True)
2022-06-01T00:00:00+01:00
Another possible definition might be if the `tzinfo` is a fixed-offset zone with offset 0:
>>> datetime.timezone.utc.utcoffset(None)
timedelta(0)
>>> zoneinfo.ZoneInfo("UTC").utcoffset(None)
timedelta(0)
>>> dateutil.tz.UTC.utcoffset(None)
timedelta(0)
>>> pytz.UTC.utcoffset(None)
timedelta(0)
The only "odd man out" is `dateutil.tz.tzfile` objects representing fixed offsets, since all `dateutil.tz.tzfile` objects return `None` when `utcoffset` or `dst` are passed `None`. This can and will be changed in future versions. I feel like "If the offset is 00:00, use Z" is the wrong rule to use conceptually, but considering that people will be opting into this behavior, it is more likely that they will be surprised by `datetime(2022, 3, 1, tzinfo=ZoneInfo("Europe/London")).isoformat(utc_as_z=True)` returning `2022-03-01T00:00:00+00:00` than by alternation between `Z` and `+00:00`. Yet another option might be to add a completely separate function, `utc_isoformat(*args, **kwargs)`, which is equivalent to (in the parlance of the other proposal) `dt.astimezone(timezone.utc).isoformat(*args, **kwargs, utc_as_z=True)`. Basically, convert any datetime to UTC and append a Z to it. The biggest footgun there would be people using it on naïve datetimes and not realizing that it would interpret them as system local times.
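Today's behavior, plus a hand-rolled sketch of the proposed output (`utc_as_z` is a proposed parameter, not an existing one, so the sketch does the conversion and suffix replacement by hand):

```python
from datetime import datetime, timezone

# An aware UTC datetime currently renders with '+00:00', never 'Z':
dt = datetime(2022, 3, 1, tzinfo=timezone.utc)
assert dt.isoformat() == '2022-03-01T00:00:00+00:00'

# Hand-rolled equivalent of the proposed behavior: convert to UTC,
# then swap the zero offset for the 'Z' designator.
z = dt.astimezone(timezone.utc).isoformat().replace('+00:00', 'Z')
assert z == '2022-03-01T00:00:00Z'
```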
---------- assignee: p-ganssle components: Library (Lib) messages: 412384 nosy: belopolsky, brett.cannon, p-ganssle priority: normal severity: normal stage: needs patch status: open title: Add option to output UTC datetimes as "Z" in `.isoformat()` type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 13:01:22 2022 From: report at bugs.python.org (Dennis Sweeney) Date: Wed, 02 Feb 2022 18:01:22 +0000 Subject: [New-bugs-announce] [issue46615] Segfault in set intersection (&) and difference (-) Message-ID: <1643824882.75.0.278611227582.issue46615@roundup.psfhosted.org> New submission from Dennis Sweeney : Maybe related to https://bugs.python.org/issue8420 Somewhat obscure, but using only standard Python, and no frame- or gc-hacks, it looks like we can get a use-after-free:
from random import random

BADNESS = 0.0

class Bad:
    def __eq__(self, other):
        if random() < BADNESS:
            set1.clear()
        if random() < BADNESS:
            set2.clear()
        return True
    def __hash__(self):
        return 42

SIZE = 100
TRIALS = 10_000

ops = [
    "|", "|=",
    "==", "!=",
    "<", "<=",
    ">", ">=",
    # "&",   # crash!
    # "&=",  # crash!
    "^",
    # "^=",  # crash
    # "-",   # crash
    "-=",
]

for op in ops:
    stmt = f"set1 {op} set2"
    print(stmt, "...")
    for _ in range(TRIALS):
        BADNESS = 0.00
        set1 = {Bad() for _ in range(SIZE)}
        set2 = {Bad() for _ in range(SIZE)}
        BADNESS = 0.02
        exec(stmt)
    print("ok.")
---------- components: Interpreter Core messages: 412386 nosy: Dennis Sweeney, rhettinger priority: normal severity: normal status: open title: Segfault in set intersection (&) and difference (-) type: crash versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 13:14:06 2022 From: report at bugs.python.org (Steve Dower) Date: Wed, 02 Feb 2022 18:14:06 +0000 Subject: [New-bugs-announce] [issue46616] test_importlib leaves stray registry entries on Windows Message-ID: <1643825646.98.0.404653154913.issue46616@roundup.psfhosted.org> New submission from Steve Dower : When running test_importlib.test_windows, it may create registry keys that previously didn't exist. These keys are not fully cleaned up. Detect if the full key is created and then delete it after the test. If it existed, only delete the specific test key.
---------- assignee: steve.dower components: Windows messages: 412388 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: test_importlib leaves stray registry entries on Windows type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 16:34:08 2022 From: report at bugs.python.org (Matthew Stidham) Date: Wed, 02 Feb 2022 21:34:08 +0000 Subject: [New-bugs-announce] [issue46617] CSV Creation occasional off by one error Message-ID: <1643837648.9.0.298281382063.issue46617@roundup.psfhosted.org> New submission from Matthew Stidham : The file in which I found the error is in https://github.com/greearb/lanforge-scripts ---------- components: C API files: debug from pandas failure.txt messages: 412400 nosy: matthewstidham priority: normal severity: normal status: open title: CSV Creation occasional off by one error type: compile error versions: Python 3.10, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50601/debug from pandas failure.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 17:33:12 2022 From: report at bugs.python.org (koala-lava) Date: Wed, 02 Feb 2022 22:33:12 +0000 Subject: [New-bugs-announce] [issue46618] Exponent operator(**) interpreter issue Message-ID: <1643841192.81.0.245453697907.issue46618@roundup.psfhosted.org> New submission from koala-lava : If I put -2 ** 2 in the interpreter it outputs -4. I expected 4. If I create a variable, initialize it with -2 and then try the same, it's correct.
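The behavior reported above follows from documented operator precedence: `**` binds more tightly than unary minus, so `-2 ** 2` parses as `-(2 ** 2)`. A quick check:

```python
# ** binds more tightly than unary minus:
assert -2 ** 2 == -4
assert (-2) ** 2 == 4

# Once the value is bound to a name, no unary minus is parsed in x ** 2:
x = -2
assert x ** 2 == 4
```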
---------- components: Interpreter Core messages: 412402 nosy: koala-lava priority: normal severity: normal status: open title: Exponent operator(**) interpreter issue type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 20:08:28 2022 From: report at bugs.python.org (Jason R. Coombs) Date: Thu, 03 Feb 2022 01:08:28 +0000 Subject: [New-bugs-announce] [issue46619] lazy module property not recognized by doctests Message-ID: <1643850508.65.0.917170259408.issue46619@roundup.psfhosted.org> New submission from Jason R. Coombs : Attempting to define a lazy-loaded property for a module, I found [this guidance](https://stackoverflow.com/a/52018676/70170) referencing [module attribute access](https://docs.python.org/3/reference/datamodel.html#customizing-module-attribute-access) in the Python docs as a means of customizing attribute access. I followed that guidance, but found that doctests don't have access to those attributes in its execution. 
Consider this reproducer:
```
"""
>>> print(static_property)
static value
>>> print(lazy_property)
lazy value
"""
# text.py
import types
import sys

static_property = 'static value'

class _Properties(types.ModuleType):
    @property
    def lazy_property(self):
        return 'lazy value'

sys.modules[__name__].__class__ = _Properties
```
Run that with `python -m doctest text.py` and it fails thus:
```
**********************************************************************
File "/Users/jaraco/draft/text.py", line 4, in text
Failed example:
    print(lazy_property)
Exception raised:
    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1346, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest text[1]>", line 1, in <module>
        print(lazy_property)
    NameError: name 'lazy_property' is not defined
**********************************************************************
1 items had failures:
   1 of   2 in text
***Test Failed*** 1 failures.
```
Same error using the `__getattr__` technique:
```
"""
>>> print(static_property)
static value
>>> print(lazy_property)
lazy value
"""
static_property = 'static value'

def __getattr__(name):
    if name != 'lazy_property':
        raise AttributeError(name)
    return 'lazy value'
```
I suspect the issue is that doctest runs with locals from the module's globals(), which won't include these lazy properties. It would be nice if doctests could honor locals that would represent the properties available on the module.
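The underlying behavior can be seen without doctest at all; a sketch using a hypothetical in-memory module named `lazy_demo` (the name and helper are made up for illustration):

```python
import sys
import types

# Build a module and give it a PEP 562 module-level __getattr__.
mod = types.ModuleType('lazy_demo')
mod.static_property = 'static value'

def _module_getattr(name):
    if name != 'lazy_property':
        raise AttributeError(name)
    return 'lazy value'

mod.__getattr__ = _module_getattr
sys.modules['lazy_demo'] = mod

import lazy_demo
assert lazy_demo.static_property == 'static value'
assert lazy_demo.lazy_property == 'lazy value'  # resolved via __getattr__
# The lazy name never lands in the module __dict__, which is the
# namespace doctest hands to exec() -- hence the NameError above.
assert 'lazy_property' not in vars(lazy_demo)
```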
---------- components: Library (Lib) messages: 412409 nosy: jaraco priority: normal severity: normal status: open title: lazy module property not recognized by doctests versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 21:46:49 2022 From: report at bugs.python.org (Kenta Tsuna) Date: Thu, 03 Feb 2022 02:46:49 +0000 Subject: [New-bugs-announce] [issue46620] Documentation of ipaddress behavior for prefix length with leading zeros. Message-ID: <1643856409.43.0.923620976074.issue46620@roundup.psfhosted.org> New submission from Kenta Tsuna : The ipaddress library tolerates prefix lengths with leading zeros.
$ ./python.exe
Python 3.11.0a4+ (heads/main:8fb3649450, Jan 31 2022, 16:39:46) [Clang 13.0.0 (clang-1300.0.29.3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ipaddress
>>> ipaddress.ip_interface('192.0.2.0/0000000024')
IPv4Interface('192.0.2.0/24')
>>> ipaddress.ip_interface('2001:db8::/0000000032')
IPv6Interface('2001:db8::/32')
The explanation of this behavior exists in the following tests. https://github.com/python/cpython/blob/51a95be1d035a717ab29e98056b8831a98e61125/Lib/test/test_ipaddress.py#L747-L748 https://github.com/python/cpython/blob/51a95be1d035a717ab29e98056b8831a98e61125/Lib/test/test_ipaddress.py#L755-L756 https://github.com/python/cpython/blob/51a95be1d035a717ab29e98056b8831a98e61125/Lib/test/test_ipaddress.py#L592-L593 But it seems that the explanation does not exist in the documentation.
https://docs.python.org/3.11/library/ipaddress.html
https://docs.python.org/3.10/library/ipaddress.html
https://docs.python.org/3.9/library/ipaddress.html
https://docs.python.org/3.8/library/ipaddress.html
https://docs.python.org/3.7/library/ipaddress.html

---------- assignee: docs at python components: Documentation messages: 412412 nosy: docs at python, lay20114 priority: normal severity: normal status: open title: Documentation of ipaddress behavior for prefix length with leading zeros. type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 2 23:46:26 2022 From: report at bugs.python.org (Peiran Yao) Date: Thu, 03 Feb 2022 04:46:26 +0000 Subject: [New-bugs-announce] [issue46621] Should map(function, iterable, ...) replace StopIteration with RuntimeError? Message-ID: <1643863586.75.0.713225774424.issue46621@roundup.psfhosted.org> New submission from Peiran Yao : Currently, StopIteration raised accidentally inside the `function` being applied is not caught by map(). This will cause the iteration of the map object to terminate silently. (Whereas, when some other exception is raised, a traceback is printed pinpointing the cause of the problem.) Here's a minimal working example:

```
from typing import Iterable

def take_first(it: Iterable):
    # if `it` is empty, StopIteration will be raised accidentally
    return next(it)

iterables = [iter([1]), iter([]), iter([2, 3])]  # the second one is empty

for i in map(take_first, iterables):
    print(i)
```

The `take_first` function didn't consider the case where `it` is empty. The programmer would expect an uncaught StopIteration, instead of the loop terminating silently after only one iteration.
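Until map() itself guards against this, a wrapper in user code can surface the stray StopIteration as a RuntimeError. A sketch (the wrapper name is an assumption, not a stdlib API):

```python
import functools

def guard_stopiteration(func):
    """Wrap func so a stray StopIteration surfaces as RuntimeError
    instead of silently ending an enclosing map()/for loop."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except StopIteration as exc:
            raise RuntimeError("StopIteration raised inside mapped function") from exc
    return wrapper

def take_first(it):
    return next(it)

iterables = [iter([1]), iter([]), iter([2, 3])]
try:
    for i in map(guard_stopiteration(take_first), iterables):
        print(i)
except RuntimeError as exc:
    # The empty iterable now produces a loud error instead of a
    # silently truncated loop.
    print("caught:", exc)
```

This mirrors what PEP 479 did for generators: the accidental StopIteration becomes a visible RuntimeError at the point of the bug.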
Similar to the case of generators (described in PEP 479), this behaviour can conceal obscure bugs, and a solution could be catching StopIteration when applying the function, and replacing it with a RuntimeError. Besides the built-in map(), imap() and imap_unordered() in the concurrent and multiprocessing modules also have similar behaviour.

PEP 479 -- Change StopIteration handling inside generators https://www.python.org/dev/peps/pep-0479/ ---------- messages: 412419 nosy: xavieryao priority: normal severity: normal status: open title: Should map(function, iterable, ...) replace StopIteration with RuntimeError? type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 02:28:28 2022 From: report at bugs.python.org (Tzu-ping Chung) Date: Thu, 03 Feb 2022 07:28:28 +0000 Subject: [New-bugs-announce] [issue46622] Support decorating a coroutine with functools.cached_property Message-ID: <1643873308.65.0.92890586202.issue46622@roundup.psfhosted.org> New submission from Tzu-ping Chung : Currently, decorating a coroutine with cached_property would cache the coroutine itself. But this is not useful in any way since a coroutine cannot be awaited multiple times. Running this code:

    import asyncio
    import functools

    class A:
        @functools.cached_property
        async def hello(self):
            return 'yo'

    async def main():
        a = A()
        print(await a.hello)
        print(await a.hello)

    asyncio.run(main())

produces:

    yo
    Traceback (most recent call last):
      File "t.py", line 15, in <module>
        asyncio.run(main())
      File "/lib/python3.10/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete
        return future.result()
      File "t.py", line 12, in main
        print(await a.hello)
    RuntimeError: cannot reuse already awaited coroutine

The third-party cached_property package, on the other hand, detects a coroutine and caches its result instead.
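That result-caching behaviour can be approximated today with a small property that caches the awaited value rather than the coroutine object. A hypothetical sketch (`async_cached_property` is not a stdlib name, and this ignores concurrency between simultaneous awaits):

```python
import asyncio
import functools

def async_cached_property(coro_method):
    """Property that awaits coro_method once and caches its *result*."""
    attr_name = f"_cached_{coro_method.__name__}"

    @property
    @functools.wraps(coro_method)
    def wrapper(self):
        # Return a fresh awaitable on every access; only the first
        # one actually runs the coroutine method.
        async def get():
            if not hasattr(self, attr_name):
                setattr(self, attr_name, await coro_method(self))
            return getattr(self, attr_name)
        return get()
    return wrapper

class A:
    @async_cached_property
    async def hello(self):
        return 'yo'

async def main():
    a = A()
    print(await a.hello)
    print(await a.hello)  # second await succeeds: the result was cached

asyncio.run(main())  # prints 'yo' twice
```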
I feel this is a more useful behaviour. https://github.com/pydanny/cached-property/issues/85 ---------- components: Library (Lib) messages: 412422 nosy: uranusjr priority: normal severity: normal status: open title: Support decorating a coroutine with functools.cached_property type: enhancement versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 03:34:40 2022 From: report at bugs.python.org (STINNER Victor) Date: Thu, 03 Feb 2022 08:34:40 +0000 Subject: [New-bugs-announce] [issue46623] test_zlib: test_pair() and test_speech128() fail with s390x hardware accelerator Message-ID: <1643877280.17.0.469977325894.issue46623@roundup.psfhosted.org> New submission from STINNER Victor : test_pair() and test_speech128() tests of test_zlib fail on the s390x architecture if zlib uses the s390x hardware accelerator. RHEL8 downstream issues (with most details): https://bugzilla.redhat.com/show_bug.cgi?id=1974658 Fedora downstream issues: https://bugzilla.redhat.com/show_bug.cgi?id=2038848 The s390x has a hardware accelerator for zlib. Depending on whether the hardware accelerator is used, the output (compressed data) is different. Also, test_zlib compresses data in two different ways and then expects the same output. test_zlib passes with the software implementation, which creates a single (final) compressed block. test_zlib fails with the hardware implementation, which creates multiple compressed blocks (the last one is a final block). Another reason the output differs is the FHT/DHT heuristic. The zlib deflate algorithm can analyze the data distribution and decide whether it wants to use a fixed-Huffman table (FHT) or a dynamic-Huffman table (DHT) for the next block, but the accelerator can't. Furthermore, looking at data in software would kill the accelerator performance.
Therefore the following heuristic is used on s390x: the first 4k are compressed with FHT and the rest of the data with DHT. So, compress() creates a single FHT block. compressobj() creates a FHT block, a DHT block and a trailing block. It is *not a bug* in zlib: the decompression gives back the original content as expected in all cases. The issue is that Python's test_zlib makes too many assumptions about how "streamed" data should be compressed. The test expects that different ways of calling zlib return the exact same compressed data. If an accelerator is used, that is not always the case. ---------- components: Tests messages: 412426 nosy: vstinner priority: normal severity: normal status: open title: test_zlib: test_pair() and test_speech128() fail with s390x hardware accelerator versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 05:52:01 2022 From: report at bugs.python.org (Miro Hrončok) Date: Thu, 03 Feb 2022 10:52:01 +0000 Subject: [New-bugs-announce] [issue46624] random.randrange removed support for non-integer types after just one release of deprecation Message-ID: <1643885521.08.0.473990761772.issue46624@roundup.psfhosted.org> New submission from Miro Hrončok : In https://github.com/python/cpython/commit/5afa0a411243210a30526c7459a0ccff5cb88494 the support for non-integer types was removed from random.randrange(). This change is not backward-compatible and it breaks 3rd party code, for example: simplewrap: https://bugzilla.redhat.com/show_bug.cgi?id=2050093 numpy-stl: https://bugzilla.redhat.com/show_bug.cgi?id=2050092 == https://github.com/WoLpH/numpy-stl/issues/188 That support was only deprecated in Python 3.10 and it needs to remain deprecated for at least two Python releases. Please revert this change from Python 3.11 and wait for at least Python 3.12.
See https://www.python.org/dev/peps/pep-0387/#making-incompatible-changes When you do remove this from Python 3.12, please make sure to document it in the What's New document. Thank you. ---------- components: Library (Lib) messages: 412436 nosy: hroncok, pablogsal, rhettinger priority: normal severity: normal status: open title: random.randrange removed support for non-integer types after just one release of deprecation versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 09:18:19 2022 From: report at bugs.python.org (Nicolas SURRIBAS) Date: Thu, 03 Feb 2022 14:18:19 +0000 Subject: [New-bugs-announce] [issue46625] timeout option of socket.create_connection is not respected Message-ID: <1643897899.25.0.188715237752.issue46625@roundup.psfhosted.org> New submission from Nicolas SURRIBAS : When passing to socket.create_connection a timeout option above (approximately) 127 seconds, the timeout is not respected. Code to reproduce the issue:

    import socket
    from time import monotonic

    print(socket.getdefaulttimeout())

    start = monotonic()
    try:
        socket.create_connection(("1.1.1.1", 21), 300)
    except Exception as exception:
        print(exception)
    print(monotonic() - start)

Output at execution:

    None
    [Errno 110] Connection timed out
    129.3075186319984

Expected behavior would be that the "Connection timed out" exception is raised after 300 seconds, as given in argument, not 129.
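The ~127 second ceiling likely comes from the OS abandoning the TCP handshake (on Linux, the default SYN retransmission budget adds up to about 127 s) before the requested timeout elapses. A user-level sketch that keeps retrying until the full deadline has truly passed (the function name and retry policy are assumptions, not stdlib behaviour):

```python
import socket
import time

def create_connection_with_deadline(address, timeout):
    """Retry socket.create_connection() until `timeout` seconds have
    really elapsed, since a single connect attempt may be cut short by
    OS-level limits (e.g. ~127 s of SYN retransmissions on Linux)."""
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError(f"no connection to {address!r} after {timeout} s")
        try:
            return socket.create_connection(address, timeout=remaining)
        except OSError:
            # Re-raise once the overall deadline has passed; otherwise
            # pause briefly so immediate failures don't busy-loop.
            if deadline - time.monotonic() <= 0:
                raise
            time.sleep(min(0.1, max(deadline - time.monotonic(), 0)))
```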
Observed with Python 3.9.1 ---------- components: IO messages: 412443 nosy: Nicolas SURRIBAS priority: normal severity: normal status: open title: timeout option of socket.create_connection is not respected type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 13:21:59 2022 From: report at bugs.python.org (Benjamin Peterson) Date: Thu, 03 Feb 2022 18:21:59 +0000 Subject: [New-bugs-announce] [issue46626] expose IP_BIND_ADDRESS_NO_PORT linux socket option Message-ID: <1643912519.95.0.96688411003.issue46626@roundup.psfhosted.org> Change by Benjamin Peterson : ---------- components: Library (Lib) nosy: benjamin.peterson priority: normal severity: normal status: open title: expose IP_BIND_ADDRESS_NO_PORT linux socket option type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 13:40:17 2022 From: report at bugs.python.org (J.B. Langston) Date: Thu, 03 Feb 2022 18:40:17 +0000 Subject: [New-bugs-announce] [issue46627] Regex hangs indefinitely Message-ID: <1643913617.99.0.318274176206.issue46627@roundup.psfhosted.org> New submission from J.B. Langston : The following code will cause Python's regex engine to hang apparently indefinitely:

    import re

    message = "Flushed to [BigTableReader(path='/data/cassandra/data/log/logEntry_202202-e68971800b2711ecaf770d5fa3f5ae87/md-112-big-Data.db')] (1 sstables, 8,650MiB), biggest 8,650MiB, smallest 8,650MiB"
    regex = re.compile(r"Flushed to \[(?P[^]]+)+\] \((?P[^ ]+) sstables, (?P[^)]+)\), biggest (?P[^,]+), smallest (?P[^ ]+)( \((?P\d+)ms\))?")
    regex.match(message)

This may be a case of exponential backtracking similar to #35915 or #30973. Both of these issues have been closed as Won't Fix, and I suspect my issue is similar.
The use of commas for decimal points in the input string was not anticipated but happened due to localization of the logs that the message came from. The regex works properly when the decimal point is a period. I will try to rewrite my regex to address this specific issue, but it's hard to anticipate every possible input and craft a bulletproof regex, so something like this can be used for a denial of service attack (intentional or not). In this case the regex was used in an automated import process and caused the process to back up for many hours before someone noticed. Maybe a solution could be to add a timeout option to the regex engine so it will give up and throw an exception if the regex executes for longer than the configured timeout. ---------- components: Regular Expressions messages: 412450 nosy: ezio.melotti, jblangston, mrabarnett priority: normal severity: normal status: open title: Regex hangs indefinitely type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 14:14:22 2022 From: report at bugs.python.org (Paul Koning) Date: Thu, 03 Feb 2022 19:14:22 +0000 Subject: [New-bugs-announce] [issue46628] Can't install YARL Message-ID: <1643915662.48.0.0476331394582.issue46628@roundup.psfhosted.org> New submission from Paul Koning : Trying to install "aiohttp" with pip I get a compile error installing "yarl". I get the same error when I install just that module. But it installs fine on 3.10. This is on an Apple M1 (ARM64) machine.
---------- components: macOS files: yarl.log messages: 412453 nosy: ned.deily, pkoning, ronaldoussoren priority: normal severity: normal status: open title: Can't install YARL type: compile error versions: Python 3.11 Added file: https://bugs.python.org/file50602/yarl.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 15:01:49 2022 From: report at bugs.python.org (Steve Dower) Date: Thu, 03 Feb 2022 20:01:49 +0000 Subject: [New-bugs-announce] [issue46629] Cannot sideload MSIX package on Windows Message-ID: <1643918509.02.0.11986541964.issue46629@roundup.psfhosted.org> New submission from Steve Dower : We need to update PC/classicAppCompat.can.xml for our new certificate and email Microsoft to get it signed again. ---------- assignee: steve.dower messages: 412461 nosy: steve.dower priority: normal severity: normal status: open title: Cannot sideload MSIX package on Windows versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 3 15:37:52 2022 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 03 Feb 2022 20:37:52 +0000 Subject: [New-bugs-announce] [issue46630] IDLE: Set query focus to entry box on Windows Message-ID: <1643920672.2.0.211419458926.issue46630@roundup.psfhosted.org> New submission from Terry J. Reedy : On Mac, and I presume *nix in general, query boxes open with the focus on the first entry box, with the cursor displayed. One can immediately enter a line number, dotted module name, or whatever. On Windows, since 3.9, one must hit Tab or click on the entry box to set the focus. If a blank entry is an error, one can even click on OK or hit Enter, and the focus will move only after an error message. idlelib/query.py already has self.entry.focus_set. Why did that stop working in 3.9?
All patches to query.py were before May 2021 and backported to 3.8. Perhaps the upgrade from tk 8.6.9 to 8.6.12 had an effect, given the code as it is. Text widgets have the same issue, and the Editor window has 'text.focus_set' in '__init__' and that works. Whatever the cause, moving entry.focus_set() to just after self.deiconify() works without affecting unittests, both in the Windows repository and 3.11 installed on macOS. ---------- assignee: terry.reedy components: IDLE messages: 412465 nosy: terry.reedy priority: normal severity: normal status: open title: IDLE: Set query focus to entry box on Windows type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 00:55:34 2022 From: report at bugs.python.org (Eryk Sun) Date: Fri, 04 Feb 2022 05:55:34 +0000 Subject: [New-bugs-announce] [issue46631] Implement a "strict" mode for getpass.getuser() Message-ID: <1643954134.16.0.436701246605.issue46631@roundup.psfhosted.org> New submission from Eryk Sun : getpass.getuser() checks the environment variables LOGNAME (login name), USER, LNAME, and USERNAME, in that order. In Windows, LOGNAME, USER, and LNAME have no conventional usage. I think there should be a strict mode that restricts getuser() to check only USERNAME in Windows and only LOGNAME in POSIX [1]. If the login variable isn't defined, it should fall back on using the system API, based on the user ID in POSIX and the logon ID in Windows. For the fallback in Windows, the _winapi module could implement GetCurrentProcessToken(), GetTokenInformation(), and LsaGetLogonSessionData(). For TokenStatistics, return a dict with just "AuthenticationId". For LsaGetLogonSessionData(), return a dict with just "UserName". GetCurrentProcessToken() returns a pseudohandle (-4), which should not be closed. For example, assuming _winapi wraps the required functions:

    def getuser(strict=False):
        """Get the username from the environment or password database.

        First try various environment variables. If strict, check only
        LOGNAME in POSIX and only USERNAME in Windows. As a fallback, in
        POSIX get the user name from the password database, and in
        Windows get the user name from the logon-session data of the
        current process.
        """
        posix = sys.platform != 'win32'
        if strict:
            names = ('LOGNAME',) if posix else ('USERNAME',)
        else:
            names = ('LOGNAME', 'USER', 'LNAME', 'USERNAME')
        for name in names:
            if user := os.environ.get(name):
                return user
        if posix:
            import pwd
            return pwd.getpwuid(os.getuid())[0]
        import _winapi
        logon_id = _winapi.GetTokenInformation(
            _winapi.GetCurrentProcessToken(),
            _winapi.TokenStatistics)['AuthenticationId']
        return _winapi.LsaGetLogonSessionData(logon_id)['UserName']

Like WinAPI GetUserNameW(), the above fallback returns the logon user name instead of the account name of the token user. As far as I know, the user name and the account name only differ for the builtin service account logons "SYSTEM" (999) and "NETWORK SERVICE" (996), for which the user name is the machine security principal (i.e. the machine's NETBIOS name plus "$"). The user name of the builtin "LOCAL SERVICE" logon (997), on the other hand, is just the "LOCAL SERVICE" account name, since this account lacks network access. Unlike GetUserNameW(), the above code uses the process token instead of the effective token. This is like POSIX getuid(), whereas what GetUserNameW() does is like geteuid(). getuser() could implement an `effective` option to return the effective user name. In Windows this would switch to calling GetCurrentThreadEffectiveToken() instead of GetCurrentProcessToken().
---

[1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html

---------- components: Library (Lib), Windows messages: 412495 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Implement a "strict" mode for getpass.getuser() type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 04:54:45 2022 From: report at bugs.python.org (STINNER Victor) Date: Fri, 04 Feb 2022 09:54:45 +0000 Subject: [New-bugs-announce] [issue46632] test_ssl: 2 tests fail on cstratak-CentOS9-fips-x86_64 Message-ID: <1643968485.02.0.29316293905.issue46632@roundup.psfhosted.org> New submission from STINNER Victor : test_load_verify_cadata() and test_connect_cadata() of test_ssl fail on cstratak-CentOS9-fips-x86_64 (with OpenSSL FIPS mode enabled): https://buildbot.python.org/all/#builders/828/builds/63

test.pythoninfo:

    fips.linux_crypto_fips_enabled: 1
    fips.openssl_fips_mode: 1
    ssl.OPENSSL_VERSION: OpenSSL 3.0.1 14 Dec 2021
    ssl.OPENSSL_VERSION_INFO: (3, 0, 0, 1, 0)

Logs:

    ======================================================================
    ERROR: test_load_verify_cadata (test.test_ssl.ContextTests)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/buildbot/buildarea/3.x.cstratak-CentOS9-fips-x86_64.no-builtin-hashes-except-blake2/build/Lib/test/test_ssl.py", line 1494, in test_load_verify_cadata
        ctx.load_verify_locations(cadata=cacert_der)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ssl.SSLError: [EVP] unsupported (_ssl.c:3998)

    ======================================================================
    ERROR: test_connect_cadata (test.test_ssl.SimpleBackgroundTests)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/buildbot/buildarea/3.x.cstratak-CentOS9-fips-x86_64.no-builtin-hashes-except-blake2/build/Lib/test/test_ssl.py", line 2138, in test_connect_cadata
        ctx.load_verify_locations(cadata=der)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ssl.SSLError: [EVP] unsupported (_ssl.c:3998)

    Stdout:
    server: new connection from ('127.0.0.1', 49102)
    server: connection cipher is now ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)

---------- assignee: christian.heimes components: SSL, Tests messages: 412497 nosy: christian.heimes, vstinner priority: normal severity: normal status: open title: test_ssl: 2 tests fail on cstratak-CentOS9-fips-x86_64 versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 05:09:11 2022 From: report at bugs.python.org (STINNER Victor) Date: Fri, 04 Feb 2022 10:09:11 +0000 Subject: [New-bugs-announce] [issue46633] AddressSanitizer: Skip tests directly in Python, not with external config Message-ID: <1643969351.1.0.716031320154.issue46633@roundup.psfhosted.org> New submission from STINNER Victor : It seems like bpo-45200: "Address Sanitizer: libasan dead lock in pthread_create() (test_multiprocessing_fork.test_get() hangs)" is not fixed yet. In the GitHub Actions job, test_multiprocessing_fork is skipped because it's too slow, so the job doesn't hang. Yesterday, I modified the ASAN buildbot to double its timeout from 20 to 40 minutes: https://github.com/python/buildmaster-config/commit/5a37411e75c9475d48eabaac18102a3c9fc2d467 But it's of no use: when it hangs, it hangs forever. Example on the AMD64 Arch Linux Asan Debug 3.9 buildbot (with the new config):

    ---
    (test.test_multiprocessing_fork.WithProcessesTestPicklingConnections) ... ok
    Timeout (0:35:00)!
    ---

https://buildbot.python.org/all/#/builders/588/builds/332 Tests are tuned for ASAN, but the configuration is copied and inconsistent between the GitHub Actions job and the buildbot configuration.
I propose to move this configuration directly into Python. test_decimal.py checks for "-fsanitize=address" in CFLAGS and skips some tests if it's present. ---------- components: Tests messages: 412499 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: AddressSanitizer: Skip tests directly in Python, not with external config versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 05:38:53 2022 From: report at bugs.python.org (Erlend E. Aasland) Date: Fri, 04 Feb 2022 10:38:53 +0000 Subject: [New-bugs-announce] [issue46634] [sqlite3] speed up cursor.execute*() Message-ID: <1643971133.68.0.0138168910429.issue46634@roundup.psfhosted.org> New submission from Erlend E. Aasland : `pysqlite_connection_execute_impl()` and friends (executemany, executescript) go all the way through the Call API just to call `pysqlite_connection_cursor_impl()`. We can save a lot of calls by calling the cursor _impl function directly; after all, it lives in the same file scope as the callers.
A quick bench (sqlitesynth) shows a small speedup:

Mean +- std dev: [main] 9.55 us +- 0.25 us -> [patched] 9.32 us +- 0.23 us: 1.02x faster

(Side effect: will get rid of _Py_IDENTIFIER(cursor) in sqlite3) ---------- components: Extension Modules messages: 412503 nosy: erlendaasland priority: normal severity: normal status: open title: [sqlite3] speed up cursor.execute*() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 06:35:56 2022 From: report at bugs.python.org (Tasos Papastylianou) Date: Fri, 04 Feb 2022 11:35:56 +0000 Subject: [New-bugs-announce] [issue46635] unittest.defaultTestLoader.discover fails for namespace packages Message-ID: <1643974556.47.0.0758530865107.issue46635@roundup.psfhosted.org> New submission from Tasos Papastylianou : Back in Python 3.6.9, attempting to access __file__ on a namespace package resulted in an attribute error. From at least 3.8 onwards, this behaviour seems to have changed, and __file__ simply returns None instead. This seems to have broken unittest discovery. Looking at the code, discover still seems to rely on a try/except block in order to test for a namespace package. Now that the attribute error is no longer present in later Python versions, discover simply accepts the None value for __file__, and fails further down the line when attempting to canonicalise a path containing a None value (the error effectively expects a string). On my system with Python 3.8, the relevant files/lines are:

- /usr/lib/python3.8/unittest/loader.py()discover(): The try block starting at line 304 checks for the module's __file__ attribute, expecting to redirect to line 307 to "look for namespace packages" in case of an attribute error. Obviously, now that __file__ returns None instead, this logic fails.
- The call to dirname in line 306 therefore proceeds normally, passing a None as a file, which then fails with a TypeError: expected str, bytes or os.PathLike object, not NoneType.

See https://github.com/tpapastylianou/self-contained-runnable-python-package-template/issues/13# for the example in the wild that prompted the discovery of this bug. ---------- components: Tests messages: 412505 nosy: tpapastylianou priority: normal severity: normal status: open title: unittest.defaultTestLoader.discover fails for namespace packages type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 09:49:36 2022 From: report at bugs.python.org (chen-y0y0) Date: Fri, 04 Feb 2022 14:49:36 +0000 Subject: [New-bugs-announce] [issue46636] Bugs of 2to3 Message-ID: <1643986176.49.0.812119099931.issue46636@roundup.psfhosted.org> New submission from chen-y0y0 : I have a file named foo.py:

    try :
        input = raw_input
        int = long
        chr = unichr
        range = xrange
    except NameError :
        pass

When I process this file with 2to3, it shows:

    --- foo.py (original)
    +++ foo.py (refactored)
    @@ -1,7 +1,7 @@
     try :
         input = raw_input
    -    int = long
    -    chr = unichr
    +    int = int
    +    chr = chr
         range = xrange
     except NameError :
         pass
    RefactoringTool: Files that need to be modified:
    RefactoringTool: foo.py

I don't know why it modifies the Python 2.x and 3.x compatible code.
---------- components: 2to3 (2.x to 3.x conversion tool) messages: 412508 nosy: prasechen priority: normal severity: normal status: open title: Bugs of 2to3 type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 10:54:15 2022 From: report at bugs.python.org (Anders Hovmöller) Date: Fri, 04 Feb 2022 15:54:15 +0000 Subject: [New-bugs-announce] [issue46637] Incorrect error message: "missing 1 required positional argument" Message-ID: <1643990055.45.0.970752611144.issue46637@roundup.psfhosted.org> New submission from Anders Hovmöller :

    >>> def foo(a):
    ...     pass
    ...
    >>> foo()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: foo() missing 1 required positional argument: 'a'

This error is incorrect. It says "positional argument", but it's just "argument". The proof is that if you call it with foo(a=3) it works fine. ---------- components: Interpreter Core messages: 412510 nosy: Anders.Hovmöller priority: normal severity: normal status: open title: Incorrect error message: "missing 1 required positional argument" type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 11:16:10 2022 From: report at bugs.python.org (Steve Dower) Date: Fri, 04 Feb 2022 16:16:10 +0000 Subject: [New-bugs-announce] [issue46638] Inconsistent registry use in Windows Store package Message-ID: <1643991370.43.0.929070469703.issue46638@roundup.psfhosted.org> New submission from Steve Dower : The build of the Store package detects whether the build PC supports disabling registry virtualisation or not when deciding whether to add it to the manifest.
Because our release builds just moved from the windows-2019 image to the windows-2022 image, this setting changed and now builds have virtualisation disabled. While this is probably desirable for some users, having it happen without warning is bad. I'll check whether we can leave it unconditionally enabled for 3.11 and still install on older Windows versions. If it won't install, we'll just have to leave it disabled. ---------- assignee: steve.dower components: Windows messages: 412513 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Inconsistent registry use in Windows Store package type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 14:25:23 2022 From: report at bugs.python.org (Vladimir Feinberg) Date: Fri, 04 Feb 2022 19:25:23 +0000 Subject: [New-bugs-announce] [issue46639] Ceil division with math.ceildiv Message-ID: <1644002723.68.0.838942257454.issue46639@roundup.psfhosted.org> New submission from Vladimir Feinberg : I have a request related to the rejected proposal (https://bugs.python.org/issue43255) to introduce a ceildiv operator. I frequently find myself wishing for a ceildiv function which computes `ceil(x/y)` for integers `x,y`. This comes up all the time when "batching" some resource and finding total consumption, be it for memory allocation or GUI manipulation or time bucketing or whatnot. It is easy enough to implement this inline, but `math.ceildiv` would express intent clearly. 
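Such a helper is small; a sketch of what is being requested, spot-checked against math.ceil over exact rationals (the stand-alone `ceildiv` here is the proposed behaviour, not an existing stdlib function):

```python
import math
from fractions import Fraction

def ceildiv(a: int, b: int) -> int:
    """Return ceil(a / b) using only integer arithmetic (exact for big ints)."""
    return -(-a // b)

# Spot-check against math.ceil, using Fraction to avoid float rounding:
for a in range(-7, 8):
    for b in (-3, -2, -1, 1, 2, 3):
        assert ceildiv(a, b) == math.ceil(Fraction(a, b))

print(ceildiv(8650, 1024))  # e.g. number of 1 KiB buckets for 8650 bytes -> 9
```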
```
# x, y, out: int

# (A)
import math
out = math.ceil(x / y)
# clear intent but subtly changes type, and also incorrect for big ints

# (B)
out = int(math.ceil(x / y))
# wordy, especially if using this multiple times, still technically wrong

# (C)
out = (x + y - 1) // y
# too clever if you haven't seen it before, does it have desirable semantics for negatives?

# (D)
out = -(-x // y)

def ceildiv(a: int, b: int) -> int:
    # Clearest and correct, but should my library code really invent this wheel?
    """Returns ceil(a/b)."""
    return -(-a // b)

out = ceildiv(x, y)
```

Even though these are all "one-liners", as you can see, leaving people to hand-write a `ceildiv` might result in bugs or unclear handling of negatives. ---------- components: Library (Lib) messages: 412527 nosy: Vladimir Feinberg priority: normal severity: normal status: open title: Ceil division with math.ceildiv type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 15:24:09 2022 From: report at bugs.python.org (STINNER Victor) Date: Fri, 04 Feb 2022 20:24:09 +0000 Subject: [New-bugs-announce] [issue46640] Python can now use the C99 NAN constant Message-ID: <1644006249.4.0.292742084923.issue46640@roundup.psfhosted.org> New submission from STINNER Victor : While debugging a GCC regression (*) on "HUGE_VAL * 0" used by the Py_NAN macro, I noticed that Python can now use the C99 "NAN" constant. (*) https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104389 In bpo-45440, I already removed legacy code for pre-C99 support and old platforms: "Building Python now requires a C99 <math.h> header file providing the following functions: copysign(), hypot(), isfinite(), isinf(), isnan(), round()." Attached patch modifies Py_NAN to simply reuse NAN. mathmodule.c and cmathmodule.c m_nan() still use _Py_dg_stdnan() by default (if PY_NO_SHORT_FLOAT_REPR is not defined).
---------- components: Interpreter Core messages: 412531 nosy: vstinner priority: normal severity: normal status: open title: Python can now use the C99 NAN constant versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 15:26:32 2022 From: report at bugs.python.org (Br Km) Date: Fri, 04 Feb 2022 20:26:32 +0000 Subject: [New-bugs-announce] [issue46641] multiplication error 2.2 and 2.1 Message-ID: <1644006392.6.0.966274499639.issue46641@roundup.psfhosted.org> New submission from Br Km :

Python 3.6.9 (default, Dec 8 2021, 21:08:43) [GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 2.2 * 2.1
4.620000000000001
>>>

---------- messages: 412532 nosy: jzradom priority: normal severity: normal status: open title: multiplication error 2.2 and 2.1 type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 16:59:21 2022 From: report at bugs.python.org (Gregory Beauregard) Date: Fri, 04 Feb 2022 21:59:21 +0000 Subject: [New-bugs-announce] [issue46642] typing: tested TypeVar instance subclass TypeError is incidental Message-ID: <1644011961.17.0.0152846458354.issue46642@roundup.psfhosted.org> New submission from Gregory Beauregard : https://github.com/python/cpython/blob/bf95ff91f2c1fc5a57190491f9ccdc63458b089e/Lib/test/test_typing.py#L227-L230

typing's testcases contain the following test to ensure instances of TypeVar cannot be subclassed:

    def test_cannot_subclass_vars(self):
        with self.assertRaises(TypeError):
            class V(TypeVar('T')):
                pass

The reason this raises a TypeError is incidental and subject to behavior change, not because doing so is prohibited per se; what's happening is that the class creation does the equivalent of type(TypeVar('T'))(name, bases, namespace), but this calls TypeVar's __init__ function with these
items as the TypeVar constraints. TypeVar runs typing._type_check on the type constraints passed to it, and the literals for the namespace/name do not pass the callable() check in typing._type_check, causing it to raise a TypeError. I find it dubious that this is the behavior the testcase is intending to test, and the error it gives is confusing.

I propose we add an __mro_entries__ method to TypeVar that does nothing but raise TypeError, to properly handle this case. I can write this patch.

---------- components: Library (Lib) messages: 412544 nosy: AlexWaygood, GBeauregard, Jelle Zijlstra, gvanrossum, kj, sobolevn priority: normal severity: normal status: open title: typing: tested TypeVar instance subclass TypeError is incidental type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 17:19:25 2022 From: report at bugs.python.org (Gregory Beauregard) Date: Fri, 04 Feb 2022 22:19:25 +0000 Subject: [New-bugs-announce] [issue46643] typing.Annotated cannot wrap typing.ParamSpec args/kwargs Message-ID: <1644013165.99.0.0164231326825.issue46643@roundup.psfhosted.org> New submission from Gregory Beauregard : Consider the following.

```
import logging
from typing import Annotated, Callable, ParamSpec, TypeVar

T = TypeVar("T")
P = ParamSpec("P")

def add_logging(f: Callable[P, T]) -> Callable[P, T]:
    """A type-safe decorator to add logging to a function."""

    def inner(*args: Annotated[P.args, "meta"], **kwargs: P.kwargs) -> T:
        logging.info(f"{f.__name__} was called")
        return f(*args, **kwargs)

    return inner

@add_logging
def add_two(x: float, y: float) -> float:
    """Add two numbers together."""
    return x + y
```

This raises an error at runtime because P.args/P.kwargs cannot pass the typing._type_check called by Annotated, because they are not callable(). This prevents being able to use Annotated on these type annotations.
This can be fixed by adding __call__ methods that raise to typing.ParamSpecArgs and typing.ParamSpecKwargs to match other typeforms. I can write this patch given agreement ---------- components: Library (Lib) messages: 412546 nosy: AlexWaygood, GBeauregard, Jelle Zijlstra, gvanrossum, kj priority: normal severity: normal status: open title: typing.Annotated cannot wrap typing.ParamSpec args/kwargs type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 18:02:57 2022 From: report at bugs.python.org (Gregory Beauregard) Date: Fri, 04 Feb 2022 23:02:57 +0000 Subject: [New-bugs-announce] [issue46644] typing: remove callable() check from typing._type_check Message-ID: <1644015777.96.0.875814750019.issue46644@roundup.psfhosted.org> New submission from Gregory Beauregard : I propose removing the callable() check[1] from typing._type_check. This restriction is usually met in typeform instances by implementing a __call__ method that raises at runtime[2]. _type_check is called at runtime in a few disparate locations, such as in an argument to typing.Annotated or for certain stringified annotations in typing.get_type_hints. Because the requirement to be callable is unexpected and shows up in situations not easily discoverable during development or common typing usage, it is the cause of several existing cpython bugs and will likely continue to be the cause of bugs in typeforms outside of cpython. Known cpython bugs caused by the callable() check are bpo-46643, bpo-44799, a substantial contributing factor to bpo-46642, and partly bpo-46511. I discovered bpo-46643 with only a cursory check of typing.py while writing this proposal. Moreover, it doesn't make any particular technical sense to me why it should be required to add an awkward __call__ method. 
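For reference, the workaround referenced in [2] boils down to the following pattern (a simplified sketch with a made-up class name, not the actual typing.py source):

```python
# Sketch of the workaround pattern: __call__ exists only so that instances
# pass the callable() test in typing._type_check, and it raises if actually used.
class SpecialFormSketch:
    def __call__(self, *args, **kwds):
        raise TypeError(f"{self!r} is not callable")

form = SpecialFormSketch()
print(callable(form))  # True, so the callable() gate in _type_check is satisfied
```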
Removing the callable() check fails 10 tests:

- 7 tests: checking that an int literal is not a type
- 2 tests: testing that list literals are not valid types (e.g. [3] raises a TypeError because the literal [('name', str), ('id', int)] doesn't pass callable())
- 1 test: bpo-46642

The responsibility of determining these invalid typeforms (e.g. int literals) would need to be passed to a static type checker. If it's desired to do this at runtime, it's my opinion that a different check would be more appropriate. Have I missed any reasons for the callable() check? Can I remove the check and adjust or remove the tests?

[1] https://github.com/python/cpython/blob/bf95ff91f2c1fc5a57190491f9ccdc63458b089e/Lib/typing.py#L183-L184
[2] https://github.com/python/cpython/blob/bf95ff91f2c1fc5a57190491f9ccdc63458b089e/Lib/typing.py#L392-L393
[3] https://github.com/python/cpython/blob/bf95ff91f2c1fc5a57190491f9ccdc63458b089e/Lib/test/test_typing.py#L4262-L4263

---------- components: Library (Lib) messages: 412548 nosy: AlexWaygood, GBeauregard, Jelle Zijlstra, gvanrossum, kj, levkivskyi, sobolevn priority: normal severity: normal status: open title: typing: remove callable() check from typing._type_check type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 4 21:02:13 2022 From: report at bugs.python.org (Josh Triplett) Date: Sat, 05 Feb 2022 02:02:13 +0000 Subject: [New-bugs-announce] [issue46645] Portable python3 shebang for Windows, macOS, and Linux Message-ID: <1644026533.87.0.375364061362.issue46645@roundup.psfhosted.org> New submission from Josh Triplett : I'm writing this issue on behalf of the Rust project. The build system for the Rust compiler is a Python 3 script `x.py`, which orchestrates the build process for a user even if they don't already have Rust installed. (For instance, `x.py build`, `x.py test`, and various command-line arguments for more complex cases.)
We currently run into various issues making this script easy for people to use on all common platforms people build Rust on: Windows, macOS, and Linux.

If we use a shebang of `#!/usr/bin/env python3`, then x.py works for macOS and Linux users, and also works on Windows systems that install Python via the Windows Store, but fails to run on Windows systems that install via the official Python installer, requiring users to explicitly invoke Python 3 on the script, and adding friction, support issues, and complexity to our documentation to help users debug that situation.

If we use a shebang of `#!/usr/bin/env python`, then x.py works for Windows users, fails on some modern macOS systems, works on other modern macOS systems (depending on installation method, I think, e.g. Homebrew vs Apple), fails on some modern Linux systems, and on macOS and Linux systems where it *does* work, it might be python2 or python3. So in practice, people often have to explicitly run `python3 x.py`, which again results in friction, support issues, and complexity in our documentation.

We've even considered things like `#!/bin/sh` and then writing a shell script hidden inside a Python triple-quoted string, but that doesn't work well on Windows, where we can't count on the presence of a shell.

We'd love to write a single shebang that works for all of Windows, macOS, and Linux systems, and doesn't result in recurring friction or support issues for us across the wide range of systems that our users use.

As far as we can tell, `#!/usr/bin/env python3` would work on all platforms, if the Python installer for Windows shipped a `python3.exe` and handled that shebang by using `python3.exe` as the interpreter. Is that something that the official Python installer could consider adding, to make it easy for us to supply cross-platform Python 3 scripts that work out of the box for all our users?
Thank you, Josh Triplett, on behalf of many Rust team members

---------- messages: 412553 nosy: joshtriplett priority: normal severity: normal status: open title: Portable python3 shebang for Windows, macOS, and Linux type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 02:05:52 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Sat, 05 Feb 2022 07:05:52 +0000 Subject: [New-bugs-announce] [issue46646] `address` arg can be `bytes` for `ip_*` functions in `ipaddress` module Message-ID: <1644044752.78.0.400898810491.issue46646@roundup.psfhosted.org> New submission from Nikita Sobolev : Right now the docs say:

> ipaddress.ip_interface(address)
> Return an IPv4Interface or IPv6Interface object depending on the IP address passed as argument. **address is a string or integer** representing the IP address. Either IPv4 or IPv6 addresses may be supplied; integers less than 2**32 will be considered to be IPv4 by default. A ValueError is raised if address does not represent a valid IPv4 or IPv6 address.

Note the `address is a string or integer` part. But this is not true. Counter-example:

```
>>> import ipaddress
>>> ipaddress.ip_interface(b'0000')
IPv4Interface('48.48.48.48/32')
>>> ipaddress.ip_interface(b'1111')
IPv4Interface('49.49.49.49/32')
```

So the packed version that accepts `bytes` should also be mentioned. For `ip_address`, the accepted types are not mentioned at all:

> ipaddress.ip_address(address)
> Return an IPv4Address or IPv6Address object depending on the IP address passed as argument. Either IPv4 or IPv6 addresses may be supplied; integers less than 2**32 will be considered to be IPv4 by default. A ValueError is raised if address does not represent a valid IPv4 or IPv6 address.

I will send a PR with proposed changes.
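For what it's worth, the bytes path is really the packed-address form: a 4-byte value is parsed as packed IPv4 and a 16-byte value as packed IPv6, which is why the b'0000' example above works at all (a sketch; the addresses are made up):

```python
import ipaddress

# 4 bytes -> interpreted as a packed IPv4 address,
# 16 bytes -> interpreted as a packed IPv6 address
print(ipaddress.ip_address(b"\xc0\xa8\x00\x01"))  # 192.168.0.1
print(ipaddress.ip_address(bytes(16)))            # ::
```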
---------- assignee: docs at python components: Documentation messages: 412562 nosy: docs at python, sobolevn priority: normal severity: normal status: open title: `address` arg can be `bytes` for `ip_*` functions in `ipaddress` module type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 05:46:54 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Sat, 05 Feb 2022 10:46:54 +0000 Subject: [New-bugs-announce] [issue46647] `test_functools` unexpected failures when C `_functoolsmodule` is missing Message-ID: <1644058014.78.0.386662810799.issue46647@roundup.psfhosted.org> New submission from Nikita Sobolev : Reproduction steps:

1. Add to `Setup.local`:

```
*disabled*
_functoolsmodule
```

2. `./configure && make -j`. Then, ensure that this module is not available:

```
? ./python.exe -c 'import _functools'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named '_functools'
```

3.
Run `test_functools`: ``` ====================================================================== ERROR: test_bad_cmp (test.test_functools.TestCmpToKeyC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 905, in test_bad_cmp key = self.cmp_to_key(cmp1) ^^^^^^^^^^^^^^^^^^^^^ TypeError: cmp_to_key() takes 1 positional argument but 2 were given ====================================================================== ERROR: test_cmp_to_key (test.test_functools.TestCmpToKeyC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 869, in test_cmp_to_key key = self.cmp_to_key(cmp1) ^^^^^^^^^^^^^^^^^^^^^ TypeError: cmp_to_key() takes 1 positional argument but 2 were given ====================================================================== ERROR: test_cmp_to_key_arguments (test.test_functools.TestCmpToKeyC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 885, in test_cmp_to_key_arguments key = self.cmp_to_key(mycmp=cmp1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: cmp_to_key() got multiple values for argument 'mycmp' ====================================================================== ERROR: test_hash (test.test_functools.TestCmpToKeyC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 941, in test_hash key = self.cmp_to_key(mycmp) ^^^^^^^^^^^^^^^^^^^^^^ TypeError: cmp_to_key() takes 1 positional argument but 2 were given ====================================================================== ERROR: test_obj_field (test.test_functools.TestCmpToKeyC) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 920, in test_obj_field key = self.cmp_to_key(mycmp=cmp1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: cmp_to_key() got multiple values for argument 'mycmp' ====================================================================== ERROR: test_sort_int (test.test_functools.TestCmpToKeyC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 926, in test_sort_int self.assertEqual(sorted(range(5), key=self.cmp_to_key(mycmp)), ^^^^^^^^^^^^^^^^^^^^^^ TypeError: cmp_to_key() takes 1 positional argument but 2 were given ====================================================================== ERROR: test_sort_int_str (test.test_functools.TestCmpToKeyC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 934, in test_sort_int_str values = sorted(values, key=self.cmp_to_key(mycmp)) ^^^^^^^^^^^^^^^^^^^^^^ TypeError: cmp_to_key() takes 1 positional argument but 2 were given ====================================================================== ERROR: test_pickle (test.test_functools.TestPartialC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 258, in test_pickle f_copy = pickle.loads(pickle.dumps(f, proto)) ^^^^^^^^^^^^^^^^^^^^^^ _pickle.PicklingError: Can't pickle : it's not the same object as functools.partial ====================================================================== ERROR: test_recursive_pickle (test.test_functools.TestPartialC) ---------------------------------------------------------------------- Traceback (most recent call 
last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 343, in test_recursive_pickle pickle.dumps(f, proto) ^^^^^^^^^^^^^^^^^^^^^^ _pickle.PicklingError: Can't pickle : it's not the same object as functools.partial ====================================================================== ERROR: test_iterator_usage (test.test_functools.TestReduceC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 843, in test_iterator_usage self.assertEqual(self.reduce(add, SequenceClass(5)), 10) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/functools.py", line 249, in reduce it = iter(sequence) ^^^^^^^^^^^^^^ TypeError: 'builtin_function_or_method' object is not iterable ====================================================================== ERROR: test_reduce (test.test_functools.TestReduceC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 794, in test_reduce self.assertEqual(self.reduce(add, ['a', 'b', 'c'], ''), 'abc') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: reduce() takes from 2 to 3 positional arguments but 4 were given ====================================================================== FAIL: test_disallow_instantiation (test.test_functools.TestCmpToKeyC) ---------------------------------------------------------------------- TypeError: type() takes 1 or 3 arguments During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 955, in test_disallow_instantiation support.check_disallow_instantiation( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py", line 2121, in check_disallow_instantiation 
testcase.assertRaisesRegex(TypeError, msg, tp, *args, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: "cannot create 'type' instances" does not match "type() takes 1 or 3 arguments" ====================================================================== FAIL: test_attributes_unwritable (test.test_functools.TestPartialC) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 402, in test_attributes_unwritable self.assertRaises(AttributeError, setattr, p, 'func', map) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: AttributeError not raised by setattr ====================================================================== FAIL: test_attributes_unwritable (test.test_functools.TestPartialCSubclass) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/test/test_functools.py", line 402, in test_attributes_unwritable self.assertRaises(AttributeError, setattr, p, 'func', map) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: AttributeError not raised by setattr ---------------------------------------------------------------------- Ran 249 tests in 0.690s FAILED (failures=3, errors=11) test test_functools failed test_functools failed (11 errors, 3 failures) == Tests result: FAILURE == 1 test failed: test_functools Total duration: 1.3 sec Tests result: FAILURE ``` List of individual problems: 1. This function is defined assuming that `c_functools` always has `.lru_cache`: https://github.com/python/cpython/blob/fea7290a0ecee09bbce571d4d10f5881b7ea3485/Lib/test/test_functools.py#L1860-L1862 2. 
`TestLRUC` is never skipped: https://github.com/python/cpython/blob/fea7290a0ecee09bbce571d4d10f5881b7ea3485/Lib/test/test_functools.py#L1879-L1881 I think it should be, because there's no need to test `_lru_cache_wrapper` twice for the pure-Python implementation (the default if `_functools` is missing).

3. All similar modules tend to use `fresh=` in `import_fresh_module`, for example: https://github.com/python/cpython/blob/fea7290a0ecee09bbce571d4d10f5881b7ea3485/Lib/test/test_typing.py#L43-L44 But `test_functools` does not do this: https://github.com/python/cpython/blob/fea7290a0ecee09bbce571d4d10f5881b7ea3485/Lib/test/test_functools.py#L30 So, even if `_functools` is missing, `c_functools` will not be `None`; it will still be the `functools.py` module! And this causes multiple unexpected test failures above.

Related:
- https://github.com/python/cpython/pull/23405
- https://github.com/python/cpython/pull/23407

I will send a patch for this in a moment.

---------- components: Tests messages: 412565 nosy: rhettinger, shihai1991, sobolevn priority: normal severity: normal status: open title: `test_functools` unexpected failures when C `_functoolsmodule` is missing type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 06:45:33 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Sat, 05 Feb 2022 11:45:33 +0000 Subject: [New-bugs-announce] [issue46648] `test.test_urllib2.MiscTests.test_issue16464` started to fail Message-ID: <1644061533.84.0.594817920054.issue46648@roundup.psfhosted.org> New submission from Nikita Sobolev : Today I've noticed that a lot of CI runs fail because of this test.
Output: ``` ====================================================================== ERROR: test_issue16464 (test.test_urllib2.MiscTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/sobolev/Desktop/cpython/Lib/contextlib.py", line 155, in __exit__ self.gen.throw(typ, value, traceback) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/test/support/socket_helper.py", line 245, in transient_internet yield ^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/test/test_urllib2.py", line 1799, in test_issue16464 opener.open(request, "1".encode("us-ascii")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/urllib/request.py", line 525, in open response = meth(req, response) ^^^^^^^^^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/urllib/request.py", line 634, in http_response response = self.parent.error( ^^^^^^^^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/urllib/request.py", line 563, in error return self._call_chain(*args) ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/urllib/request.py", line 496, in _call_chain result = func(*args) ^^^^^^^^^^^ File "/Users/sobolev/Desktop/cpython/Lib/urllib/request.py", line 643, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ urllib.error.HTTPError: HTTP Error 404: Not Found ---------------------------------------------------------------------- Ran 1 test in 0.448s FAILED (errors=1) /Users/sobolev/Desktop/cpython/Lib/test/support/__init__.py:705: ResourceWarning: unclosed gc.collect() ResourceWarning: Enable tracemalloc to get the object allocation traceback test test_urllib2 failed test_urllib2 failed (1 error) == Tests result: FAILURE == ``` I can also reproduce this failure locally with: ``` ./python.exe -m test -v test_urllib2 -m test_issue16464 -u network ``` Related 
https://bugs.python.org/issue36019

---------- components: Tests messages: 412567 nosy: sobolevn priority: normal severity: normal status: open title: `test.test_urllib2.MiscTests.test_issue16464` started to fail type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 08:07:42 2022 From: report at bugs.python.org (Gabriele N Tornetta) Date: Sat, 05 Feb 2022 13:07:42 +0000 Subject: [New-bugs-announce] [issue46649] Propagate Python thread name to thread state structure Message-ID: <1644066462.37.0.754357637612.issue46649@roundup.psfhosted.org> New submission from Gabriele N Tornetta : For tools like Austin (https://github.com/P403n1x87/austin) it is currently quite challenging to derive the name of a thread based on the information exposed by the PyThreadState structure and the one stored in threading._active. I would like to propose propagating the thread name from the Thread object to the PyThreadState structure so that profiling information from tools like Austin could easily be enriched with the names of each sampled thread.
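A minimal sketch of the status quo (the thread name below is made up): the name is reachable from Python via the threading module, but nothing at the C level carries it, which is what out-of-process profilers would need.

```python
import threading

names = []

def worker():
    # The name lives only on the Python-level Thread object (threading._active);
    # the C-level PyThreadState structure has no corresponding field today.
    names.append(threading.current_thread().name)

t = threading.Thread(target=worker, name="austin-demo")
t.start()
t.join()
print(names)  # ['austin-demo']
```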
---------- components: C API messages: 412569 nosy: Gabriele Tornetta priority: normal severity: normal status: open title: Propagate Python thread name to thread state structure type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 10:50:10 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Sat, 05 Feb 2022 15:50:10 +0000 Subject: [New-bugs-announce] [issue46650] `priority` in `sched.scheduler` is not sufficiently tested Message-ID: <1644076210.74.0.445085597972.issue46650@roundup.psfhosted.org> New submission from Nikita Sobolev : Right now there is only a single test to ensure `priority` works correctly in `scheduler`: https://github.com/python/cpython/blob/fea7290a0ecee09bbce571d4d10f5881b7ea3485/Lib/test/test_sched.py#L90-L97 It looks like it is not enough. Why?

```
for priority in [1, 2, 3, 4, 5]:
    z = scheduler.enterabs(0.01, priority, fun, (priority,))
scheduler.run()
self.assertEqual(l, [1, 2, 3, 4, 5])
```

This test does not actually exercise different priorities. It only checks that an already-ordered sequence comes out in the same order, and it might be pure coincidence that the numbers match: they are spawned in this particular order. What if there are equal numbers, like `[1, 2, 1]`? I propose adding more examples to this test. PR is on its way.
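One possible stronger example along the proposed lines (a sketch, not the actual patch): schedule events at one shared absolute time with shuffled, duplicated priorities, so that only `priority` can explain the resulting order.

```python
import sched
import time

result = []
s = sched.scheduler(time.time, time.sleep)
t = time.time() + 0.05  # a single shared absolute time for every event

# Shuffled and duplicated priorities: the run order can only come from `priority`
for priority in [2, 1, 3, 1, 2]:
    s.enterabs(t, priority, result.append, (priority,))

s.run()
print(result)  # [1, 1, 2, 2, 3]
```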
---------- components: Tests messages: 412577 nosy: sobolevn priority: normal severity: normal status: open title: `priority` in `sched.scheduler` is not sufficiently tested type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 12:57:15 2022 From: report at bugs.python.org (STINNER Victor) Date: Sat, 05 Feb 2022 17:57:15 +0000 Subject: [New-bugs-announce] [issue46651] test_urllib2.test_issue16464() fails randomly Message-ID: <1644083835.92.0.540199213669.issue46651@roundup.psfhosted.org> New submission from STINNER Victor : test_urllib2.test_issue16464() fails randomly. It uses http://www.example.com/ server. Instead, I proposed to use http://httpbin.org/post URL which is written to support HTTP POST. $ ./python -m test test_urllib2 -u all -v -m test_issue16464 ====================================================================== ERROR: test_issue16464 (test.test_urllib2.MiscTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/python/main/Lib/contextlib.py", line 155, in __exit__ self.gen.throw(typ, value, traceback) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/test/support/socket_helper.py", line 245, in transient_internet yield ^^^^^ File "/home/vstinner/python/main/Lib/test/test_urllib2.py", line 1799, in test_issue16464 opener.open(request, "1".encode("us-ascii")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/urllib/request.py", line 525, in open response = meth(req, response) ^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/urllib/request.py", line 634, in http_response response = self.parent.error( ^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/urllib/request.py", line 563, in error return self._call_chain(*args) ^^^^^^^^^^^^^^^^^^^^^^^ File 
"/home/vstinner/python/main/Lib/urllib/request.py", line 496, in _call_chain result = func(*args) ^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/urllib/request.py", line 643, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ urllib.error.HTTPError: HTTP Error 404: Not Found ---------- components: Tests messages: 412586 nosy: vstinner priority: normal severity: normal status: open title: test_urllib2.test_issue16464() fails randomly versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 16:25:55 2022 From: report at bugs.python.org (Gabriele N Tornetta) Date: Sat, 05 Feb 2022 21:25:55 +0000 Subject: [New-bugs-announce] [issue46652] Use code.co_qualname to provide richer information Message-ID: <1644096355.01.0.0724658726819.issue46652@roundup.psfhosted.org> New submission from Gabriele N Tornetta : https://bugs.python.org/issue44530 introduced the co_qualname field to code objects. This could be used to, e.g., enrich the information provided by tracebacks. Consider this simple example:

~~~ python
import traceback

class Bogus:
    def __init__(self):
        traceback.print_stack()
        raise RuntimeError("Oh no!")

class Foo:
    def __init__(self):
        Bogus()

Foo()
~~~

The current output is

~~~
? python3.10 test_tb_format.py
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 15, in <module>
    Foo()
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 12, in __init__
    Bogus()
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 6, in __init__
    traceback.print_stack()
Traceback (most recent call last):
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 15, in <module>
    Foo()
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 12, in __init__
    Bogus()
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 7, in __init__
    raise RuntimeError("Oh no!")
RuntimeError: Oh no!
~~~

The proposed change is to use the co_qualname field instead of co_name to provide more immediate information about the distinct __init__ functions, viz.

~~~
? ./python test_tb_format.py
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 15, in <module>
    Foo()
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 12, in Foo.__init__
    Bogus()
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 6, in Bogus.__init__
    traceback.print_stack()
Traceback (most recent call last):
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 15, in <module>
    Foo()
    ^^^^^
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 12, in Foo.__init__
    Bogus()
    ^^^^^^^
  File "/home/gabriele/Projects/cpython/test_tb_format.py", line 7, in Bogus.__init__
    raise RuntimeError("Oh no!")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Oh no!
~~~

This makes it clear that two distinct __init__ functions are involved, without having to look at the sources.

---------- components: Interpreter Core messages: 412598 nosy: Gabriele Tornetta, pablogsal priority: normal severity: normal status: open title: Use code.co_qualname to provide richer information type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 17:20:58 2022 From: report at bugs.python.org (Josselin Poiret) Date: Sat, 05 Feb 2022 22:20:58 +0000 Subject: [New-bugs-announce] [issue46653] sys.path entries normalization in site.py doesn't follow POSIX symlink behaviour Message-ID: <1644099658.9.0.961977779463.issue46653@roundup.psfhosted.org> New submission from Josselin Poiret : Whenever sys.prefix contains a symlink followed by a '..', the corresponding part of sys.path entries will not refer to the parent directory of the directory pointed to by the symlink, but rather to the directory in which the symlink is.
Thus, it will be impossible to import standard Python modules installed at sys.prefix, among other things. Here is an example: Suppose you have installed Python with prefix /usr. Create a symlink /tmp/symlink pointing to /usr/lib, and launch `PYTHONHOME=/tmp/symlink/.. python3`. In that REPL, `import warnings` will fail to find the correct module, as evidenced by the value of `sys.path` containing entries such as `/tmp/lib/python3.X/` instead of the expected `/usr/lib/python3.X/`. This issue is caused by the makepath function (among others) in Lib/site.py using os.path.abspath instead of os.path.realpath, which does not follow POSIX as the documentation of os.path.normpath (used internally by abspath) suggests. I propose replacing all four instances of abspath in Lib/site.py with realpath instead. This is a breaking change for users who relied on non-conforming symlink semantics (although that user-base might be quite small), but in my opinion Python should be expected to follow the behaviour of the platform it is running on. This issue was raised while investigating a bug [1] in the relocatable packs feature of GNU Guix [2], which makes use of symlinks to achieve relocatability.
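The abspath/realpath divergence described above can be reproduced without Guix. A minimal sketch (POSIX only; the directory names are invented for illustration):

```python
import os
import tempfile

# Build <base>/usr/lib and a symlink <base>/symlink -> <base>/usr/lib.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "usr", "lib"))
link = os.path.join(base, "symlink")
os.symlink(os.path.join(base, "usr", "lib"), link)

path = os.path.join(link, "..")
# abspath collapses ".." purely lexically, so the symlink is never
# followed and we land back in the directory containing it:
print(os.path.abspath(path))   # <base>
# realpath resolves the symlink first, the POSIX-conforming answer:
print(os.path.realpath(path))  # <base>/usr (with <base> itself resolved)
```

This is exactly the makepath scenario: the lexical result points at a sibling of the symlink rather than at the symlink target's parent.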
[1] https://issues.guix.gnu.org/53258 [2] https://guix.gnu.org/en/blog/2018/tarballs-the-ultimate-container-image-format/ ---------- components: Library (Lib) messages: 412600 nosy: jpoiret priority: normal severity: normal status: open title: sys.path entries normalization in site.py doesn't follow POSIX symlink behaviour type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 5 17:27:39 2022 From: report at bugs.python.org (Mike Auty) Date: Sat, 05 Feb 2022 22:27:39 +0000 Subject: [New-bugs-announce] [issue46654] file_open doesn't handle UNC paths produced by pathlib's resolve() (but can handle UNC paths with additional slashes) Message-ID: <1644100059.97.0.383989396746.issue46654@roundup.psfhosted.org> New submission from Mike Auty : I've found open to have difficulty with a resolved pathlib path: Example code of: import pathlib path = "Z:\\test.py" with open(path) as fp: print("Stock open: works") data = fp.read() with open(pathlib.Path(path).resolve().as_uri()) as fp: print("Pathlib resolve open") data = fp.read() Results in: Z:\> python test.py Stock open: works Traceback (most recent call last): File "Z:\test.py", line 12, in with open(pathlib.Path(path).resolve().as_uri()) as fp: FileNotFoundError: [Errno 2] No such file or directory: "file://machine/share/test.py" Interestingly, I've found that open("file:////machine/share/test.py") succeeds, but this isn't what pathlib's resolve() produces. It appears as though file_open only supports hosts that are local, but will open UNC paths on windows with the additional slashes. This is quite confusing behaviour and it's not clear why file://host/share/file won't work, but file:////host/share/file does. 
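As the first traceback shows, the builtin open() never parses file:// URIs at all; the string is used verbatim as a filename. For local files, one workaround is to convert the URI back into an OS path before opening it. A minimal sketch (POSIX result shown; url2pathname applies platform-specific rules such as drive-letter handling on Windows):

```python
from urllib.parse import urlsplit
from urllib.request import url2pathname

# A file: URI is not a filename: convert it back to an OS path
# before handing it to open().
uri = "file:///tmp/example.txt"
path = url2pathname(urlsplit(uri).path)
print(path)  # /tmp/example.txt
```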
I imagine this is a long time issue and a decision has already been reached on why file_open doesn't support such URIs, but I couldn't find the answer anywhere, just issue 32442 which was resolved without clarifying the situation... ---------- messages: 412602 nosy: ikelos priority: normal severity: normal status: open title: file_open doesn't handle UNC paths produced by pathlib's resolve() (but can handle UNC paths with additional slashes) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 02:30:06 2022 From: report at bugs.python.org (Gregory Beauregard) Date: Sun, 06 Feb 2022 07:30:06 +0000 Subject: [New-bugs-announce] [issue46655] typing.TypeAlias is not in the list of allowed plain _SpecialForm typeforms Message-ID: <1644132606.43.0.5970530565.issue46655@roundup.psfhosted.org> New submission from Gregory Beauregard : typing.TypeAlias is allowed to be bare, but it's not listed in the list of types in typing._type_check that are allowed to be bare. This means it's possible to reach the wrong error `TypeError: Plain typing.TypeAlias is not valid as type argument` at runtime. Examples offhand: from typing import TypeAlias, get_type_hints class A: a: "TypeAlias" = int get_type_hints(A) from typing import Annotated, TypeAlias b: Annotated[TypeAlias, ""] = int There's likely more and/or more realistic ways to trigger the problem. Anything that triggers typing._type_check on typing.TypeAlias will give this error (TypeError: Plain typing.TypeAlias is not valid as type argument). I will fix this by adding TypeAlias to the list of typing special forms allowed to be bare/plain. I intend to move these to their own named var to reduce the chance of types not getting added in the future. 
---------- components: Library (Lib) messages: 412618 nosy: GBeauregard priority: normal severity: normal status: open title: typing.TypeAlias is not in the list of allowed plain _SpecialForm typeforms type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 04:54:35 2022 From: report at bugs.python.org (Mark Dickinson) Date: Sun, 06 Feb 2022 09:54:35 +0000 Subject: [New-bugs-announce] [issue46656] Compile fails if Py_NO_NAN is defined Message-ID: <1644141275.01.0.967715714071.issue46656@roundup.psfhosted.org> New submission from Mark Dickinson : The macro Py_NAN may or may not be defined: in particular, a platform that doesn't have NaNs is supposed to be able to define Py_NO_NAN in pyport.h to indicate that. But not all of our uses of `Py_NAN` are guarded by suitable #ifdef conditionals. As a result, compilation fails if Py_NAN is not defined. ---------- messages: 412620 nosy: mark.dickinson priority: normal severity: normal status: open title: Compile fails if Py_NO_NAN is defined type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 09:49:12 2022 From: report at bugs.python.org (Christian Heimes) Date: Sun, 06 Feb 2022 14:49:12 +0000 Subject: [New-bugs-announce] [issue46657] Add mimalloc memory allocator Message-ID: <1644158952.9.0.443530192167.issue46657@roundup.psfhosted.org> New submission from Christian Heimes : >From https://github.com/microsoft/mimalloc > mimalloc (pronounced "me-malloc") is a general purpose allocator with excellent performance characteristics. Initially developed by Daan Leijen for the run-time systems of the Koka and Lean languages. mimalloc has several interesting properties that make it useful for CPython. Amongst other it is fast, thread-safe, and NUMA-aware. 
It has built-in free lists with multi-sharding and allocation heaps. While Python's obmalloc requires the GIL to protect its data structures, mimalloc uses mostly thread-local and atomic instructions (compare-and-swap) for efficiency. Sam Gross' nogil relies on mimalloc's thread safety and uses first-class heaps for heap walking GC. mimalloc works on the majority of platforms and CPU architectures. However, it requires a compiler with C11 atomics support; CentOS 7's default GCC is slightly too old, so a more recent GCC from Developer Toolset is required. For 3.11 I plan to integrate mimalloc as an optional drop-in replacement for obmalloc. Users will be able to compile CPython without mimalloc or disable mimalloc with the PYTHONMALLOC env var. Since mimalloc will be optional in 3.11, Python won't depend on or expose any of the advanced features yet. The approach enables the community to test and give feedback with minimal risk of breakage. mimalloc sources will be vendored without any option to use system libraries. Python's mimalloc requires several non-standard compile-time flags. In the future Python may extend or modify mimalloc for heap walking and nogil, too. (This is a tracking bug until I find time to finish a PEP.) ---------- components: Interpreter Core messages: 412639 nosy: christian.heimes priority: normal severity: normal status: open title: Add mimalloc memory allocator type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 10:52:18 2022 From: report at bugs.python.org (David CARLIER) Date: Sun, 06 Feb 2022 15:52:18 +0000 Subject: [New-bugs-announce] [issue46658] shutil Lib enables sendfile on solaris for regular files Message-ID: <1644162738.48.0.935411238665.issue46658@roundup.psfhosted.org> New submission from David CARLIER : - sendfile on solaris supports copy between regular file descriptors as well.
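For context, shutil's fast-copy path on Linux already uses os.sendfile between two regular files; enabling the same path on Solaris follows the same pattern. A rough sketch (assumes a platform, such as Linux, where sendfile accepts a regular-file destination):

```python
import os
import tempfile

# Copy one regular file to another via sendfile, entirely in the kernel.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"hello sendfile")
src.close()
dst_path = src.name + ".copy"

with open(src.name, "rb") as fsrc, open(dst_path, "wb") as fdst:
    remaining = os.path.getsize(src.name)
    offset = 0
    while remaining:
        sent = os.sendfile(fdst.fileno(), fsrc.fileno(), offset, remaining)
        if sent == 0:
            break
        offset += sent
        remaining -= sent

with open(dst_path, "rb") as f:
    print(f.read())  # b'hello sendfile'
```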
---------- components: Library (Lib) messages: 412643 nosy: devnexen priority: normal pull_requests: 29338 severity: normal status: open title: shutil Lib enables sendfile on solaris for regular files versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 12:33:14 2022 From: report at bugs.python.org (STINNER Victor) Date: Sun, 06 Feb 2022 17:33:14 +0000 Subject: [New-bugs-announce] [issue46659] Deprecate locale.getdefaultlocale() function Message-ID: <1644168794.44.0.0658923518102.issue46659@roundup.psfhosted.org> New submission from STINNER Victor : The locale.getdefaultlocale() function only relies on environment variables. At Python startup, Python calls setlocale() to set the LC_CTYPE locale to the user's preferred locale. Since Python 3.7, if the LC_CTYPE locale is "C" or "POSIX", PEP 538 sets the LC_CTYPE locale to a UTF-8 variant if available, and PEP 540 ignores the locale and forces the usage of the UTF-8 encoding. The *effective* encoding used by Python is inconsistent with environment variables. Moreover, if setlocale() is called to set the LC_CTYPE locale to a locale different from the user locale, again, environment variables are inconsistent with the effective locale. In these cases, the locale.getdefaultlocale() result is not the expected locale and it can lead to mojibake and other issues. For these reasons, I propose to deprecate locale.getdefaultlocale(): setlocale(), getpreferredencoding() and getlocale() should be used instead.
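In code, the suggested replacements look like this (the output depends on the environment, so only the shapes are stable):

```python
import locale

# getpreferredencoding(False) reports the encoding Python actually uses
# (on recent Pythons it respects UTF-8 mode), while getlocale() reports
# the current LC_CTYPE setting as a (language, encoding) pair in which
# either element may be None.
enc = locale.getpreferredencoding(False)
current = locale.getlocale(locale.LC_CTYPE)
print(enc, current)
```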
For the background on these issues, see recent issue: * bpo-43552 * bpo-43557 ---------- components: Library (Lib) messages: 412647 nosy: vstinner priority: normal severity: normal status: open title: Deprecate locale.getdefaultlocale() function versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 12:51:35 2022 From: report at bugs.python.org (Sam Roberts) Date: Sun, 06 Feb 2022 17:51:35 +0000 Subject: [New-bugs-announce] [issue46660] datetime.datetime.fromtimestamp Message-ID: <1644169895.69.0.613310626242.issue46660@roundup.psfhosted.org> New submission from Sam Roberts : Python 3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 13:44:55) [MSC v.1928 64 bit (AMD64)] on win32 datetime.fromtimestamp() fails for naive-datetime values prior to the start of the epoch, but for some reason works properly for aware-datetime values prior to the start of the epoch. This is at least inconsistent, but seems like a bug. Negative timestamps for dates prior to the start of the epoch are used by yahoo finance and in the yfinance module. 
>>> import datetime >>> start = int(datetime.datetime(1962, 1, 31, tzinfo=datetime.timezone.utc).timestamp()) >>> start -249868800 >>> start = int(datetime.datetime(1962, 1, 31).timestamp()) Traceback (most recent call last): File "", line 1, in start = int(datetime.datetime(1962, 1, 31).timestamp()) OSError: [Errno 22] Invalid argument ---------- components: Library (Lib) messages: 412649 nosy: smrpy priority: normal severity: normal status: open title: datetime.datetime.fromtimestamp type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 13:43:45 2022 From: report at bugs.python.org (Guido van Rossum) Date: Sun, 06 Feb 2022 18:43:45 +0000 Subject: [New-bugs-announce] [issue46661] Duplicat deprecation warnings in docs for asyncio Message-ID: <1644173025.54.0.0182116442771.issue46661@roundup.psfhosted.org> New submission from Guido van Rossum : I found that several asyncio function descriptions, e.g. gather, have a duplicate deprecation notice like this: .. deprecated-removed:: 3.8 3.10 The ``loop`` parameter. This function has been implicitly getting the current running loop since 3.7. See :ref:`What's New in 3.10's Removed section ` for more information. For gather, that notice appears both before and after the example. For a few others, too. 
---------- assignee: docs at python components: Documentation, asyncio messages: 412653 nosy: asvetlov, docs at python, gvanrossum, yselivanov priority: normal severity: normal status: open title: Duplicat deprecation warnings in docs for asyncio versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 14:03:59 2022 From: report at bugs.python.org (Bo-wei Chen) Date: Sun, 06 Feb 2022 19:03:59 +0000 Subject: [New-bugs-announce] [issue46662] Lib/sqlite3/dbapi2.py: convert_timestamp function failed to correctly parse timestamp Message-ID: <1644174239.54.0.192057033337.issue46662@roundup.psfhosted.org> New submission from Bo-wei Chen : convert_timestamp function in Lib/sqlite3/dbapi2.py fails to parse a timestamp correctly, if it does not have microseconds but comes with timezone information, e.g. b"2022-02-01 16:09:35+00:00" Traceback: Traceback (most recent call last): File "/Users/user/Desktop/test.py", line 121, in convert_timestamp(b"2022-02-01 16:09:35+00:00") File "/Users/user/Desktop/test.py", line 112, in convert_timestamp hours, minutes, seconds = map(int, timepart_full[0].split(b":")) ValueError: invalid literal for int() with base 10: b'35+00' ---------- components: Library (Lib) messages: 412655 nosy: Rayologist priority: normal severity: normal status: open title: Lib/sqlite3/dbapi2.py: convert_timestamp function failed to correctly parse timestamp type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 15:31:25 2022 From: report at bugs.python.org (STINNER Victor) Date: Sun, 06 Feb 2022 20:31:25 +0000 Subject: [New-bugs-announce] [issue46663] test_math test_cmath test_complex fails on Fedora Rawhide buildbots Message-ID: <1644179485.62.0.212038403809.issue46663@roundup.psfhosted.org> New submission from 
STINNER Victor : PPC64LE Fedora Rawhide LTO 3.10: https://buildbot.python.org/all/#/builders/674/builds/543 3 tests failed: test_cmath test_complex test_math That's a GCC 12 regression: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104389 ---------- components: Tests messages: 412663 nosy: vstinner priority: normal severity: normal status: open title: test_math test_cmath test_complex fails on Fedora Rawhide buildbots versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 16:20:42 2022 From: report at bugs.python.org (ov2k) Date: Sun, 06 Feb 2022 21:20:42 +0000 Subject: [New-bugs-announce] [issue46664] PY_SSIZE_T_MAX is not an integer constant expression Message-ID: <1644182442.72.0.928372209775.issue46664@roundup.psfhosted.org> New submission from ov2k : PY_SSIZE_T_MAX is currently defined in Include/pyport.h as: #define PY_SSIZE_T_MAX ((Py_ssize_t)(((size_t)-1)>>1)) This is not an integer constant expression, which means it can't be used in preprocessor conditionals. For example: #if PY_SSIZE_T_MAX > UINT32_MAX will fail to compile. This was touched upon and ignored a long time ago: https://mail.python.org/archives/list/python-dev at python.org/thread/27X7LINL4UO7DAJE6J3IFQEZGUKAO4VL/ I think the best fix is to move the definition of PY_SSIZE_T_MAX (and PY_SSIZE_T_MIN) next to the definition of Py_ssize_t, and use the proper corresponding limit macro. If Py_ssize_t is a typedef for ssize_t, then PY_SSIZE_T_MAX should be SSIZE_MAX. If Py_ssize_t is a typedef for Py_intptr_t, then PY_SSIZE_T_MAX should be INTPTR_MAX. There's a minor complication because Py_ssize_t can be defined in PC/pyconfig.h. I'm not so familiar with the various Windows compilers, so I'm not sure what's best to do here. I think __int64 has a corresponding _I64_MAX macro, and int obviously has INT_MAX. 
---------- components: C API messages: 412670 nosy: ov2k priority: normal severity: normal status: open title: PY_SSIZE_T_MAX is not an integer constant expression type: compile error versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 16:43:55 2022 From: report at bugs.python.org (primexx) Date: Sun, 06 Feb 2022 21:43:55 +0000 Subject: [New-bugs-announce] [issue46665] IDLE Windows shortcuts by default Message-ID: <1644183835.83.0.808508877845.issue46665@roundup.psfhosted.org> New submission from primexx : In IDLE on Windows, there are certain keyboard shortcut idiosycracies in the default configuration. For example, redo is ctrl+shift+z (standard elsewhere) rather than ctrl+y (Microsoft's standard) de-indenting is ctrl+[ rather than shift+tab (also affects multi-line selected behaviour) Request: adjust the defaults based on OS platform and use windows style by default on windows If this is a dupe I apologize. I tried to search for an existing issue but wasn't able to find any with the keywords i can think of ---------- assignee: terry.reedy components: IDLE messages: 412671 nosy: primexx, terry.reedy priority: normal severity: normal status: open title: IDLE Windows shortcuts by default versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 16:52:47 2022 From: report at bugs.python.org (primexx) Date: Sun, 06 Feb 2022 21:52:47 +0000 Subject: [New-bugs-announce] [issue46666] IDLE indent guide Message-ID: <1644184367.6.0.417629376487.issue46666@roundup.psfhosted.org> New submission from primexx : Request: support indent guide for IDLE in editor window (i.e. 
not interactive shell) there appears to not be currently support for indent guides in idle one take is that idle is meant for small scripts and one should seek out a more complex IDE if it gets to the point of needing indent lines https://stackoverflow.com/q/66231105 i think that there would still be value in indent lines even in IDLE. it is a popular IDE for beginners and even in short scripts there can still be sufficiently large indented blocks, relatively speaking. it doesn't take that much code for indent guides to become helpful. ---------- assignee: terry.reedy components: IDLE messages: 412672 nosy: primexx, terry.reedy priority: normal severity: normal status: open title: IDLE indent guide versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 16:58:45 2022 From: report at bugs.python.org (Jonathan) Date: Sun, 06 Feb 2022 21:58:45 +0000 Subject: [New-bugs-announce] [issue46667] SequenceMatcher & autojunk - false negative Message-ID: <1644184725.54.0.128225003107.issue46667@roundup.psfhosted.org> New submission from Jonathan : The following two strings are identical other than the text "UNIQUESTRING". UNIQUESTRING is at the start of first and at the end of second. Running the below gives the following output: 0.99830220713073 0.99830220713073 0.023769100169779286 # ratio 0.99830220713073 0.99830220713073 0.023769100169779286 # ratio As you can see, Ratio is basically 0. Remove either of the UNIQUESTRING pieces and it goes up to 0.98 (correct)... Remove both and you get 1.0 (correct) ``` from difflib import SequenceMatcher first = """ UNIQUESTRING Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. 
It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum """ second = """ Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum UNIQUESTRING """ sm = SequenceMatcher(None, first, second, autojunk=False) print(sm.real_quick_ratio()) print(sm.quick_ratio()) print(sm.ratio()) print() sm2 = SequenceMatcher(None, second, first, autojunk=False) print(sm2.real_quick_ratio()) print(sm2.quick_ratio()) print(sm2.ratio()) ``` If I add `autojunk=False`, then I get a correct looking ratio (0.98...), however from my reading of the autojunk docs, UNIQUESTRING shouldn't be triggering it. Furthermore, looking in the code, as far as I can see autojunk is having no effect... Autojunk considers these items to be "popular" in that string: `{'n', 'p', 'a', 'h', 'e', 'u', 'I', 'r', 'k', 'g', 'y', 'm', 'c', 'd', 't', 'l', 'o', 's', ' ', 'i'}` If I remove UNIQUESTRING from `first`, this is the autojunk popular set: `{'c', 'p', 'a', 'u', 'r', 'm', 'k', 'g', 'I', 'd', ' ', 'o', 'h', 't', 'e', 'i', 'l', 's', 'y', 'n'}` They're identical! In both scenarios, `b2j` is also identical. I don't pretend to understand what the module is doing in any detail, but this certainly seems like a false positive/negative. 
Python 3.8.10 ---------- components: Library (Lib) messages: 412673 nosy: jonathan-lp priority: normal severity: normal status: open title: SequenceMatcher & autojunk - false negative type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 18:06:49 2022 From: report at bugs.python.org (STINNER Victor) Date: Sun, 06 Feb 2022 23:06:49 +0000 Subject: [New-bugs-announce] [issue46668] encodings: the "mbcs" alias doesn't work Message-ID: <1644188809.05.0.861398707442.issue46668@roundup.psfhosted.org> New submission from STINNER Victor : While working on bpo-46659, I found a bug in the encodings "mbcs" alias. Even if the function has 2 tests (in test_codecs and test_site), both tests missed the bug :-( I fixed the alias with this change: --- commit 04dd60e50cd3da48fd19cdab4c0e4cc600d6af30 Author: Victor Stinner Date: Sun Feb 6 21:50:09 2022 +0100 bpo-46659: Update the test on the mbcs codec alias (GH-31168) encodings registers the _alias_mbcs() codec search function before the search_function() codec search function. Previously, the _alias_mbcs() was never used. Fix the test_codecs.test_mbcs_alias() test: use the current ANSI code page, not a fake ANSI code page number. Remove the test_site.test_aliasing_mbcs() test: the alias is now implemented in the encodings module, no longer in the site module. --- But Eryk found two bugs: """ This was never true before. With 1252 as my ANSI code page, I checked codecs.lookup('cp1252') in 2.7, 3.4, 3.5, 3.6, 3.9, and 3.10, and none of them return the "mbcs" encoding. It's not equivalent, and not supposed to be. The implementation of "cp1252" should be cross-platform, regardless of whether we're on a Windows system with 1252 as the ANSI code page, as opposed to a Windows system with some other ANSI code page, or a Linux or macOS system. 
The differences are that "mbcs" maps every byte, whereas our code-page encodings do not map undefined bytes, and the "replace" handler of "mbcs" uses a best-fit mapping (e.g. "?" -> "a") when encoding text, instead of mapping all undefined characters to "?". """ and my new test fails if PYTHONUTF8=1 env var is set: """ This will fail if PYTHONUTF8 is set in the environment, because it overrides getpreferredencoding(False) and _get_locale_encoding(). """ The code for the "mbcs" alias changed at lot between Python 3.5 and 3.7. In Python 3.5, site module: --- def aliasmbcs(): """On Windows, some default encodings are not provided by Python, while they are always available as "mbcs" in each locale. Make them usable by aliasing to "mbcs" in such a case.""" if sys.platform == 'win32': import _bootlocale, codecs enc = _bootlocale.getpreferredencoding(False) if enc.startswith('cp'): # "cp***" ? try: codecs.lookup(enc) except LookupError: import encodings encodings._cache[enc] = encodings._unknown encodings.aliases.aliases[enc] = 'mbcs' --- In Python 3.6, encodings module: --- (...) codecs.register(search_function) if sys.platform == 'win32': def _alias_mbcs(encoding): try: import _bootlocale if encoding == _bootlocale.getpreferredencoding(False): import encodings.mbcs return encodings.mbcs.getregentry() except ImportError: # Imports may fail while we are shutting down pass codecs.register(_alias_mbcs) --- Python 3.7, encodings module: --- (...) codecs.register(search_function) if sys.platform == 'win32': def _alias_mbcs(encoding): try: import _winapi ansi_code_page = "cp%s" % _winapi.GetACP() if encoding == ansi_code_page: import encodings.mbcs return encodings.mbcs.getregentry() except ImportError: # Imports may fail while we are shutting down pass codecs.register(_alias_mbcs) --- The Python 3.6 and 3.7 "codecs.register(_alias_mbcs)" doesn't work because "search_function()" is tested before and it works for "cpXXX" encodings. 
My change changes the order in which codecs search functions are registered: first the MBCS alias, then the encodings search_function(). In Python 3.5, the alias was only created if Python didn't support the code page. ---------- components: Library (Lib) messages: 412678 nosy: vstinner priority: normal severity: normal status: open title: encodings: the "mbcs" alias doesn't work versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 18:12:51 2022 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 06 Feb 2022 23:12:51 +0000 Subject: [New-bugs-announce] [issue46669] Add types.Self Message-ID: <1644189171.85.0.34470950762.issue46669@roundup.psfhosted.org> New submission from Raymond Hettinger : Typeshed now has a nice self-describing type variable to annotate context managers:

Self = TypeVar('Self')

    def __enter__(self: Self) -> Self:
        return self

It would be nice to have that in the standard library types module as well. ---------- messages: 412682 nosy: Jelle Zijlstra, gvanrossum, rhettinger priority: normal severity: normal status: open title: Add types.Self versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 18:52:36 2022 From: report at bugs.python.org (STINNER Victor) Date: Sun, 06 Feb 2022 23:52:36 +0000 Subject: [New-bugs-announce] [issue46670] Build Python with -Wundef: don't use undefined macros Message-ID: <1644191556.12.0.192524677903.issue46670@roundup.psfhosted.org> New submission from STINNER Victor : Building Python with "gcc -Wundef" emits many warnings about usage of undefined macros. If a macro is not defined, it is equal to 0. The problem is that a macro can be undefined because of a missing #include, or because of a typo in its name, or because "#ifdef MACRO" should be used instead of "#if MACRO". It can hide bugs.
I plan to fix these warnings. ---------- components: Build messages: 412690 nosy: vstinner priority: normal severity: normal status: open title: Build Python with -Wundef: don't use undefined macros versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 6 19:22:26 2022 From: report at bugs.python.org (Nnarol) Date: Mon, 07 Feb 2022 00:22:26 +0000 Subject: [New-bugs-announce] [issue46671] "ValueError: min() arg is an empty sequence" is wrong (builtins.min/max) Message-ID: <1644193346.19.0.843293726099.issue46671@roundup.psfhosted.org> New submission from Nnarol : Incorrect error message by min_max(): "ValueError: min() arg is an empty sequence" when using the form min(iterable, *[, default=obj, key=func]) -> value and "iterable" is empty, like so: min([]) or: min(set()) "Sequence" is referred to, even though the function accepts any iterable. E.g. if a different type of collection, such as a set was provided by the user, "sequence" is still printed. I propose to rephrase the error to "iterable argument is empty", to reflect actual behavior and be in line with the function's documented interface. "arg" also does not name either any specific variable in C code or a parameter in user-facing documentation. Such an abbreviation is not used by the function's other error messages either, which simply write "argument" or "arguments" in free text, as appropriate in the given context. Unlike for the error "max expected at least 1 argument, got 0", the above scenario's test does not include matching of the error string. This is probably the reason this was not noticed before. It would be nice to make the test more specific. The issue seems trivial, but I am not familiar with the CPython project's policy on whether to treat messages of errors, printed on stderr as an interface, in which case, the change would be backwards-incompatible. Definitely a decision to be made. 
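For reference, the error path under discussion is only reached when no default= is supplied; a quick sketch of both behaviours (the exact message wording is what this report questions):

```python
# min() with an empty iterable raises ValueError unless a default is given.
print(min([], default=None))  # None

try:
    min(set())
except ValueError as exc:
    # Currently reads "min() arg is an empty sequence" even though the
    # argument here is a set, not a sequence.
    print(exc)
```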
---------- components: Library (Lib) messages: 412694 nosy: Nnarol priority: normal severity: normal status: open title: "ValueError: min() arg is an empty sequence" is wrong (builtins.min/max) type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 00:24:45 2022 From: report at bugs.python.org (arl) Date: Mon, 07 Feb 2022 05:24:45 +0000 Subject: [New-bugs-announce] [issue46672] NameError in asyncio.gather when passing a invalid type as an arg with multiple awaitables Message-ID: <1644211485.82.0.488303396987.issue46672@roundup.psfhosted.org> New submission from arl : It is possible to cause a NameError in asyncio.gather if the second presumed coroutine fails the internal type check. Sample code: import asyncio async def main(): coros = (asyncio.sleep(1), {1: 1}) await asyncio.gather(*coros) asyncio.run(main()) Exception in callback gather.<locals>._done_callback(<...>) at /usr/local/lib/python3.10/asyncio/tasks.py:714 handle: <Handle gather.<locals>._done_callback(<...>) at /usr/local/lib/python3.10/asyncio/tasks.py:714> Traceback (most recent call last): File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/usr/local/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete return future.result() File "<stdin>", line 4, in main File "/usr/local/lib/python3.10/asyncio/tasks.py", line 775, in gather if arg not in arg_to_fut: TypeError: unhashable type: 'dict' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.10/asyncio/tasks.py", line 718, in _done_callback if outer.done(): NameError: free variable 'outer' referenced before assignment in enclosing scope Traceback (most recent call last): File "<stdin>", line 5, in <module> File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/usr/local/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete return future.result() File "<stdin>", line 4, in main File "/usr/local/lib/python3.10/asyncio/tasks.py", line 775, in gather if arg not in arg_to_fut: TypeError: unhashable type: 'dict' ---------- components: asyncio messages: 412709 nosy: asvetlov, onerandomusername, yselivanov priority: normal severity: normal status: open title: NameError in asyncio.gather when passing a invalid type as an arg with multiple awaitables versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 04:21:42 2022 From: report at bugs.python.org (Olli Lupton) Date: Mon, 07 Feb 2022 09:21:42 +0000 Subject: [New-bugs-announce] [issue46673] Py_BuildValue tuple creation segfaults in python3.9..3.11 Message-ID: <1644225702.08.0.855610983498.issue46673@roundup.psfhosted.org> New submission from Olli Lupton : The following function, compiled and linked into a shared library, segfaults when called from Python: ``` #define PY_SSIZE_T_CLEAN #include <Python.h> extern "C" PyObject* my_func() { return Py_BuildValue("(O)", Py_None); } ``` called using ctypes: ``` from ctypes import CDLL h = CDLL('./libtest.so') h.my_func() ``` crashes with a stacktrace ``` Program received signal SIGSEGV, Segmentation fault.
_PyObject_GC_TRACK_impl (filename=0x7fffed7ab1b0 "src/Objects/tupleobject.c", lineno=36, op=(0x0,)) at src/Include/internal/pycore_object.h:43 (gdb) bt #0 _PyObject_GC_TRACK_impl (filename=0x7fffed7ab1b0 "src/Objects/tupleobject.c", lineno=36, op=(0x0,)) at src/Include/internal/pycore_object.h:43 #1 tuple_gc_track (op=0x7fffe5e42dc0) at src/Objects/tupleobject.c:36 #2 PyTuple_New (size=<optimized out>) at src/Objects/tupleobject.c:124 #3 PyTuple_New (size=size at entry=1) at src/Objects/tupleobject.c:100 #4 0x00007fffed7031eb in do_mktuple (p_format=0x7fffffffa8d0, p_va=0x7fffffffa8d8, endchar=<optimized out>, n=1, flags=1) at src/Python/modsupport.c:259 #5 0x00007fffed703358 in va_build_value (format=<optimized out>, va=va at entry=0x7fffffffa918, flags=flags at entry=1) at src/Python/modsupport.c:562 #6 0x00007fffed7036d9 in _Py_BuildValue_SizeT (format=<optimized out>) at src/Python/modsupport.c:530 #7 0x00007fffedae6126 in my_func () at test.cpp:4 #8 0x00007fffedaf1c9d in ffi_call_unix64 () from libffi.so.7 #9 0x00007fffedaf0623 in ffi_call_int () from libffi.so.7 ... ``` this is reproducible on RHEL7 (Python 3.9.7 built with GCC 11.2) and macOS (Python 3.9.10, 3.10.2 and 3.11.0a4 installed via MacPorts). It does not crash with Python 3.8, I tested on RHEL7 (Python 3.8.3 built with GCC 9.3.0) and macOS (Python 3.8.12 installed via MacPorts). This is meant to be a minimal example. It seems to be important that `Py_BuildValue` is returning a tuple, but the size of that tuple is not important. `"O"` and `Py_None` are also not important, it still crashes with `"i"` and `42`. The definition of `PY_SSIZE_T_CLEAN` also does not seem to be important; the only obvious difference it makes is whether I see `_Py_BuildValue_SizeT` or `Py_BuildValue` in the backtrace. This seems to be a bit of an unlikely bug, so apologies in advance if I have missed something obvious. I tried to be thorough, but I do not have a lot of experience working with the Python C API.
---------- components: C API, Extension Modules, ctypes messages: 412725 nosy: olupton priority: normal severity: normal status: open title: Py_BuildValue tuple creation segfaults in python3.9..3.11 type: crash versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 04:44:17 2022 From: report at bugs.python.org (Ramil Nugmanov) Date: Mon, 07 Feb 2022 09:44:17 +0000 Subject: [New-bugs-announce] [issue46674] Two if in a row in generators Message-ID: <1644227057.33.0.630372921709.issue46674@roundup.psfhosted.org> New submission from Ramil Nugmanov : Two `if`s in a row in generator expressions and comprehensions are treated without error. >>> [x for x in [1, 2, 3] if 1 if 1] [1, 2, 3] >>> [x for x in [1, 2, 3] if 0 if 1] [] I expected a syntax error. ---------- components: Parser messages: 412726 nosy: lys.nikolaou, pablogsal, stsouko priority: normal severity: normal status: open title: Two if in a row in generators type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 06:52:04 2022 From: report at bugs.python.org (Mark Shannon) Date: Mon, 07 Feb 2022 11:52:04 +0000 Subject: [New-bugs-announce] [issue46675] Allow more than 16 items in split-keys dicts and "virtual" object dicts. Message-ID: <1644234724.61.0.0969544850657.issue46675@roundup.psfhosted.org> New submission from Mark Shannon : https://bugs.python.org/issue45340 and https://github.com/python/cpython/pull/28802 allowed "virtual" object dicts (see faster-cpython/ideas#72 for full details). In order for this to work, we need to keep the insertion order on the values. The initial version (https://github.com/python/cpython/pull/28802) used a 64 bit value as a vector of 16 4-bit values, which allows only 16 items per values array.
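[Editor's note: the packed insertion-order vector described above can be sketched in Python. This is illustrative only; the real structure is a C `uint64_t`, and the helper names below are invented.]

```python
def pack_order(indices):
    """Pack up to 16 key indices (each < 16) into one 64-bit word,
    4 bits per slot, least-significant slot first."""
    assert len(indices) <= 16
    word = 0
    for pos, idx in enumerate(indices):
        word |= (idx & 0xF) << (4 * pos)
    return word

def unpack_order(word, n):
    """Read back the first n 4-bit slots."""
    return [(word >> (4 * pos)) & 0xF for pos in range(n)]

# The byte-array alternative discussed below would instead use one byte
# per entry (values < 254), with two extra bytes for size and capacity.
print(unpack_order(pack_order([3, 0, 2]), 3))  # [3, 0, 2]
```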
Stats gathered from the standard benchmark suite and informal evidence from elsewhere suggest that this causes a significant fraction (5% and upwards) of these dicts to be materialized due to exceeding the 16 item limit. An alternative design that would allow up to ~254 items in the values array is to make the insertion order vector an array of bytes. The capacity is 254 as we need a byte for size, and another for capacity. This will increase the size of the values a bit for sizes from 7 to 15, but save a lot of memory for sizes 17+, as keys could still be shared. Pros: No need to materialize dicts of size 16+, saving ~3/4 of the memory per dict and helping specialization. Cons: Extra memory write to store a value* 1 extra word for values of size 7 to 14, 2 extra for size 15. Some extra complexity. *In a hypothetical optimized JIT, the insertion order vector would be stored as a single write for several writes, so this would make no difference. ---------- assignee: Mark.Shannon messages: 412735 nosy: Mark.Shannon priority: normal severity: normal status: open title: Allow more than 16 items in split-keys dicts and "virtual" object dicts. versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 14:16:53 2022 From: report at bugs.python.org (Gregory Beauregard) Date: Mon, 07 Feb 2022 19:16:53 +0000 Subject: [New-bugs-announce] [issue46676] ParamSpec args and kwargs are not equal to themselves. Message-ID: <1644261413.4.0.310987915665.issue46676@roundup.psfhosted.org> New submission from Gregory Beauregard : from typing import ParamSpec P = ParamSpec("P") print(P.args == P.args) # False print(P.kwargs == P.kwargs) # False ParamSpec args and kwargs are not equal to themselves; this can cause problems for unit tests and type introspection w/ e.g. `get_type_hints`.
I will fix this by adding an __eq__ method like other places in typing.py ---------- components: Library (Lib) messages: 412781 nosy: GBeauregard, Jelle Zijlstra priority: normal severity: normal status: open title: ParamSpec args and kwargs are not equal to themselves. type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 14:27:59 2022 From: report at bugs.python.org (Jelle Zijlstra) Date: Mon, 07 Feb 2022 19:27:59 +0000 Subject: [New-bugs-announce] [issue46677] TypedDict docs are incomplete Message-ID: <1644262079.95.0.663870450502.issue46677@roundup.psfhosted.org> New submission from Jelle Zijlstra : https://docs.python.org/3.10/library/typing.html#typing.TypedDict It says: > To allow using this feature with older versions of Python that do not support PEP 526, TypedDict supports two additional equivalent syntactic forms But there is another reason to use the equivalent forms: if your keys aren't valid Python names. There's an example in typeshed that uses "in" (a keyword) as a TypedDict key, and I've seen others with keys that have hyphens in them. Also: - The docs mention attributes like `__required_keys__`, but don't clearly say what is in these attributes. We should document them explicitly with the standard syntax for attributes. - There is no mention of one TypedDict inheriting from another. 
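[Editor's note: the two undocumented points can be illustrated briefly. The `Response`/`Paginated` names are invented for this sketch; the functional syntax and inheritance behaviour are as documented in `typing`.]

```python
from typing import TypedDict

# The functional syntax is required when keys are keywords or contain
# hyphens, not just for pre-PEP-526 compatibility:
Response = TypedDict("Response", {"in": bool, "content-type": str})

# TypedDicts can also inherit from one another:
class Paginated(Response):
    page: int

# __required_keys__ holds a frozenset of all required key names:
print(sorted(Paginated.__required_keys__))  # ['content-type', 'in', 'page']
```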
---------- assignee: docs at python components: Documentation messages: 412784 nosy: 97littleleaf11, AlexWaygood, Jelle Zijlstra, docs at python, sobolevn priority: normal severity: normal status: open title: TypedDict docs are incomplete versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 15:30:36 2022 From: report at bugs.python.org (Jason Wilkes) Date: Mon, 07 Feb 2022 20:30:36 +0000 Subject: [New-bugs-announce] [issue46678] Invalid cross device link in Lib/test/support/import_helper.py Message-ID: <1644265836.23.0.316073606856.issue46678@roundup.psfhosted.org> New submission from Jason Wilkes : In Lib/test/support/import_helper.py, the function make_legacy_pyc makes a call to os.rename which can fail when the source and target live on different devices. This happens (for example) when PYTHONPYCACHEPREFIX is set to a directory on a different device from where temporary files are stored. Replacing os.rename with shutil.move fixes it. Will submit a PR. ---------- components: Tests messages: 412791 nosy: notarealdeveloper priority: normal severity: normal status: open title: Invalid cross device link in Lib/test/support/import_helper.py type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 7 16:22:25 2022 From: report at bugs.python.org (Jason Wilkes) Date: Mon, 07 Feb 2022 21:22:25 +0000 Subject: [New-bugs-announce] [issue46679] test.support.wait_process ignores timeout argument Message-ID: <1644268945.76.0.936309177383.issue46679@roundup.psfhosted.org> New submission from Jason Wilkes : The function wait_process in Lib/test/support/__init__.py ignores its timeout argument. This argument is useful, for example, in tests that need to determine whether a deadlock has been fixed (e.g., in PR-30310). 
Will submit a pull request to fix this. ---------- components: Tests messages: 412793 nosy: notarealdeveloper priority: normal severity: normal status: open title: test.support.wait_process ignores timeout argument type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 8 08:57:09 2022 From: report at bugs.python.org (renzo) Date: Tue, 08 Feb 2022 13:57:09 +0000 Subject: [New-bugs-announce] [issue46680] file calls itself Message-ID: <1644328629.41.0.0810880607536.issue46680@roundup.psfhosted.org> New submission from renzo : Good morning. I created a file called prova.py; inside I put 3 lines: `import test`, `a = 2`, `print(a)`. Questions: why does a file have to call itself, is it intended? How come it prints the value of a twice and does not enter the loop? ---------- components: Build messages: 412840 nosy: lallo priority: normal severity: normal status: open title: file calls itself versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 8 09:52:52 2022 From: report at bugs.python.org (Ilya Leoshkevich) Date: Tue, 08 Feb 2022 14:52:52 +0000 Subject: [New-bugs-announce] [issue46681] gzip.compress does not forward compresslevel to zlib.compress Message-ID: <1644331972.38.0.86974192008.issue46681@roundup.psfhosted.org> New submission from Ilya Leoshkevich : Started with: commit ea23e7820f02840368569db8082bd0ca4d59b62a Author: Ruben Vorderman Date: Thu Sep 2 17:02:59 2021 +0200 bpo-43613: Faster implementation of gzip.compress and gzip.decompress (GH-27941) Co-authored-by: Łukasz Langa The fix is quite trivial: --- a/Lib/gzip.py +++ b/Lib/gzip.py @@ -587,7 +587,8 @@ def compress(data, compresslevel=_COMPRESS_LEVEL_BEST, *, mtime=None): header = _create_simple_gzip_header(compresslevel, mtime) trailer = struct.pack("
_______________________________________ From report at bugs.python.org Tue Feb 8 12:39:09 2022 From: report at bugs.python.org (Paul Jaggi) Date: Tue, 08 Feb 2022 17:39:09 +0000 Subject: [New-bugs-announce] [issue46682] python 3.10 Py_Initialize/Py_Main std path no longer includes site-packages Message-ID: <1644341949.29.0.882181976417.issue46682@roundup.psfhosted.org> New submission from Paul Jaggi : Have the following simple program: #include <Python.h> #include <iostream> using namespace std; int main(int argc, char** argv) { wchar_t* args[argc]; for(int i = 0; i < argc; ++i) { args[i] = Py_DecodeLocale(argv[i], nullptr); } Py_Initialize(); const int exit_code = Py_Main(argc, args); cout << "Exit code: " << exit_code << endl; cout << "press any key to exit" << endl; cin.get(); return 0; } When you run this program and in the console: import sys sys.path for Python versions between 3.7-3.9, you get the installed python site-packages by default. For Python 3.10, you don't. This happens on both windows and Mac. Is this an intentional change? ---------- components: C API messages: 412848 nosy: pjaggi1 priority: normal severity: normal status: open title: python 3.10 Py_Initialize/Py_Main std path no longer includes site-packages type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 8 12:55:55 2022 From: report at bugs.python.org (German Salazar) Date: Tue, 08 Feb 2022 17:55:55 +0000 Subject: [New-bugs-announce] [issue46683] Python 3.6.15 source tarball installs 3.6.8? Message-ID: <1644342955.93.0.0263962479248.issue46683@roundup.psfhosted.org> New submission from German Salazar : wanted to install 3.6.15, but the source tarball installs 3.6.8 ---------- messages: 412849 nosy: salgerman priority: normal severity: normal status: open title: Python 3.6.15 source tarball installs 3.6.8?
versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 8 13:22:32 2022 From: report at bugs.python.org (Joshua Bronson) Date: Tue, 08 Feb 2022 18:22:32 +0000 Subject: [New-bugs-announce] [issue46684] Expose frozenset._hash classmethod Message-ID: <1644344552.11.0.546290047749.issue46684@roundup.psfhosted.org> New submission from Joshua Bronson : collections.abc.Set provides a _hash() method that includes the following in its docstring: """ Note that we don't define __hash__: not all sets are hashable. But if you define a hashable set type, its __hash__ should call this function. ... We match the algorithm used by the built-in frozenset type. """ Because Set._hash() is currently implemented in pure Python, users face having to make a potentially challenging decision between whether to trade off runtime efficiency vs. space efficiency: >>> hash(frozenset(x)) # Should I use this? >>> Set._hash(x) # Or this? The former requires O(n) memory to create the frozenset, merely to throw it immediately away, but on the other hand gets to use frozenset's __hash__ implementation, which is implemented in C. The latter requires only O(1) memory, but does not get the performance benefit of using the C implementation of this algorithm. Why not expose the C implementation via a frozenset._hash() classmethod, and change Set._hash() to merely call that? Then it would be much clearer that using Set._hash() is always the right answer. 
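[Editor's note: the trade-off described above can be sketched as follows. The `SizedIterable` wrapper is invented for this example; it supplies just the `__len__` and `__iter__` that `Set._hash` needs, without materializing a set.]

```python
from collections.abc import Set

class SizedIterable:
    """A sized iterable that never builds a frozenset in memory."""
    def __init__(self, n):
        self.n = n
    def __len__(self):
        return self.n
    def __iter__(self):
        return iter(range(self.n))

n = 1000
h_fast = hash(frozenset(range(n)))    # O(n) extra memory, C speed
h_lean = Set._hash(SizedIterable(n))  # O(1) extra memory, pure Python
# The two should agree if the pure-Python algorithm matches frozenset's,
# as the Set._hash docstring claims:
print(h_fast == h_lean)
```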
---------- messages: 412856 nosy: jab priority: normal severity: normal status: open title: Expose frozenset._hash classmethod _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 8 15:03:46 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Tue, 08 Feb 2022 20:03:46 +0000 Subject: [New-bugs-announce] [issue46685] Add additional tests for new features in `typing.py` Message-ID: <1644350626.56.0.225939389842.issue46685@roundup.psfhosted.org> New submission from Nikita Sobolev : New features (like `Self` type and `Never` type), in my opinion, require some extra testing. Things that were not covered: - Inheritance from `Self`, only `type(Self)` is covered: https://github.com/python/cpython/blob/c018d3037b5b62e6d48d5985d1a37b91762fbffb/Lib/test/test_typing.py#L193-L196 - Equality and non-equality for `Self` and `Never`. We should be sure that `NoReturn` is not equal to `Never`, but they are equal to themselves - `get_type_hints` with `Never` - `get_origin` with `Self` and `Never` types, it should return `None` for both cases - (not exactly related) I've also noticed that this line is not covered at all: https://github.com/python/cpython/blob/c018d3037b5b62e6d48d5985d1a37b91762fbffb/Lib/typing.py#L725 Maybe there are some other cases?
I will send a PR :) ---------- components: Tests messages: 412865 nosy: AlexWaygood, Jelle Zijlstra, gvanrossum, kj, sobolevn priority: normal severity: normal status: open title: Add additional tests for new features in `typing.py` type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 8 17:48:39 2022 From: report at bugs.python.org (Kesh Ikuma) Date: Tue, 08 Feb 2022 22:48:39 +0000 Subject: [New-bugs-announce] [issue46686] [venv / PC/launcher] issue with a space in the installed python path Message-ID: <1644360519.94.0.220947714115.issue46686@roundup.psfhosted.org> New submission from Kesh Ikuma : After months of proper operation, my per-user Python install started to error out when I attempt `python -m venv .venv` with "Error: Command '['C:\\Users\\kesh\\test\\.venv\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 101." Following the StackOverflow solution, I reinstalled Python for all users and it was working OK. I recently looked into it deeper and found the root issue in the function PC/launcher.c/run_child(). The path to the "...\Python\Python310\python.exe" contains a space, and the CreateProcessW() call on Line 811 is passing the path without quoting the path, causing the process creation to fail. I fixed my issue by using the Windows short path convention on my path env. variable, but there must be a more permanent fix possible. 
Here is the link to my question and self-answering to the problem: https://stackoverflow.com/questions/71039131/troubleshooting-the-windows-venv-error-101 ---------- components: Windows messages: 412874 nosy: hokiedsp, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: [venv / PC/launcher] issue with a space in the installed python path type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 8 18:45:05 2022 From: report at bugs.python.org (Steve Dower) Date: Tue, 08 Feb 2022 23:45:05 +0000 Subject: [New-bugs-announce] [issue46687] Update pyexpat for CVE-2021-45960 Message-ID: <1644363905.53.0.879497437531.issue46687@roundup.psfhosted.org> New submission from Steve Dower : libexpat recently fixed a security issue relating to some arithmetic: https://github.com/libexpat/libexpat/pull/534 I assume we should take this fix, either by updating our entire bundled copy or just backporting the patch. ---------- components: XML messages: 412880 nosy: steve.dower priority: normal severity: normal stage: needs patch status: open title: Update pyexpat for CVE-2021-45960 type: security versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 00:01:17 2022 From: report at bugs.python.org (Inada Naoki) Date: Wed, 09 Feb 2022 05:01:17 +0000 Subject: [New-bugs-announce] [issue46688] Add sys.is_interned Message-ID: <1644382877.05.0.348106052559.issue46688@roundup.psfhosted.org> New submission from Inada Naoki : deepfreeze.py needs to know the unicode object is interned. 
Ref: https://bugs.python.org/issue46430 ---------- components: Interpreter Core messages: 412890 nosy: methane priority: normal severity: normal status: open title: Add sys.is_interned versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 06:31:51 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Wed, 09 Feb 2022 11:31:51 +0000 Subject: [New-bugs-announce] [issue46689] `list(FunctionType(a.gi_code, {})(0))` crashes Python Message-ID: <1644406311.26.0.616936444493.issue46689@roundup.psfhosted.org> New submission from Nikita Sobolev : Here's the simplest reproduction: ``` from types import FunctionType a = (x for x in [1]) list(FunctionType(a.gi_code, {})(0)) ``` I understand that the code above does not make much sense, but I still think it should not crash. Demo: ``` $ PYTHONFAULTHANDLER=1 ./python.exe Python 3.11.0a5+ (heads/issue-46647-dirty:88819357a5, Feb 5 2022, 18:19:59) [Clang 11.0.0 (clang-1100.0.33.16)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from types import FunctionType >>> a = (x for x in [1]) >>> list(FunctionType(a.gi_code, {})(0)) Fatal Python error: Segmentation fault Current thread 0x0000000112ece5c0 (most recent call first): File "<stdin>", line 1 in <genexpr> File "<stdin>", line 1 in <module> [1] 22662 segmentation fault PYTHONFAULTHANDLER=1 ./python.exe ``` I can reproduce this on 3.9 and 3.10 as well.
---------- components: Interpreter Core messages: 412897 nosy: sobolevn priority: normal severity: normal status: open title: `list(FunctionType(a.gi_code, {})(0))` crashes Python type: crash versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 06:54:00 2022 From: report at bugs.python.org (James Marchant) Date: Wed, 09 Feb 2022 11:54:00 +0000 Subject: [New-bugs-announce] [issue46690] create_autospec() doesn't respect configure_mock style kwargs Message-ID: <1644407640.16.0.14753457907.issue46690@roundup.psfhosted.org> New submission from James Marchant : When using `create_autospec()` to create a mock object, it doesn't respect values passed through in the style described for passing mock configurations in the Mock constructor (https://docs.python.org/3.8/library/unittest.mock.html#unittest.mock.Mock.configure_mock). Instead, they seem to get discarded somewhere here (https://github.com/python/cpython/blob/77bab59c8a1f04922bb975cc4f11e5323d1d379d/Lib/unittest/mock.py#L2693-L2741). 
Here's a simple test case: ``` from unittest.mock import create_autospec class Test: def test_method(self): pass autospec_mock = create_autospec(Test, instance=True, **{"test_method.side_effect": ValueError}) # Should throw a ValueError exception autospec_mock.test_method() # Assign manually autospec_mock.test_method.side_effect = ValueError # Throws as expected autospec_mock.test_method() ``` ---------- components: Tests messages: 412898 nosy: marchant.jm priority: normal severity: normal status: open title: create_autospec() doesn't respect configure_mock style kwargs type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 07:16:49 2022 From: report at bugs.python.org (Duncan Macleod) Date: Wed, 09 Feb 2022 12:16:49 +0000 Subject: [New-bugs-announce] [issue46691] sysconfig.get_platform() raises ValueError on macOS if '-arch' is present in CFLAGS but doesn't refer to the '-arch' compiler flag Message-ID: <1644409009.26.0.313805274681.issue46691@roundup.psfhosted.org> New submission from Duncan Macleod : The `sysconfig.get_platform()` function raises a `ValueError` if the `cflags` config value (e.g. the `CFLAGS` used at build time) includes the text `-arch` where that doesn't refer to the compiler flag of the same name. Consider the following example build: $ sw_vers ProductName: macOS ProductVersion: 11.6.3 BuildVersion: 20G415 $ curl -LO https://www.python.org/ftp/python/3.10.2/Python-3.10.2.tar.xz $ tar -xf Python-3.10.2.tar.xz $ cd Python-3.10.2 $ export CFLAGS="-Itest-arch/fake" # just something that includes -arch $ ./configure --prefix=$(pwd)/test-arch $ make -j Here the build fails with the following error: ./python.exe -E -S -m sysconfig --generate-posix-vars ;\ if test $? 
-ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi Traceback (most recent call last): File "/Users/duncanmacleod/src/Python-3.10.2/Lib/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/duncanmacleod/src/Python-3.10.2/Lib/runpy.py", line 86, in _run_code exec(code, run_globals) File "/Users/duncanmacleod/src/Python-3.10.2/Lib/sysconfig.py", line 803, in _main() File "/Users/duncanmacleod/src/Python-3.10.2/Lib/sysconfig.py", line 791, in _main _generate_posix_vars() File "/Users/duncanmacleod/src/Python-3.10.2/Lib/sysconfig.py", line 457, in _generate_posix_vars pybuilddir = f'build/lib.{get_platform()}-{_PY_VERSION_SHORT}' File "/Users/duncanmacleod/src/Python-3.10.2/Lib/sysconfig.py", line 744, in get_platform osname, release, machine = _osx_support.get_platform_osx( File "/Users/duncanmacleod/src/Python-3.10.2/Lib/_osx_support.py", line 556, in get_platform_osx raise ValueError( ValueError: Don't know machine value for archs=() generate-posix-vars failed Sorry if this is a duplicate of an existing issue. 
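[Editor's note: the failure mode can be sketched with a toy reconstruction of the check. The exact stdlib code lives in `_osx_support.get_platform_osx`; the regex below is an illustration of the pattern, not a verbatim copy.]

```python
import re

cflags = "-Itest-arch/fake"  # no real -arch flag, but the substring is there

# Step 1: a plain substring test takes the architecture-parsing path:
takes_arch_path = "-arch" in cflags  # True

# Step 2: extracting "-arch <name>" pairs then finds nothing, since
# "test-arch/fake" has no whitespace after "-arch":
archs = tuple(re.findall(r"-arch\s+(\S+)", cflags))  # ()

# ...which is how get_platform_osx() ends up raising
# "ValueError: Don't know machine value for archs=()".
print(takes_arch_path, archs)
```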
---------- components: Library (Lib) messages: 412900 nosy: duncanmmacleod priority: normal severity: normal status: open title: sysconfig.get_platform() raises ValueError on macOS if '-arch' is present in CFLAGS but doesn't refer to the '-arch' compiler flag type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 07:23:52 2022 From: report at bugs.python.org (Ali Rn) Date: Wed, 09 Feb 2022 12:23:52 +0000 Subject: [New-bugs-announce] [issue46692] match case does not support regex Message-ID: <1644409432.19.0.449385513127.issue46692@roundup.psfhosted.org> Change by Ali Rn : ---------- components: Regular Expressions nosy: AliRn, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: match case does not support regex type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 12:18:08 2022 From: report at bugs.python.org (Bruce Eckel) Date: Wed, 09 Feb 2022 17:18:08 +0000 Subject: [New-bugs-announce] [issue46693] dataclass generated __str__ does not use overridden member __str__ Message-ID: <1644427088.92.0.407820887742.issue46693@roundup.psfhosted.org> New submission from Bruce Eckel : When creating a dataclass using members of other classes that have overridden their __str__ methods, the __str__ method synthesized by the dataclass ignores the overridden __str__ methods in its component members. Demonstrated in attached file. 
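[Editor's note: the attached file is not reproduced in this digest, but the behaviour can be sketched as below. What @dataclass actually synthesizes is `__repr__` (to which `str()` falls back), and that `__repr__` formats each field with `repr()`, not `str()`.]

```python
from dataclasses import dataclass

class Point:
    def __str__(self):       # overridden __str__ ...
        return "a point"
    # ... but __repr__ is left at the default object repr

@dataclass
class Holder:
    p: Point

h = Holder(Point())
print(str(h))  # Holder(p=<__main__.Point object at 0x...>), "a point" never appears
```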
---------- components: Interpreter Core files: DataClassStrBug.py messages: 412927 nosy: Bruce Eckel priority: normal severity: normal status: open title: dataclass generated __str__ does not use overridden member __str__ type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file50611/DataClassStrBug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 12:55:00 2022 From: report at bugs.python.org (Nonsense) Date: Wed, 09 Feb 2022 17:55:00 +0000 Subject: [New-bugs-announce] [issue46694] isdigit/isnumeric vs int() Message-ID: <1644429300.87.0.85125786773.issue46694@roundup.psfhosted.org> New submission from Nonsense : When typing in "?".isdigit() or "?".isnumeric() it gives True but when typing in int("?") it errors out: ValueError: invalid literal for int() with base 10: '?' ---------- components: Interpreter Core messages: 412934 nosy: smtplukas.tanner.test priority: normal severity: normal status: open title: isdigit/isnumeric vs int() type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 13:00:50 2022 From: report at bugs.python.org (mirabilos) Date: Wed, 09 Feb 2022 18:00:50 +0000 Subject: [New-bugs-announce] [issue46695] _io_TextIOWrapper_reconfigure_impl errors out too early Message-ID: <1644429650.43.0.90241756772.issue46695@roundup.psfhosted.org> New submission from mirabilos : The following is not possible: with open('/tmp/x.ssv', 'r', newline='\n') as f: f.readline() # imagine a library call boundary here if hasattr(f, 'reconfigure'): f.reconfigure(newline='\n') The .reconfigure() call would not do anything, but it errors out nevertheless, simply because it is called (from reading the _io_TextIOWrapper_reconfigure_impl code in Modules/_io/textio.c). 
Unfortunately, I *have* to call this in my library because I have to rely on "newline='\n'" behaviour (the hasattr avoids erroring out on binary streams), and the normal behaviour of erroring out if it's too late to change is also good for me. But the behaviour of erroring out if called at all when anything has already been read is a problem. This can easily be solved without breaking backwards compatibility, as the operation is a nop. To clarify: I wish for ... with open('/tmp/x.ssv', 'r', newline='\n') as f: f.readline() # imagine a library call boundary here if hasattr(f, 'reconfigure'): f.reconfigure(newline='\n') ... to work, but for ... with open('/tmp/x.ssv', 'r') as f: f.readline() # imagine a library call boundary here if hasattr(f, 'reconfigure'): f.reconfigure(newline='\n') ... (line 1 is the only changed one) to continue to error out. ---------- components: IO messages: 412935 nosy: mirabilos priority: normal severity: normal status: open title: _io_TextIOWrapper_reconfigure_impl errors out too early versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 16:37:28 2022 From: report at bugs.python.org (David CARLIER) Date: Wed, 09 Feb 2022 21:37:28 +0000 Subject: [New-bugs-announce] [issue46696] socketmodule add Linux SO_INCOMING_CPU constasn Message-ID: <1644442648.93.0.9885945576.issue46696@roundup.psfhosted.org> Change by David CARLIER : ---------- nosy: devnexen priority: normal severity: normal status: open title: socketmodule add Linux SO_INCOMING_CPU constasn _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 17:05:40 2022 From: report at bugs.python.org (hydroflask) Date: Wed, 09 Feb 2022 22:05:40 +0000 Subject: [New-bugs-announce] [issue46697] _ctypes_simple_instance returns inverted logic Message-ID: <1644444340.15.0.469712253239.issue46697@roundup.psfhosted.org> New
submission from hydroflask : `_ctypes_simple_instance` in _ctypes.c returns the opposite logic of what its documentation claims. It is supposed to return true when the argument (a type object) is a direct subclass of `Simple_Type` (`_SimpleCData` in Python code). However it returns false instead. No bugs have manifested from this because all of the call sites (`callproc.c::GetResult`, `callbacks.c::_CallPythonObject`, `_ctypes.c::PyCData_get`, `_ctypes.c::Simple_from_outparm`) invert the return value of this function. The last example, `_ctypes.c::Simple_from_outparm`, only calls `Simple_get_value()` when `_ctypes_simple_instance` returns false, which makes sense because otherwise the invocation of `_ctypes.c::Simple_from_outparm()` could trigger an assertion error. This is not just simply an issue of inverted logic because the logic isn't inverted in all cases. In `_ctypes_simple_instance` in the case when `PyCSimpleTypeObject_Check(type)` returns false, if this were supposed to be perfect inverted logic then the whole routine would return 1 (True) not 0. Fortunately, due to the way the code is structured, I don't think there is a case when `PyCSimpleTypeObject_Check(type)` returns false, so the incorrect case where it returns a constant 0 is effectively dead code. I have compiled a version of Python with the attached patch and run "make test" with no issues.
---------- components: ctypes files: _ctypes_simple_instance_inverted.patch keywords: patch messages: 412947 nosy: hydroflask priority: normal severity: normal status: open title: _ctypes_simple_instance returns inverted logic type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50612/_ctypes_simple_instance_inverted.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 17:29:19 2022 From: report at bugs.python.org (mirabilos) Date: Wed, 09 Feb 2022 22:29:19 +0000 Subject: [New-bugs-announce] [issue46700] wrong nomenclature (options vs. arguments) in argparse Message-ID: <1644445759.27.0.125748612313.issue46700@roundup.psfhosted.org> New submission from mirabilos : The argparse documentation and tutorial as well as its default option groups speak of "positional arguments" and "optional arguments". These are not used correctly, though. Elements of the argument vector (past item #0) are distinguished as options and (positional) arguments. Options are either flags (ls "-l", cmd "/c") or GNU long options ("--help"). They are usually optional ("[-h]") but may be mandatory (such as -o/-i/-p for cpio(1)). They may have option arguments (cpio(1) "-H format"). Arguments (also called positional arguments) may be mandatory ("file") or optional ("[file]"). They are also called operands (mostly in POSIX, not very common). The argparse documentation confused the hell out of me at first because I only saw argument documentation and could not find option documentation… ---------- components: Library (Lib) messages: 412952 nosy: mirabilos priority: normal severity: normal status: open title: wrong nomenclature (options vs.
arguments) in argparse versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 17:31:16 2022 From: report at bugs.python.org (mirabilos) Date: Wed, 09 Feb 2022 22:31:16 +0000 Subject: [New-bugs-announce] [issue46701] cannot use typographical quotation marks in bug description Message-ID: <1644445876.51.0.942967500006.issue46701@roundup.psfhosted.org> New submission from mirabilos : When trying to use typographical quotation marks (U+201C, U+201D) in the Comment field trying to submit a bug here, I get a red-background error message saying: Error: 'utf8' codec can't decode bytes in position 198-199: invalid continuation byte ---------- messages: 412953 nosy: mirabilos priority: normal severity: normal status: open title: cannot use typographical quotation marks in bug description _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 19:35:53 2022 From: report at bugs.python.org (Brandt Bucher) Date: Thu, 10 Feb 2022 00:35:53 +0000 Subject: [New-bugs-announce] [issue46702] Specialize UNPACK_SEQUENCE Message-ID: <1644453353.53.0.226399964433.issue46702@roundup.psfhosted.org> New submission from Brandt Bucher : UNPACK_SEQUENCE already has fast paths for tuples and lists, which make up (literally) 99% of unpackings in the benchmark suite. What's more, two-element tuples make up about two-thirds of all unpackings (though I actually suspect it's even higher, since the unpack_sequence benchmark is definitely skewing the results towards 10-element lists and tuples). These specializations are trivial to implement and result in a solid 1% improvement overall. 
---------- assignee: brandtbucher components: Interpreter Core messages: 412960 nosy: Mark.Shannon, brandtbucher priority: normal severity: normal stage: patch review status: open title: Specialize UNPACK_SEQUENCE type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 19:55:12 2022 From: report at bugs.python.org (jung mo sohn) Date: Thu, 10 Feb 2022 00:55:12 +0000 Subject: [New-bugs-announce] [issue46703] boolean operation issue (True == False == False) Message-ID: <1644454512.3.0.572499319459.issue46703@roundup.psfhosted.org> New submission from jung mo sohn : In Python 3.6.8, 3.7.3, 3.7.4, 3.7.5, 3.7.12, and 3.8.8, the output is False as shown below.

Python 3.7.5 (tags/v3.7.5:5c02a39a0b, Oct 15 2019, 00:11:34) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print(True == False == False)
False

However, in OpenJDK 1.8, the output is "true" as shown below.

public class Test {
    public static void main(String[] args) throws Exception {
        System.out.println(true == false == false);
    }
}

> java Test
true

In my opinion, "True" seems to be correct.
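For reference, Python's result follows from comparison chaining rather than from left-to-right grouping: `a == b == c` is evaluated as `(a == b) and (b == c)`, with `b` evaluated only once. A quick sketch of the equivalence:

```python
# Python chains comparisons: a == b == c means (a == b) and (b == c).
chained = (True == False == False)
assert chained == ((True == False) and (False == False))
assert chained is False  # (True == False) is False, so the chain is False

# Java has no chaining: true == false == false groups left-to-right as
# (true == false) == false, i.e. false == false, which is true there.
java_style = (True == False) == False
assert java_style is True
print(chained, java_style)
```

So both languages are self-consistent; they simply assign different meanings to the same expression.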
---------- components: Parser messages: 412961 nosy: jmsohn.x, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: boolean operation issue (True == False == False) type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 20:23:04 2022 From: report at bugs.python.org (anthony shaw) Date: Thu, 10 Feb 2022 01:23:04 +0000 Subject: [New-bugs-announce] [issue46704] Parser API not checking for null-terminator Message-ID: <1644456184.13.0.709670971668.issue46704@roundup.psfhosted.org> New submission from anthony shaw : In tokenizer.c, the translate_newlines() function does a `strlen()` on the input string, if the string is not null-terminated, e.g. '\xbe' this leads to a heap-buffer-overflow. The overflow is not exploitable, but if there are further changes to the parser, it might be worth using a strlen() alternative, like strnlen(). static char * translate_newlines(const char *s, int exec_input, struct tok_state *tok) { int skip_next_lf = 0; size_t needed_length = strlen(s) + 2, final_length; This leads to a heap-buffer-overflow detected by ASAN in a simple reproducible example, calling PyRun_StringFlags() from the LLVM fuzzer: fuzz_target(47084,0x11356f600) malloc: nano zone abandoned due to inability to preallocate reserved vm space. Dictionary: 35 entries INFO: Running with entropic power schedule (0xFF, 100). 
INFO: Seed: 3034498392 INFO: Loaded 1 modules (43 inline 8-bit counters): 43 [0x10a2b71e8, 0x10a2b7213), INFO: Loaded 1 PC tables (43 PCs): 43 [0x10a2b7218,0x10a2b74c8), INFO: 1 files found in ../Tests/fuzzing/corpus INFO: -max_len is not provided; libFuzzer will not generate inputs larger than 4096 bytes ================================================================= ==47084==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000003131 at pc 0x00010bd1d555 bp 0x7ff7b5da0590 sp 0x7ff7b5d9fd50 READ of size 2 at 0x602000003131 thread T0 #0 0x10bd1d554 in wrap_strlen+0x184 (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x15554) #1 0x10b12132b in translate_newlines+0x1b (Python:x86_64+0x5d32b) #2 0x10b12071c in _PyParser_ASTFromString+0x1ac (Python:x86_64+0x5c71c) #3 0x10b2f86de in PyRun_StringFlags+0x5e (Python:x86_64+0x2346de) #4 0x10a25ec6b in CompileCode(char const*) fuzz_target.cpp:54 #5 0x10a25f247 in LLVMFuzzerTestOneInput fuzz_target.cpp:68 #6 0x10a27aff3 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) FuzzerLoop.cpp:611 #7 0x10a27c3c4 in fuzzer::Fuzzer::ReadAndExecuteSeedCorpora(std::__1::vector >&) FuzzerLoop.cpp:804 #8 0x10a27c859 in fuzzer::Fuzzer::Loop(std::__1::vector >&) FuzzerLoop.cpp:857 #9 0x10a26aa5f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) FuzzerDriver.cpp:906 #10 0x10a298e42 in main FuzzerMain.cpp:20 #11 0x1134f44fd in start+0x1cd (dyld:x86_64+0x54fd) 0x602000003131 is located 0 bytes to the right of 1-byte region [0x602000003130,0x602000003131) allocated by thread T0 here: #0 0x10bd58a0d in wrap__Znam+0x7d (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x50a0d) #1 0x10a27af02 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) FuzzerLoop.cpp:596 #2 0x10a27c3c4 in fuzzer::Fuzzer::ReadAndExecuteSeedCorpora(std::__1::vector >&) FuzzerLoop.cpp:804 #3 0x10a27c859 in fuzzer::Fuzzer::Loop(std::__1::vector >&) FuzzerLoop.cpp:857 #4 0x10a26aa5f in 
fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) FuzzerDriver.cpp:906 #5 0x10a298e42 in main FuzzerMain.cpp:20 #6 0x1134f44fd in start+0x1cd (dyld:x86_64+0x54fd) SUMMARY: AddressSanitizer: heap-buffer-overflow (libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x15554) in wrap_strlen+0x184 Shadow bytes around the buggy address: 0x1c04000005d0: fa fa 02 fa fa fa 02 fa fa fa 02 fa fa fa 02 fa 0x1c04000005e0: fa fa 02 fa fa fa 02 fa fa fa 02 fa fa fa 02 fa 0x1c04000005f0: fa fa 03 fa fa fa 03 fa fa fa 03 fa fa fa 03 fa 0x1c0400000600: fa fa 01 fa fa fa 01 fa fa fa 01 fa fa fa 01 fa 0x1c0400000610: fa fa 00 00 fa fa 00 fa fa fa 00 fa fa fa 00 00 =>0x1c0400000620: fa fa 00 fa fa fa[01]fa fa fa fd fa fa fa fd fd 0x1c0400000630: fa fa fd fa fa fa fd fa fa fa 00 fa fa fa 04 fa 0x1c0400000640: fa fa 00 00 fa fa 01 fa fa fa 01 fa fa fa 01 fa 0x1c0400000650: fa fa fd fa fa fa fd fa fa fa fd fd fa fa 01 fa 0x1c0400000660: fa fa 00 00 fa fa 01 fa fa fa fd fa fa fa fd fa 0x1c0400000670: fa fa 01 fa fa fa 06 fa fa fa 00 00 fa fa 06 fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==47084==ABORTING MS: 0 ; base unit: 0000000000000000000000000000000000000000 artifact_prefix='./'; Test unit written to ./crash-da39a3ee5e6b4b0d3255bfef95601890afd80709 Base64: zsh: abort ./fuzz_target -dict=../Tests/fuzzing/python.dict -only_ascii=1 ---------- components: Parser messages: 412965 nosy: anthonypjshaw, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Parser API not checking for 
null-terminator versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 20:24:46 2022 From: report at bugs.python.org (Jack Nguyen) Date: Thu, 10 Feb 2022 01:24:46 +0000 Subject: [New-bugs-announce] [issue46705] Memory optimization for set.issubset Message-ID: <1644456286.75.0.570469463639.issue46705@roundup.psfhosted.org> New submission from Jack Nguyen : I noticed that the set.issubset cpython implementation casts its iterable argument to a set. In some cases, casting the whole iterable to a set is unnecessary (see https://bugs.python.org/issue18032). Although the latter suggestion is to perform early termination, my suggestion is to use the intersection instead.

# PyAnySet_Check coming from the cpython source code.
def issubset(self, other):
    # Intersection suggestion:
    if not PyAnySet_Check(other):
        return len(self.intersection(other)) == len(self)
    # Usual implementation for sets.
    else:
        return ...

The main advantage that this implementation has is its memory performance, using only O(min(len(self), len(other))) memory, since it never stores elements it does not need. I'm assuming that set construction costs O(n) set.__contains__ calls. This implementation uses len(other) calls to self.__contains__ and tmp.__contains__, where tmp = set(other). The current implementation uses len(self) + len(other) calls to tmp.__contains__. Thus, I suspect the current implementation only has a chance at running noticeably faster when len(self) << len(other), where it performs fewer calls to set.__contains__. This is, however, also where the proposed implementation has significantly superior memory performance.
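The proposal above can be tried out in pure Python. The helper below is hypothetical (the real change would live in the C implementation of `set.issubset`), but it shows the intersection strategy working on arbitrary iterables:

```python
def issubset_via_intersection(s, other):
    # Fast path for real sets: reuse the existing subset logic.
    if isinstance(other, (set, frozenset)):
        return s <= other
    # For arbitrary iterables, intersection() only retains elements that
    # are already in s, so peak extra memory is O(min(len(s), len(other)))
    # instead of materializing set(other) up front.
    return len(s.intersection(other)) == len(s)

assert issubset_via_intersection({1, 2}, iter([3, 2, 1]))
assert not issubset_via_intersection({1, 4}, iter([1, 2, 3]))
assert issubset_via_intersection(set(), iter([]))
print("intersection-based issubset behaves as expected")
```

Duplicates in the iterable are handled naturally, since intersection with a set deduplicates as it goes.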
---------- components: Interpreter Core messages: 412966 nosy: panda1200 priority: normal severity: normal status: open title: Memory optimization for set.issubset type: performance versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 20:30:50 2022 From: report at bugs.python.org (claude-alexandre cabana) Date: Thu, 10 Feb 2022 01:30:50 +0000 Subject: [New-bugs-announce] [issue46706] AxelRacer Message-ID: <1644456650.45.0.239619008803.issue46706@roundup.psfhosted.org> Change by claude-alexandre cabana : ---------- components: Build nosy: claudealexcabana priority: normal severity: normal status: open title: AxelRacer type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 9 22:00:43 2022 From: report at bugs.python.org (anthony shaw) Date: Thu, 10 Feb 2022 03:00:43 +0000 Subject: [New-bugs-announce] [issue46707] Parser hanging on stacked { tokens Message-ID: <1644462043.51.0.510568323188.issue46707@roundup.psfhosted.org> New submission from anthony shaw : Providing an (invalid) input to the parser causes an exponentially-slow DoS to the Python executable in 3.10. e.g. 
python3.10 -c "{{{{{{{{{{{{{{{{{{{{{:" takes ~2 seconds python3.10 -c "{{{{{{{{{{{{{{{{{{{{{{{{:" takes ~22 seconds Tested this all the way up to d{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{```{{{{{{{ef f():y which took over an hour ---------- components: Parser keywords: 3.10regression messages: 412972 nosy: anthonypjshaw, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Parser hanging on stacked { tokens type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 06:53:56 2022 From: report at bugs.python.org (STINNER Victor) Date: Thu, 10 Feb 2022 11:53:56 +0000 Subject: [New-bugs-announce] [issue46708] test_asyncio: test_sock_client_fail() changes asyncio.events._event_loop_policy Message-ID: <1644494036.19.0.505794618548.issue46708@roundup.psfhosted.org> New submission from STINNER Victor : Seen on s390x RHEL7 Refleaks 3.x: https://buildbot.python.org/all/#/builders/129/builds/300 == Tests result: FAILURE == (...) 3 tests failed: test_asyncio test_importlib test_unittest (...) 0:36:44 load avg: 0.50 Re-running test_asyncio in verbose mode (matching: test_sock_client_fail) beginning 6 repetitions 123456 test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.062s OK test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... 
ok ---------------------------------------------------------------------- Ran 3 tests in 0.061s OK test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.055s OK test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.053s OK test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.060s OK test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_sock_client_fail (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.060s OK ...... 
Warning -- asyncio.events._event_loop_policy was modified by test_asyncio Warning -- Before: None Warning -- After: ---------- components: Tests, asyncio messages: 412991 nosy: asvetlov, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: test_sock_client_fail() changes asyncio.events._event_loop_policy versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 07:04:16 2022 From: report at bugs.python.org (STINNER Victor) Date: Thu, 10 Feb 2022 12:04:16 +0000 Subject: [New-bugs-announce] [issue46709] test_urllib: testInterruptCaught() has a race condition and fails randomly Message-ID: <1644494656.37.0.2113700788.issue46709@roundup.psfhosted.org> New submission from STINNER Victor : test_urllib failed and then passed when re-run on s390x RHEL7 Refleaks 3.x: https://buildbot.python.org/all/#builders/129/builds/300 I can reproduce the issue on my Linux laptop: $ ./python -m test -m unittest.test.test_break.TestBreakDefaultIntHandler.testInterruptCaught test_unittest -F 0:00:00 load avg: 1.52 Run tests sequentially 0:00:00 load avg: 1.52 [ 1] test_unittest 0:00:00 load avg: 1.52 [ 2] test_unittest 0:00:00 load avg: 1.52 [ 3] test_unittest 0:00:00 load avg: 1.52 [ 4] test_unittest 0:00:00 load avg: 1.52 [ 5] test_unittest 0:00:01 load avg: 1.52 [ 6] test_unittest 0:00:01 load avg: 1.52 [ 7] test_unittest 0:00:01 load avg: 1.52 [ 8] test_unittest test test_unittest failed -- Traceback (most recent call last): File "/home/vstinner/python/main/Lib/unittest/test/test_break.py", line 66, in testInterruptCaught test(result) ^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/unittest/test/test_break.py", line 63, in test self.assertTrue(result.shouldStop) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: False is not true test_unittest failed (1 failure) == Tests result: FAILURE == 7 tests OK. 
1 test failed: test_unittest Total duration: 1.7 sec Tests result: FAILURE ---------- components: Tests messages: 412993 nosy: vstinner priority: normal severity: normal status: open title: test_urllib: testInterruptCaught() has a race condition and fails randomly versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 08:00:03 2022 From: report at bugs.python.org (Marcus Fillipe Groetares Rocha Siqueira) Date: Thu, 10 Feb 2022 13:00:03 +0000 Subject: [New-bugs-announce] [issue46710] Install launcher for all users on the domain Message-ID: <1644498003.71.0.314795481937.issue46710@roundup.psfhosted.org> New submission from Marcus Fillipe Groetares Rocha Siqueira : In Python 3.9.6 (64 bits) Windows Installer, the first page show a checkbox for "install launcher for all users (recommended)", but i'd like to now why the box is not currently allowed to check. In "customize installation" option, exist other "Install for all users" options and its not working also. I tried to install with my local admin account and with my AD Admin account as well. can somebody help me please? 
---------- components: Windows files: Sem t?tulo.jpg messages: 412997 nosy: marcus.siqueira, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Install launcher for all users on the domain type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50617/Sem t?tulo.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 08:43:47 2022 From: report at bugs.python.org (STINNER Victor) Date: Thu, 10 Feb 2022 13:43:47 +0000 Subject: [New-bugs-announce] [issue46711] test_logging: test_post_fork_child_no_deadlock() failed with timeout on AMD64 Arch Linux Asan Debug 3.10 Message-ID: <1644500627.85.0.365242228935.issue46711@roundup.psfhosted.org> New submission from STINNER Victor : The test calls support.wait_process() which uses SHORT_TIMEOUT. wait_process() should use LONG_TIMEOUT, or the ASAN buildbot should increase its timeout (regrtest --timeout parameter). IMO using LONG_TIMEOUT is fine: it's ok if the test takes 2 minutes instead of 1 second, it's only important that it completes :-) The test should not measure the *performance* of the code, only if the code is valid. When tests are run in parallel, the buildbot system load can be very high. In this case, the system load was 1.70: 0:35:49 load avg: 1.70 [255/421/1] test_logging failed (1 failure) (1 min 18 sec) AMD64 Arch Linux Asan Debug 3.10: https://buildbot.python.org/all/#/builders/621/builds/466 ====================================================================== FAIL: test_post_fork_child_no_deadlock (test.test_logging.HandlerTest) Ensure child logging locks are not held; bpo-6721 & bpo-36533. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/buildbot/buildarea/3.10.pablogsal-arch-x86_64.asan_debug/build/Lib/test/test_logging.py", line 750, in test_post_fork_child_no_deadlock support.wait_process(pid, exitcode=0) File "/buildbot/buildarea/3.10.pablogsal-arch-x86_64.asan_debug/build/Lib/test/support/__init__.py", line 1971, in wait_process raise AssertionError(f"process {pid} is still running " AssertionError: process 406366 is still running after 52.5 seconds ---------- components: Tests messages: 413000 nosy: vstinner priority: normal severity: normal status: open title: test_logging: test_post_fork_child_no_deadlock() failed with timeout on AMD64 Arch Linux Asan Debug 3.10 versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 09:18:03 2022 From: report at bugs.python.org (Kumar Aditya) Date: Thu, 10 Feb 2022 14:18:03 +0000 Subject: [New-bugs-announce] [issue46712] Share global string identifiers in deepfreeze Message-ID: <1644502683.1.0.673362326219.issue46712@roundup.psfhosted.org> New submission from Kumar Aditya : Since bpo-46541, the global strings are statically allocated so they can now be referenced by deep-frozen modules just like any other singleton. Sharing identifiers with deepfreeze will reduce the duplicated strings hence it would save space. 
See https://github.com/faster-cpython/ideas/issues/218 See https://github.com/faster-cpython/ideas/issues/230 ---------- messages: 413003 nosy: gvanrossum, kumaraditya303 priority: normal severity: normal status: open title: Share global string identifiers in deepfreeze versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 14:19:45 2022 From: report at bugs.python.org (Joshua Bronson) Date: Thu, 10 Feb 2022 19:19:45 +0000 Subject: [New-bugs-announce] [issue46713] Provide a C implementation of collections.abc.KeysView and friends Message-ID: <1644520785.16.0.447466630814.issue46713@roundup.psfhosted.org> New submission from Joshua Bronson : As suggested by @rhettinger in https://bugs.python.org/msg409443, I'm creating a feature request for C implementations of collections.abc.KeysView, ValuesView, and ItemsView. Because these do not currently benefit from C speedups, they're a lot slower than their dict_keys, dict_values, and dict_items counterparts. As a result, libraries that implement custom Mapping types that are backed by dicts are incentivized to override the implementations of keys(), values(), and items() they inherit from collections.abc.Mapping to instead return their backing dicts' mapping views, causing a potential abstraction leak. An example can be found in https://github.com/jab/bidict, which implements bidirectional mapping types that wrap a forward and an inverse dict which are kept in sync with one another. 
>>> from bidict import *
>>> bi = bidict({1: 'one', 2: 'two'})
>>> bi.items()  # Overridden for performance:
dict_items([(1, 'one'), (2, 'two')])

Ditto for OrderedBidict:

>>> OrderedBidict(bi).keys()
_OrderedBidictItemsView(OrderedBidict([(1, 'one'), (2, 'two')]))

(The _OrderedBidictItemsView is a custom view whose __iter__ uses the implementation inherited by its collections.abc.ItemsView base class so that the correct order is respected, but proxies other method calls through to the backing dict's dict_items object: https://github.com/jab/bidict/blob/2ab42a/bidict/_orderedbidict.py#L90-L150)

Here is a microbenchmark of calling __eq__ on an _OrderedBidictItemsView vs. a collections.abc.ItemsView, to estimate the performance impact (using Python 3.10):

❯ set setup '
from collections.abc import ItemsView
from bidict import OrderedBidict
d = dict(zip(range(9999), range(9999)))
ob = OrderedBidict(d)'
❯ python -m pyperf timeit -s $setup 'ob.items() == d.items()' -o 1.json
❯ python -m pyperf timeit -s $setup 'ItemsView(ob) == d.items()' -o 2.json
❯ pyperf compare_to 2.json 1.json
Mean +- std dev: [2] 4.21 ms +- 1.10 ms -> [1] 168 us +- 6 us: 25.13x faster

This demonstrates a potentially significant speedup. Similar microbenchmarks for ItemsView vs. dict_items, as well as KeysView vs. both dict_keys and _OrderedBidictKeysView, also indicate similarly significant potential. Note that the performance benefits of this may propagate to other code as well. For example, bidicts' __eq__ methods are implemented in terms of their itemsviews (see https://github.com/jab/bidict/blob/2ab42a/bidict/_base.py#L285-L286), so speeding up bidict.items().__eq__ speeds up bidict.__eq__ commensurately.
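As a rough illustration of the pure-Python views being discussed, `collections.abc.ItemsView` can wrap any mapping and compares equal to the C-accelerated `dict_items` view; a minimal sketch using only the standard library:

```python
from collections.abc import ItemsView

d = {1: 'one', 2: 'two'}
view = ItemsView(d)  # pure-Python view, no C acceleration today

# It compares equal to the built-in dict_items view (dict_items is
# registered as a virtual subclass of ItemsView)...
assert view == d.items()

# ...but its __contains__ and __iter__ run Python-level code that goes
# through Mapping.__getitem__ lookups, which is where the overhead
# measured above comes from.
assert (1, 'one') in view
assert sorted(view) == [(1, 'one'), (2, 'two')]
print(sorted(view))
```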
---------- messages: 413020 nosy: jab priority: normal severity: normal status: open title: Provide a C implementation of collections.abc.KeysView and friends _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 15:37:34 2022 From: report at bugs.python.org (richd) Date: Thu, 10 Feb 2022 20:37:34 +0000 Subject: [New-bugs-announce] [issue46714] Python 3.10 - Users (except from the one who installed) not able to see python in add remove programs. Message-ID: <1644525454.7.0.0618238851907.issue46714@roundup.psfhosted.org> New submission from richd : Experiencing the same issue as reported in https://bugs.python.org/issue31011 When Python is deployed using an enterprise solution, Python is not displayed in Programs and Features. Examples: 1. Using PSExec as System to install Python 3.10.x, logged in users will not see Python installed. The Python launcher does appear however. 2. Deployment of Python through SCCM has the same behavior, where logged in users do not see the installed Python version in Programs and Features. ---------- components: Windows messages: 413022 nosy: paul.moore, richd, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python 3.10 - Users (except from the one who installed) not able to see python in add remove programs. 
type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 16:24:35 2022 From: report at bugs.python.org (John Snow) Date: Thu, 10 Feb 2022 21:24:35 +0000 Subject: [New-bugs-announce] [issue46715] asyncio.create_unix_server has an off-by-one error concerning the backlog parameter Message-ID: <1644528275.99.0.104636658968.issue46715@roundup.psfhosted.org> New submission from John Snow : Hi, asyncio.create_unix_server appears to treat the "backlog" parameter as meaning that 0 allows *no connection to ever be pending*, which (at the very least for UNIX sockets on my machine) is untrue. Consider a (non-asyncio) server:

```python
import os, socket, sys, time

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind('test.sock')
sock.listen(0)  # a backlog of zero (listen() takes the backlog positionally)
while True:
    print('.', end='', file=sys.stderr)
    time.sleep(1)
```

This server never calls accept(), and uses a backlog of zero. However, a client can actually still successfully call connect against such a server:

```python
import os, socket, time

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setblocking(False)
sock.connect('test.sock')
print("Connected!")
```

When run against the server example, the first invocation of this client will actually connect successfully (surprising, but that's how the C syscalls work too, so... alright), but the second invocation of this client will raise BlockingIOError (EAGAIN). Further, if we amend the first server example to actually call accept(), it will succeed when the first client connects -- demonstrating that the actual total queue length here was effectively 1, not 0 (i.e. there's always room for at least one connection to be considered, and the backlog counts everybody else).
However, in asyncio.BaseSelectorEventLoop._accept_connection(...), the code uses `for _ in range(backlog)` to determine the maximum number of accept calls to make. When backlog is set to zero, this means we will *never* call accept, even when there are pending connections. Note that when backlog=1, this actually allows for *two* pending connections before clients are rejected, but this loop will only fire once. This behavior is surprising, because backlog==0 means we'll accept no clients, but backlog==1 means we will allow for two to enqueue before accepting both. There is seemingly no way with asyncio to actually specify "Exactly one pending connection". I think this loop should be amended to reflect the actual truth of the backlog parameter, and it should iterate over `backlog + 1`. This does necessitate a change to `Lib/test/test_asyncio/test_selector_events.py` which believes that backlog=100 means that accept() should be called 100 times (instead of 101.) A (very) simple fix is attached here; if it seems sound, I can spin a real PR on GitHub. 
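The reported off-by-one can be reproduced outside asyncio with plain sockets, condensing the two scripts above into one process. This is behaviour observed on Linux; other platforms may round the listen backlog differently:

```python
import os
import socket
import tempfile

# Bind a UNIX socket in a fresh temp directory and listen with backlog=0.
path = os.path.join(tempfile.mkdtemp(), "test.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(0)  # "zero backlog" still leaves room for one pending connection

results = []

# First client: the one implicit pending slot -- this connect succeeds.
first = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
first.setblocking(False)
first.connect(path)
results.append("first ok")

# Second client: the queue is already full, so a non-blocking connect
# fails with EAGAIN (raised as BlockingIOError).
second = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
second.setblocking(False)
try:
    second.connect(path)
    results.append("second ok")
except BlockingIOError:
    results.append("second rejected")

print(results)
```

This matches the report: the effective queue length is backlog + 1, which is why iterating `range(backlog)` in the accept loop can strand one ready connection.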
---------- components: asyncio files: issue.patch keywords: patch messages: 413025 nosy: asvetlov, jnsnow, yselivanov priority: normal severity: normal status: open title: asyncio.create_unix_server has an off-by-one error concerning the backlog parameter type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50618/issue.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 10 18:09:17 2022 From: report at bugs.python.org (STINNER Victor) Date: Thu, 10 Feb 2022 23:09:17 +0000 Subject: [New-bugs-announce] [issue46716] regrtest didn't respect the timeout on AMD64 Windows11 3.x Message-ID: <1644534557.28.0.000333882929265.issue46716@roundup.psfhosted.org> New submission from STINNER Victor : regrtest was run with --timeout 900 on AMD64 Windows11 3.x: the timeout is confirmed by the "(timeout: 15 min, worker timeout: 20 min)" log. But then test_subprocess was only stopped after "4 hour 55 min". If the regrtest main process is able to display an update twice per minute (every 30 sec), it should be able to stop the test worker process (test_subprocess) after 20 minutes. How is it possible that the process took so long? There are multiple guards:

* (1) in the worker process: _runtest() calls faulthandler.dump_traceback_later(ns.timeout, exit=True)
* (2) libregrtest/runtest_mp.py: TestWorkerProcess._run_process() thread uses popen.communicate(timeout=self.timeout)
* (3) faulthandler.dump_traceback_later(MAIN_PROCESS_TIMEOUT, exit=True): kill the parent process if it is blocked for longer than 5 minutes

Guards (1) and (2) didn't work. Maybe the parent process should implement a 4th guard using the 20 minute timeout: almost 5 hours is way longer than 20 minutes!
C:\buildbot\3.x.kloth-win11\build>"C:\buildbot\3.x.kloth-win11\build\PCbuild\amd64\python_d.exe" -u -Wd -E -bb -m test -uall -rwW --slowest --timeout 1200 --fail-env-changed -j1 -j2 --junit-xml test-results.xml -j40 --timeout 900
== CPython 3.11.0a5+ (main, Feb 10 2022, 04:03:24) [MSC v.1930 64 bit (AMD64)]
== Windows-10-10.0.22000-SP0 little-endian
== cwd: C:\buildbot\3.x.kloth-win11\build\build\test_python_5732?
== CPU count: 32
== encodings: locale=cp1252, FS=utf-8
Using random seed 6320493
0:00:00 Run tests in parallel using 40 child processes (timeout: 15 min, worker timeout: 20 min)
(...)
0:03:13 load avg: 0.76 [431/432] test_multiprocessing_spawn passed (3 min 13 sec) -- running: test_subprocess (3 min 11 sec)
0:03:43 load avg: 0.46 running: test_subprocess (3 min 41 sec)
(...)
4:53:17 load avg: 0.00 running: test_subprocess (4 hour 53 min)
4:53:47 load avg: 0.00 running: test_subprocess (4 hour 53 min)
4:54:17 load avg: 0.09 running: test_subprocess (4 hour 54 min)
4:54:47 load avg: 0.35 running: test_subprocess (4 hour 54 min)
4:55:17 load avg: 0.48 running: test_subprocess (4 hour 55 min)
4:55:46 load avg: 0.50 [432/432/1] test_subprocess timed out (4 hour 55 min) (4 hour 55 min)

== Tests result: FAILURE ==

397 tests OK.
10 slowest tests:
- test_subprocess: 4 hour 55 min
- test_multiprocessing_spawn: 3 min 13 sec
- test_concurrent_futures: 2 min 46 sec
- test_peg_generator: 2 min 32 sec
- test_compileall: 1 min 34 sec
- test_unparse: 1 min 31 sec
- test_distutils: 1 min 23 sec
- test_asyncio: 1 min 22 sec
- test_tokenize: 1 min 8 sec
- test_io: 1 min 5 sec

1 test failed: test_subprocess

---------- components: Tests messages: 413028 nosy: vstinner priority: normal severity: normal status: open title: regrtest didn't respect the timeout on AMD64 Windows11 3.x versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Feb 10 21:59:51 2022
From: report at bugs.python.org (George Gensure)
Date: Fri, 11 Feb 2022 02:59:51 +0000
Subject: [New-bugs-announce] [issue46717] Raising exception multiple times leaks memory
Message-ID: <1644548391.27.0.391231331567.issue46717@roundup.psfhosted.org>

New submission from George Gensure :

Instantiating an exception and raising it multiple times causes 1 frame and 2 traceback objects to remain allocated for each raise. The attached example causes Python to consume 8 GB of RAM after a few seconds of execution on Windows/Linux.

---------- components: Interpreter Core files: exc.py messages: 413035 nosy: ggensure priority: normal severity: normal status: open title: Raising exception multiple times leaks memory type: resource usage versions: Python 3.11 Added file: https://bugs.python.org/file50619/exc.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 07:08:56 2022
From: report at bugs.python.org (=?utf-8?b?0JzQsNGA0Log0JrQvtGA0LXQvdCx0LXRgNCz?=)
Date: Fri, 11 Feb 2022 12:08:56 +0000
Subject: [New-bugs-announce] [issue46718] Feature: itertools: add batches
Message-ID: <1644581336.38.0.660450208507.issue46718@roundup.psfhosted.org>

New submission from Марк Коренберг
: I want a new function introduced in itertools. Something like this, but more optimal, and in C:

=======================
from itertools import chain, islice
from typing import Iterable, TypeVar

T = TypeVar('T')  # pylint: disable=invalid-name


def batches(items: Iterable[T], num: int) -> Iterable[Iterable[T]]:
    items = iter(items)
    while True:
        try:
            first_item = next(items)
        except StopIteration:
            break
        yield chain((first_item,), islice(items, 0, num - 1))
=======================

Splits big iterables into chunks of fixed size (except the last one). Similar to `group_by`, but spawns a new iterable group based on the group size. For example, when passing many records to a database, passing them one by one is obviously too slow. Passing all the records at once may increase latency. So, a good solution is to pass, say, 1000 records in one transaction. The same applies to REST API batches.

P.S. Yes, I saw the solution https://docs.python.org/3/library/itertools.html#itertools-recipes `def grouper`, but it is not optimal for big `n` values.

---------- components: Library (Lib) messages: 413061 nosy: socketpair priority: normal severity: normal status: open title: Feature: itertools: add batches type: enhancement versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 08:29:14 2022
From: report at bugs.python.org (David Castells-Rufas)
Date: Fri, 11 Feb 2022 13:29:14 +0000
Subject: [New-bugs-announce] [issue46719] Call not visited in ast.NodeTransformer
Message-ID: <1644586154.7.0.293922264327.issue46719@roundup.psfhosted.org>

New submission from David Castells-Rufas :

If I create a class derived from ast.NodeTransformer and implement visit_Call, then when it is run on the code below, the visit_Call function is only called once (for the print function, and not for ord). It looks like calls in function arguments are ignored.
def main():
    print(ord('A'))

On the other hand, on the following code it correctly visits both functions (print and ord):

def main():
    c = ord('A')
    print(c)

---------- components: Library (Lib) messages: 413069 nosy: davidcastells priority: normal severity: normal status: open title: Call not visited in ast.NodeTransformer type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 09:57:09 2022
From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=)
Date: Fri, 11 Feb 2022 14:57:09 +0000
Subject: [New-bugs-announce] [issue46720] Add support of path-like objects to multiprocessing.set_executable for Windows to match Unix-like systems
Message-ID: <1644591429.6.0.540789984747.issue46720@roundup.psfhosted.org>

New submission from Géry :

Any [path-like object](https://docs.python.org/3/glossary.html) can be passed to `multiprocessing.set_executable`, i.e. objects with `str`, `bytes`, or `os.PathLike` type. For instance these work (tested on macOS with all start methods: 'spawn', 'fork', and 'forkserver'):

- `multiprocessing.set_executable(sys.executable)` (`str`);
- `multiprocessing.set_executable(sys.executable.encode())` (`bytes`);
- `multiprocessing.set_executable(pathlib.Path(sys.executable))` (`os.PathLike`).

This is because the 'fork' start method does not exec any program in the subprocess, the 'spawn' start method converts its path argument to `bytes` with `os.fsencode` before passing it to [`_posixsubprocess.fork_exec`](https://github.com/python/cpython/blob/v3.10.2/Lib/multiprocessing/util.py#L452-L455), and the 'forkserver' start method spawns a server process (like with the 'spawn' start method) which then forks itself at each request (like the 'fork'
start method):

```
return _posixsubprocess.fork_exec(
    args, [os.fsencode(path)], True, passfds, None, None,
    -1, -1, -1, -1, -1, -1, errpipe_read, errpipe_write,
    False, False, None, None, None, -1, None)
```

Linux (and other Unix-like systems) uses the same code as macOS for the three start methods, so it should work there too. However I have not tested this on Windows, which uses the function [`_winapi.CreateProcess`](https://github.com/python/cpython/blob/v3.10.2/Lib/multiprocessing/popen_spawn_win32.py#L73-L75) for the 'spawn' start method (the only start method available on this OS), but I noticed that no conversion to `str` (not to `bytes` this time, since [the function expects `str`](https://github.com/python/cpython/blob/v3.10.2/Modules/_winapi.c#L1049)) of the path argument with `os.fsdecode` (not `os.fsencode` this time) is performed before passing it to the function:

```
hp, ht, pid, tid = _winapi.CreateProcess(
    python_exe, cmd, None, None, False, 0, env, None, None)
```

So on Windows only `str` paths can be passed to `multiprocessing.set_executable`. This PR fixes this to be on a par with Unix-like systems, which accept any path-like objects.

---------- components: Library (Lib) messages: 413073 nosy: maggyero priority: normal severity: normal status: open title: Add support of path-like objects to multiprocessing.set_executable for Windows to match Unix-like systems versions: Python 3.10, Python 3.11, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 10:15:23 2022
From: report at bugs.python.org (Serhiy Storchaka)
Date: Fri, 11 Feb 2022 15:15:23 +0000
Subject: [New-bugs-announce] [issue46721] Optimize set.issuperset() for non-set argument
Message-ID: <1644592523.91.0.460084742995.issue46721@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

If the argument of set.issuperset() is not a set, it is first converted to a set.
It is equivalent to the following code:

if not isinstance(other, (set, frozenset)):
    other = set(other)
# The following is equivalent to:
# return set.issubset(other, self)
for x in other:
    if x not in self:
        return False
return True

Two drawbacks of this algorithm:

1. It creates a new set, which takes O(len(other)) time and consumes O(len(set(other))) memory.
2. It needs to iterate other to the end, even if the result is known earlier.

The proposed PR streamlines the code. The C code is now larger, but it no longer needs additional memory, performs fewer operations and can stop earlier.

---------- components: Interpreter Core messages: 413075 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Optimize set.issuperset() for non-set argument type: performance versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 10:26:49 2022
From: report at bugs.python.org (Iliya Zinoviev)
Date: Fri, 11 Feb 2022 15:26:49 +0000
Subject: [New-bugs-announce] [issue46722] Different behavior for functools.partial between inspect.isfunction() and other inspect.is*function()
Message-ID: <1644593209.5.0.742878484415.issue46722@roundup.psfhosted.org>

New submission from Iliya Zinoviev :

1) isfunction() returns `True` for a partial object only when one passes its `func` attribute.
2) For instance, `isgeneratorfunction()` and `iscoroutinefunction()` work both when passing the partial object itself and when passing its `func` attribute, when the object is a partially applied generator function or a partially applied coroutine function respectively.

I propose to unify the behavior of handling partial objects across r'inspect.is*function()' in the following way:

1) Add `functools._unwrap_partial()` to `inspect.isfunction()`, as was done in the other r'inspect.is*function()'.

P.S. I'm ready to deal with this issue.
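The reported asymmetry can be reproduced with a short stdlib-only sketch, matching points 1 and 2 above:

```python
import inspect
from functools import partial

def gen(a, b):
    yield a + b

def plain(a, b):
    return a + b

g = partial(gen, 1)    # partially applied generator function
p = partial(plain, 1)  # partially applied plain function

# isgeneratorfunction() already unwraps partial objects (point 2)...
assert inspect.isgeneratorfunction(g)
assert inspect.isgeneratorfunction(g.func)

# ...but isfunction() only answers True for the wrapped func attribute (point 1)
assert not inspect.isfunction(p)
assert inspect.isfunction(p.func)
```

Under the proposal, `inspect.isfunction(p)` would also return True once partial objects are unwrapped there as well.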
Python 3.10.2 (main, Jan 15 2022, 19:56:27) [GCC 11.1.0]
Type 'copyright', 'credits' or 'license' for more information

Operating System: Manjaro Linux
KDE Plasma Version: 5.23.5
KDE Frameworks Version: 5.90.0
Qt Version: 5.15.2
Kernel Version: 5.4.176-1-MANJARO (64-bit)
Graphics Platform: X11
Processors: 4 × Intel® Core™ i5-6200U CPU @ 2.30GHz
Memory: 11.6 GiB of RAM
Graphics Processor: Mesa Intel® HD Graphics 520

---------- components: Library (Lib) files: isfuncs_behavior.py messages: 413077 nosy: IliyaZinoviev priority: normal severity: normal status: open title: Different behavior for functools.partial between inspect.isfunction() and other inspect.is*function() type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file50621/isfuncs_behavior.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 12:12:33 2022
From: report at bugs.python.org (Antony Cardazzi)
Date: Fri, 11 Feb 2022 17:12:33 +0000
Subject: [New-bugs-announce] [issue46723] SimpleQueue.put_nowait() documentation error
Message-ID: <1644599553.87.0.456089916352.issue46723@roundup.psfhosted.org>

New submission from Antony Cardazzi :

The SimpleQueue.put_nowait(item) documentation says it is equivalent to SimpleQueue.put(item), when it's actually equivalent to SimpleQueue.put(item, block=False).

---------- assignee: docs at python components: Documentation messages: 413087 nosy: antonycardazzi, docs at python priority: normal severity: normal status: open title: SimpleQueue.put_nowait() documentation error type: behavior

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 12:22:26 2022
From: report at bugs.python.org (Saul Shanabrook)
Date: Fri, 11 Feb 2022 17:22:26 +0000
Subject: [New-bugs-announce] [issue46724] Odd Bytecode Generation in 3.10
Message-ID: <1644600146.24.0.968453673832.issue46724@roundup.psfhosted.org>

New
submission from Saul Shanabrook :

I noticed that in Python 3.10, and also in main, a certain control flow construct produces some very odd bytecode (showing on main, but the same on the python 3.10 tags):

```
./python.exe -c 'import dis; dis.dis("while not (a < b < c): pass")'
  0           0 RESUME                   0
  1           2 LOAD_NAME                0 (a)
              4 LOAD_NAME                1 (b)
              6 SWAP                     2
              8 COPY                     2
             10 COMPARE_OP               0 (<)
             12 POP_JUMP_IF_FALSE       11 (to 22)
             14 LOAD_NAME                2 (c)
             16 COMPARE_OP               0 (<)
             18 POP_JUMP_IF_TRUE        28 (to 56)
             20 JUMP_FORWARD             1 (to 24)
        >>   22 POP_TOP
        >>   24 LOAD_NAME                0 (a)
             26 LOAD_NAME                1 (b)
             28 SWAP                     2
             30 COPY                     2
             32 COMPARE_OP               0 (<)
             34 POP_JUMP_IF_FALSE       23 (to 46)
             36 LOAD_NAME                2 (c)
             38 COMPARE_OP               0 (<)
             40 POP_JUMP_IF_FALSE       12 (to 24)
             42 LOAD_CONST               0 (None)
             44 RETURN_VALUE
        >>   46 POP_TOP
             48 EXTENDED_ARG           255
             50 EXTENDED_ARG         65535
             52 EXTENDED_ARG      16777215
             54 JUMP_FORWARD    4294967280 (to 8589934616)
        >>   56 LOAD_CONST               0 (None)
             58 RETURN_VALUE
```

The last JUMP_FORWARD has a rather large argument! This was the minimal example I could find to replicate this. However, this is an example of some runnable code that also encounters it:

```
a = b = c = 1
while not (a < b < c):
    if c == 1:
        c = 3
    else:
        b = 2
print(a, b, c)
```

This actually executes fine, but I notice that when it's executing it does execute that very large arg, but that the `oparg` to JUMP_FORWARD ends up being negative! By adding some tracing, I was able to see that the `oparg` variable in the `TARGET(JUMP_FORWARD)` case is `-32`. I am not sure if this is a bug or intended behavior. It does seem a bit odd to have this unnecessarily large argument that ends up turning into a negative jump! But the behavior seems fine. At the least, maybe `dis` should be modified so that it properly sees this as a negative jump and displays it properly? I am happy to submit a PR to modify `dis` to handle this case, but I also wanted to flag that maybe it's a bug to begin with.
---------- components: Interpreter Core messages: 413088 nosy: saulshanabrook priority: normal severity: normal status: open title: Odd Bytecode Generation in 3.10 versions: Python 3.10, Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 13:10:30 2022
From: report at bugs.python.org (Pablo Galindo Salgado)
Date: Fri, 11 Feb 2022 18:10:30 +0000
Subject: [New-bugs-announce] [issue46725] Unpacking without parentheses is allowed since 3.9
Message-ID: <1644603030.68.0.559962279778.issue46725@roundup.psfhosted.org>

New submission from Pablo Galindo Salgado :

Seems that this is allowed since the PEG parser rewrite:

for x in *a, *b:
    print(x)

but I cannot find anywhere where we discussed this. I am not sure if we should keep it or treat it as a bug and fix it.

---------- components: Parser messages: 413089 nosy: BTaskaya, gvanrossum, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Unpacking without parentheses is allowed since 3.9 versions: Python 3.10, Python 3.11, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 18:47:13 2022
From: report at bugs.python.org (Kevin Shweh)
Date: Fri, 11 Feb 2022 23:47:13 +0000
Subject: [New-bugs-announce] [issue46726] Thread spuriously marked dead after interrupting a join call
Message-ID: <1644623233.63.0.43795348291.issue46726@roundup.psfhosted.org>

New submission from Kevin Shweh :

This code in Thread._wait_for_tstate_lock:

try:
    if lock.acquire(block, timeout):
        lock.release()
        self._stop()
except:
    if lock.locked():
        # bpo-45274: lock.acquire() acquired the lock, but the function
        # was interrupted with an exception before reaching the
        # lock.release(). It can happen if a signal handler raises an
        # exception, like CTRL+C which raises KeyboardInterrupt.
        lock.release()
        self._stop()
    raise

has a bug.
The "if lock.locked()" check doesn't check whether this code managed to acquire the lock. It checks if *anyone at all* is holding the lock. The lock is almost always locked, so this code will perform a spurious call to self._stop() if it gets interrupted while trying to acquire the lock. Thread.join uses this method to wait for a thread to finish, so a thread will spuriously be marked dead if you interrupt a join call with Ctrl-C while it's trying to acquire the lock. Here's a reproducer:

import time
import threading

event = threading.Event()

def target():
    event.wait()
    print('thread done')

t = threading.Thread(target=target)
t.start()
print('joining now')
try:
    t.join()
except KeyboardInterrupt:
    pass
print(t.is_alive())
event.set()

Interrupt this code with Ctrl-C during the join(), and print(t.is_alive()) will print False.

---------- components: Library (Lib) messages: 413106 nosy: Kevin Shweh priority: normal severity: normal status: open title: Thread spuriously marked dead after interrupting a join call type: behavior versions: Python 3.10, Python 3.11, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Feb 11 20:43:24 2022
From: report at bugs.python.org (Jelle Zijlstra)
Date: Sat, 12 Feb 2022 01:43:24 +0000
Subject: [New-bugs-announce] [issue46727] Should shutil functions support bytes paths?
Message-ID: <1644630204.06.0.284247645765.issue46727@roundup.psfhosted.org>

New submission from Jelle Zijlstra :

The shutil documentation doesn't say anything about bytes paths, and the CPython unit tests don't test them. But some functions do work with bytes paths in practice, and on typeshed we've received some requests to add support for them in the type stubs.
Links:

- https://github.com/python/typeshed/pull/7165/files (shutil.unpack_archive works with bytes paths, but only sometimes)
- https://github.com/python/typeshed/pull/6868 (shutil.make_archive)
- https://github.com/python/typeshed/pull/6832 (shutil.move accepts bytes paths, except when moving into an existing directory)

My overall impression is that bytes paths sometimes work by accident because they happen not to hit any code paths where we do os.path.join or string concatenation, but relying on them is risky because minor changes in the call site or in the file system can cause the call to break. Here are three possible proposals:

(1) We document in the shutil docs that only str paths are officially supported. Bytes paths may sometimes work, but use them at your own risk.
(2) We add this documentation, but also make code changes to deprecate or even remove any support for bytes paths.
(3) We decide that bytes paths are officially supported, and we add tests for them and fix any cases where they don't work.

My preference is for (1). (2) feels like gratuitously breaking backward compatibility, and (3) is more work and there is little indication that bytes path support is a desired feature.

---------- components: Library (Lib) messages: 413113 nosy: AlexWaygood, Jelle Zijlstra, serhiy.storchaka priority: normal severity: normal status: open title: Should shutil functions support bytes paths? type: behavior versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 04:13:23 2022
From: report at bugs.python.org (DongGeon Lee)
Date: Sat, 12 Feb 2022 09:13:23 +0000
Subject: [New-bugs-announce] [issue46728] Docstring of combinations_with_replacement for consistency
Message-ID: <1644657203.94.0.089626219133.issue46728@roundup.psfhosted.org>

New submission from DongGeon Lee :

I've found that there is an unnecessary double quote. It has lost its matching pair, and it needs to be removed.
And I would like to suggest changing its output format in the docstring for consistency with similar kinds of other methods, if it was not intentional.

---------- components: Argument Clinic messages: 413120 nosy: LeeDongGeon1996, larry priority: normal severity: normal status: open title: Docstring of combinations_with_replacement for consistency versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 06:02:01 2022
From: report at bugs.python.org (Irit Katriel)
Date: Sat, 12 Feb 2022 11:02:01 +0000
Subject: [New-bugs-announce] [issue46729] Better str() for BaseExceptionGroup
Message-ID: <1644663721.5.0.89729285647.issue46729@roundup.psfhosted.org>

New submission from Irit Katriel :

The str() of exception groups currently contains just the msg as passed to the constructor. This turned out to be confusing (see https://github.com/python/cpython/pull/31270#issuecomment-1036418346). We should consider whether it is possible to design a more informative str(). Note that the str() is included in the standard traceback, which includes the line:

f"{type(e)}: {str(e)}"

So str() should not repeat the type, and should not clutter this too much. Probably just the msg plus the number of contained leaf exceptions. PEP 654 needs to be updated with what we do here, and the change needs to be approved by the SC.
---------- components: Interpreter Core keywords: 3.2regression messages: 413121 nosy: iritkatriel priority: normal severity: normal status: open title: Better str() for BaseExceptionGroup versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 06:15:48 2022
From: report at bugs.python.org (Neil Girdhar)
Date: Sat, 12 Feb 2022 11:15:48 +0000
Subject: [New-bugs-announce] [issue46730] Please consider mentioning property without setter when an attribute can't be set
Message-ID: <1644664548.7.0.814482728123.issue46730@roundup.psfhosted.org>

New submission from Neil Girdhar :

class C:
    @property
    def f(self) -> int:
        return 2

class D(C):
    pass

D().f = 2

Gives:

Traceback (most recent call last):
  File "/home/neil/src/cmm/a.py", line 10, in <module>
    D().f = 2
AttributeError: can't set attribute 'f'

This can be a pain to debug when the property is buried in a base class. Would it make sense to mention the reason why the attribute can't be set, namely that it's on a property without a setter?

---------- components: Interpreter Core messages: 413122 nosy: NeilGirdhar priority: normal severity: normal status: open title: Please consider mentioning property without setter when an attribute can't be set versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 11:48:33 2022
From: report at bugs.python.org (David CARLIER)
Date: Sat, 12 Feb 2022 16:48:33 +0000
Subject: [New-bugs-announce] [issue46731] posix._fcopyfile flags addition
Message-ID: <1644684513.75.0.553009105622.issue46731@roundup.psfhosted.org>

New submission from David CARLIER :

Exposing more flags for direct calls; shutil's fastcopy still uses only the COPYFILE_DATA flag.
---------- components: Library (Lib) messages: 413137 nosy: devnexen priority: normal pull_requests: 29459 severity: normal status: open title: posix._fcopyfile flags addition versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 14:10:01 2022
From: report at bugs.python.org (Jelle Zijlstra)
Date: Sat, 12 Feb 2022 19:10:01 +0000
Subject: [New-bugs-announce] [issue46732] object.__bool__ docstring is wrong
Message-ID: <1644693001.93.0.383788579819.issue46732@roundup.psfhosted.org>

New submission from Jelle Zijlstra :

>>> None.__bool__.__doc__
'self != 0'

This isn't true, since None does not equal 0. I suggest rewording it to "True if self else False".

---------- assignee: Jelle Zijlstra components: Interpreter Core messages: 413141 nosy: Jelle Zijlstra priority: normal severity: normal status: open title: object.__bool__ docstring is wrong type: behavior versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 14:35:13 2022
From: report at bugs.python.org (Barney Gale)
Date: Sat, 12 Feb 2022 19:35:13 +0000
Subject: [New-bugs-announce] [issue46733] pathlib.Path methods can raise NotImplementedError
Message-ID: <1644694513.05.0.985214969865.issue46733@roundup.psfhosted.org>

New submission from Barney Gale :

The docs for NotImplementedError say:

> In user defined base classes, abstract methods should raise this exception when they require derived classes to override the method, or while the class is being developed to indicate that the real implementation still needs to be added.

pathlib's use of NotImplementedError appears to be broader. It can be raised in the following circumstances:

1. When attempting to construct a WindowsPath on a non-Windows system, and vice-versa.
This is the only case where NotImplementedError is mentioned in the pathlib docs (in a REPL example).
2. In glob() and rglob() when an absolute path is supplied as a pattern
3. In owner() if the pwd module isn't available
4. In group() if the grp module isn't available
5. In readlink() if os.readlink() isn't available
6. In symlink_to() if os.symlink() isn't available
7. In hardlink_to() if os.link() isn't available
8. In WindowsPath.is_mount(), unconditionally

I suspect there are better choices for exception types in all these cases.

---------- components: Library (Lib) messages: 413142 nosy: barneygale priority: normal severity: normal status: open title: pathlib.Path methods can raise NotImplementedError type: behavior versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 17:21:50 2022
From: report at bugs.python.org (Stephen Gildea)
Date: Sat, 12 Feb 2022 22:21:50 +0000
Subject: [New-bugs-announce] [issue46734] Add Maildir.get_flags() to access message flags without opening the file
Message-ID: <1644704510.06.0.116254271482.issue46734@roundup.psfhosted.org>

New submission from Stephen Gildea :

A message's flags are stored in its filename by Maildir, so the flags are available without reading the message file itself. The structured message file name makes it efficient to scan a large mailbox to select only messages that are, for example, not Trashed. The mailbox.Maildir interface does not expose these flags, however. The only way to access the flags through the mailbox library is to create a mailbox.MaildirMessage object, which has a get_flags() method. But creating a MaildirMessage requires opening the message file, which is slow. I propose adding a parallel get_flags(key) method to mailbox.Maildir, so that the flags are available without having to create a MaildirMessage object.
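The proposal rests on the maildir naming convention, where flag characters follow the ':2,' info separator in the file name. A sketch of reading flags without opening the file; `flags_from_name` is a hypothetical helper, not part of mailbox:

```python
def flags_from_name(name):
    """Return the maildir flag characters encoded in a message file name.

    Per the maildir format, the info section after ':2,' holds the flags,
    e.g. '1234567890.M1P2.host:2,FS' carries 'FS' (Flagged, Seen).
    (flags_from_name is an illustrative helper, not a mailbox API.)"""
    if ':2,' in name:
        return name.rsplit(':2,', 1)[1]
    return ''  # no info section: the message has no flags yet
```

A Maildir.get_flags(key) built on this idea only has to map the key to a file name, which is why it avoids the cost of constructing a MaildirMessage.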
In iterating through a mailbox with thousands of messages, I find that this proposed Maildir.get_flags() method is 50 times faster than MaildirMessage.get_flags().

---------- components: Library (Lib) messages: 413145 nosy: gildea priority: normal severity: normal status: open title: Add Maildir.get_flags() to access message flags without opening the file type: enhancement versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Feb 12 20:33:53 2022
From: report at bugs.python.org (unmellow the gamer)
Date: Sun, 13 Feb 2022 01:33:53 +0000
Subject: [New-bugs-announce] [issue46735] gettext.translations crashes when locale is unset
Message-ID: <1644716033.71.0.382285385303.issue46735@roundup.psfhosted.org>

New submission from unmellow the gamer :

The issue listed below contains an example of this problem. I assume Python programs crashing when an environment variable is unset is unintended, and I thought after all this time I should probably bring it to your attention: https://github.com/k4yt3x/video2x/issues/349

---------- components: Parser messages: 413151 nosy: amazingminecrafter2015, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: gettext.translations crashes when locale is unset type: crash versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Feb 13 03:33:08 2022
From: report at bugs.python.org (Dominic Davis-Foster)
Date: Sun, 13 Feb 2022 08:33:08 +0000
Subject: [New-bugs-announce] [issue46736] Generate HTML 5 with SimpleHTTPRequestHandler.list_directory
Message-ID: <1644741188.03.0.745095490398.issue46736@roundup.psfhosted.org>

New submission from Dominic Davis-Foster :

Currently SimpleHTTPRequestHandler.list_directory (which is used with `python3 -m http.server` amongst other things) generates HTML with an HTML 4.01 doctype.
I propose making the generated page HTML 5 instead. The only necessary change is in the doctype; the rest of the page is valid already. HTML 5 has been supported by Chrome, Firefox, Safari and Opera since 2013, and Edge since 2015, so there shouldn't be any issues with browser compatibility. The generated page has been HTML 4.01 since https://bugs.python.org/issue13295 in 2011, where it was originally proposed to switch to HTML 5. Switching to HTML 5 would also allow http.server to be used to serve a simple index for pip that's compliant with PEP 503. There's some discussion in https://github.com/pypa/pip/issues/10825

---------- components: Library (Lib) messages: 413173 nosy: dom1310df priority: normal severity: normal status: open title: Generate HTML 5 with SimpleHTTPRequestHandler.list_directory type: enhancement versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Feb 13 06:27:05 2022
From: report at bugs.python.org (Raymond Hettinger)
Date: Sun, 13 Feb 2022 11:27:05 +0000
Subject: [New-bugs-announce] [issue46737] Default to the standard normal distribution
Message-ID: <1644751625.52.0.483632432415.issue46737@roundup.psfhosted.org>

New submission from Raymond Hettinger :

This is really minor, but it would be convenient if we provided default arguments:

random.gauss(mu=0.0, sigma=1.0)
random.normalvariate(mu=0.0, sigma=1.0)

---------- components: Library (Lib) messages: 413177 nosy: rhettinger priority: normal severity: normal status: open title: Default to the standard normal distribution type: enhancement versions: Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Feb 13 07:57:58 2022
From: report at bugs.python.org (Tzu-ping Chung)
Date: Sun, 13 Feb 2022 12:57:58 +0000
Subject: [New-bugs-announce] [issue46738] Allow http.server to emit HTML 5
Message-ID:
<1644757078.96.0.518529275585.issue46738@roundup.psfhosted.org>

New submission from Tzu-ping Chung :

Currently, a directory listing page emitted by http.server uses the HTML 4.01 doctype. While this is perfectly fine for most uses, the server tool is sometimes used for things that require another doctype; PEP 503[1], for example, requires an HTML 5 document. From what I can tell, http.server is already emitting a valid HTML 5 page, so it should be possible to simply change the doctype declaration. Or, if backward compatibility is paramount, this could live behind a --doctype flag as well. If we go the latter route, more doctypes (e.g. XHTML) could potentially be supported as well with minimal modification.

[1]: https://www.python.org/dev/peps/pep-0503/

---------- components: Library (Lib) messages: 413179 nosy: uranusjr priority: normal severity: normal status: open title: Allow http.server to emit HTML 5

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Feb 13 11:35:46 2022
From: report at bugs.python.org (Craig Coleman)
Date: Sun, 13 Feb 2022 16:35:46 +0000
Subject: [New-bugs-announce] [issue46739] dataclasses __eq__ isn't logical
Message-ID: <1644770146.93.0.142732009848.issue46739@roundup.psfhosted.org>

New submission from Craig Coleman :

In a test, the __eq__ function that dataclasses generate appears to be wrong.

@dataclass
class C:
    pass

class K:
    pass

a = C()
b = C()
c = K()
d = K()

(a is b)  # False
(a == b)  # True   # Incorrect, why?
(c is d)  # False
(c == d)  # False  # Correct

Using @dataclass(eq=False) for the annotation of C would make (a == b) == False, which I think is the correct behaviour.
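A self-contained reproduction of the comparisons above. The comments note why the generated __eq__ answers True for a field-less dataclass: it compares the tuples of field values, which are both empty, while eq=False leaves the identity-based object.__eq__ in place:

```python
from dataclasses import dataclass

@dataclass
class C:            # eq=True by default: __eq__ compares field-value tuples
    pass

@dataclass(eq=False)
class D:            # no generated __eq__: inherits identity-based object.__eq__
    pass

class K:            # plain class, also identity-based
    pass

assert C() is not C()   # distinct objects...
assert C() == C()       # ...yet equal: () == () between two C instances
assert D() != D()       # eq=False: back to identity comparison
assert K() != K()       # same for a plain class
```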
---------- components: Library (Lib) messages: 413188 nosy: ccoleman priority: normal severity: normal status: open title: dataclasses __eq__ isn't logical type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 13 15:17:40 2022 From: report at bugs.python.org (Martin Kirchgessner) Date: Sun, 13 Feb 2022 20:17:40 +0000 Subject: [New-bugs-announce] [issue46740] Improve Telnetlib's throughput Message-ID: <1644783460.72.0.175033327075.issue46740@roundup.psfhosted.org> New submission from Martin Kirchgessner : While using `telnetlib` I sometimes received unusually "large" messages (around 1Mb) from another process on the same machine, and was surprised `read_until` took more than a second. After instrumenting I discovered such messages were received at roughly 500kbyte/s. I think this low throughput comes from two implementation details: - `Telnet.fill_rawq` is calling `self.sock.recv(50)`, whereas 4096 is now recommended - the `Telnet.process_rawq` method is transferring from raw queue to cooked queue by appending byte per byte. For the latter, transferring by slices looks much faster (I'm measuring at least 5x). I'm preparing a PR. 
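The slice-based transfer can be sketched as follows; this is a hypothetical stand-in for the inner loop of Telnet.process_rawq, not the actual patch:

```python
# Hypothetical sketch: drain a raw Telnet byte queue by slices rather
# than byte-by-byte, copying whole runs up to the next IAC (0xFF) marker.
def drain_by_slices(raw: bytes, iac: int = 0xFF) -> bytes:
    cooked = bytearray()
    i = 0
    while i < len(raw):
        j = raw.find(iac, i)
        if j < 0:
            cooked += raw[i:]   # no more IACs: copy the rest in one slice
            break
        cooked += raw[i:j]      # copy everything before the IAC at once
        i = j + 2               # skip the IAC and its command byte
    return bytes(cooked)
```

Slicing moves the per-byte work into C-level bytes operations, which is where a speedup of the reported magnitude typically comes from.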
---------- components: Library (Lib) messages: 413195 nosy: martin_kirch priority: normal severity: normal status: open title: Improve Telnetlib's throughput type: resource usage versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 13 16:13:07 2022 From: report at bugs.python.org (Alex Waygood) Date: Sun, 13 Feb 2022 21:13:07 +0000 Subject: [New-bugs-announce] [issue46741] Docstring for asyncio.protocols.BufferedProtocol appears out of date Message-ID: <1644786787.46.0.446565917898.issue46741@roundup.psfhosted.org> New submission from Alex Waygood : The docstring for asyncio.protocols.BufferedProtocol includes this paragraph: """ Important: this has been added to asyncio in Python 3.7 *on a provisional basis*! Consider it as an experimental API that might be changed or removed in Python 3.8. """ The main branch is now 3.11, and the class has not yet been removed, so I'm guessing it's now safe to say that it's here to stay? ---------- assignee: docs at python components: Documentation, asyncio messages: 413196 nosy: AlexWaygood, asvetlov, docs at python, yselivanov priority: normal severity: normal status: open title: Docstring for asyncio.protocols.BufferedProtocol appears out of date type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 13 16:39:31 2022 From: report at bugs.python.org (Charles Howes) Date: Sun, 13 Feb 2022 21:39:31 +0000 Subject: [New-bugs-announce] [issue46742] Add '-d $fd' option to trace module, akin to bash -x feature Message-ID: <1644788371.77.0.167561149161.issue46742@roundup.psfhosted.org> New submission from Charles Howes : The 'trace' module logs trace output to stdout, intermingled with regular program output. 
This is a problem when you want to read either the trace output or the normal output of the program separately. To separate the trace output, it could be written to a file or to another file descriptor. A pull request has been created that fixes this by mimicking bash's behaviour: bash can be told to write trace output to a different file descriptor using the BASH_XTRACEFD shell variable: `exec 42> xtrace.out; BASH_XTRACEFD=42; ...` Usage of this new feature: python -m trace -t -d 111 your_program.py 111> /tmp/your_trace.txt or: t = Trace(count=1, trace=1, trace_fd=1, countfuncs=0, countcallers=0, ignoremods=(), ignoredirs=(), infile=None, outfile=None, timing=False) Notes: * `bash -x` sends trace logs to stderr by default; `python -m trace -t` sends them to stdout. I wanted to change Python to match, but was worried that this might break existing code. * Also considered writing trace logs to the file specified with the `-f FILE` option, but worried that it would mess up the count file if `-t` and `-c` were used together. 
---------- components: Library (Lib) messages: 413197 nosy: PenelopeFudd priority: normal severity: normal status: open title: Add '-d $fd' option to trace module, akin to bash -x feature type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 13 18:49:55 2022 From: report at bugs.python.org (Gobot1234) Date: Sun, 13 Feb 2022 23:49:55 +0000 Subject: [New-bugs-announce] [issue46743] Enable usage of object.__orig_class__ in __init__ Message-ID: <1644796195.73.0.52600427485.issue46743@roundup.psfhosted.org> New submission from Gobot1234 : When using `__call__` on a `typing/types.GenericAlias`, `__orig_class__` is set to the `GenericAlias` instance. However, the current mechanism does not allow the `__origin__` to access the `GenericAlias` from `__origin__.__init__`, as it performs something akin to: ```py def __call__(self, *args, **kwargs): object = self.__origin__(*args, **kwargs) object.__orig_class__ = self return object ``` I'd like to propose changing this to something like: ```py def __call__(self, *args, **kwargs): object = self.__origin__.__new__(self.__origin__, *args, **kwargs) object.__orig_class__ = self object.__init__(*args, **kwargs) return object ``` (Ideally `__orig_class__` should also be available in `__new__` but I'm not entirely sure if that's possible) AFAICT this was possible in the typing version back in 3.6 (https://github.com/python/typing/issues/658 and maybe https://github.com/python/typing/issues/519). Was there a reason this was removed?
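The current timing can be demonstrated directly (Box is a hypothetical class for illustration): __orig_class__ only appears on the instance after __init__ has returned, which is exactly the limitation described above.

```python
from typing import Generic, TypeVar, get_args

T = TypeVar('T')

class Box(Generic[T]):
    def __init__(self, value):
        self.value = value
        # At this point typing has not yet attached the alias:
        # __orig_class__ is only set after __init__ returns.
        assert not hasattr(self, '__orig_class__')

b = Box[int](3)
# Once construction is complete, the parametrized alias is available.
assert get_args(b.__orig_class__) == (int,)
```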
---------- components: Library (Lib) messages: 413198 nosy: Gobot1234, gvanrossum, kj priority: normal severity: normal status: open title: Enable usage of object.__orig_class__ in __init__ type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 13 18:50:52 2022 From: report at bugs.python.org (conio) Date: Sun, 13 Feb 2022 23:50:52 +0000 Subject: [New-bugs-announce] [issue46744] installers on ARM64 suggest wrong folders Message-ID: <1644796252.37.0.322200421464.issue46744@roundup.psfhosted.org> New submission from conio : Thank you for your work on bringing Python to Windows on ARM64. I recently installed it and noticed some strange behaviours. On ARM64 Windows 11 the recent prerelease (3.11.0a5, post #33125) acts in a way I believe is wrong: Checking the _install for all users_ checkbox causes the installer to suggest the `C:\Program Files (Arm)\Python311-Arm64` folder, but the `C:\Program Files (Arm)` is intended for ARM32 programs, similarly to how the `C:\Program Files (x86)` is intended for x86 programs. The folder for programs that are native for the platform is simply `C:\Program Files` - which is x86 on x86 Windows, x64 on x64 Windows and ARM64 on ARM64 Windows. So on ARM64 Windows the ARM64 Python should go into the native Program Files folder which is `C:\Program Files`. -- A closely related issue is that the installer for x64 Python wants to install into `C:\Program Files\Python311`, but I already installed the ARM64 version there. The x86 acts as should be expected and wants to install into `C:\Program Files (x86)\Python311-32`. But there's no "Program Files (x64)", so where should the x64 version on ARM64 machines go?
I argue that the x64 version should go into `C:\Program Files\Python311-amd64` while the ARM64 version should go into `C:\Program Files\Python311`, because the ARM64 is the native one on this platform, while the x64 is foreign, and should get an elaborate name, like the x86, which is also foreign, gets. (The dotnet team also had this problem, and they decided similarly.) Internally `sys.winver` and the PEP 514 Registry structure can say whatever you like, but on the filesystem it's much more appropriate for the unqualified folder to be the system native one. ---------- components: Installation, Windows messages: 413199 nosy: conio, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: installers on ARM64 suggest wrong folders type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 13 21:15:03 2022 From: report at bugs.python.org (=?utf-8?q?Robert-Andr=C3=A9_Mauchin?=) Date: Mon, 14 Feb 2022 02:15:03 +0000 Subject: [New-bugs-announce] [issue46745] Typo in new PositionsIterator Message-ID: <1644804903.84.0.228689662177.issue46745@roundup.psfhosted.org> New submission from Robert-André Mauchin : In Objects/codeobject.c, poisitions_iterator should read positions_iterator ---------- components: C API messages: 413209 nosy: eclipseo priority: normal pull_requests: 29479 severity: normal status: open title: Typo in new PositionsIterator versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 13 22:33:18 2022 From: report at bugs.python.org (Terry J. Reedy) Date: Mon, 14 Feb 2022 03:33:18 +0000 Subject: [New-bugs-announce] [issue46746] IDLE: Consistently handle non .py source files Message-ID: <1644809598.08.0.43493225596.issue46746@roundup.psfhosted.org> New submission from Terry J.
Reedy : Python will attempt to execute any file it can decode to unicode text as a startup script. It will only import .py files as a module. #45447 turned on syntax coloring for .pyi stub files. (.pyw files and files starting with "#!.*python" were already recognized as source (scripts).) It also added '.pyi' as a possible python extension in open and save dialogs. For this issue, fix some other modules, as appropriate, for non-.py files. Pathbrowser: Except for the files in sys.path, pathbrowser only shows .py files and directories including such. It should be easy to also list .pyw and .pyi files and directories. Perhaps a button could be added to list all files. Open Module: Opens a module when given a valid import name. So it cannot be used to open non-modules, which is to say, non .py files. .pyi files are condensed modules, not startup files, but opening them would require considerable change since the import machinery is currently used. We could add a message to the box saying, "To open a non-module (non .py) file, use File => Open." Modulebrowser: This was originally called Classbrowser as it only browsed top-level classes and their methods. It now browses all classes and def-ined functions and I renamed it to indicate the expanded scope. Since it only browses .py files, I did not know that I was theoretically narrowing the scope to exclude non-.py files. Currently, when editing a non-.py file and trying to open a module browser, a window is opened and nothing happens. This is the same as with a file with no classes or functions. Either browse or display an error message. The latter would include files with nothing to browse. Anything else?
---------- assignee: terry.reedy components: IDLE messages: 413210 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: Consistently handle non .py source files type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 02:09:36 2022 From: report at bugs.python.org (Stefan Pochmann) Date: Mon, 14 Feb 2022 07:09:36 +0000 Subject: [New-bugs-announce] [issue46747] bisect.bisect/insort don't document key parameter Message-ID: <1644822576.25.0.825142706397.issue46747@roundup.psfhosted.org> New submission from Stefan Pochmann : The signatures for the versions without "_right" suffix are missing the key parameter: bisect.bisect_right(a, x, lo=0, hi=len(a), *, key=None) bisect.bisect(a, x, lo=0, hi=len(a))? bisect.insort_right(a, x, lo=0, hi=len(a), *, key=None) bisect.insort(a, x, lo=0, hi=len(a))? https://docs.python.org/3/library/bisect.html#bisect.bisect_right https://docs.python.org/3/library/bisect.html#bisect.insort_right ---------- assignee: docs at python components: Documentation messages: 413213 nosy: Stefan Pochmann, docs at python priority: normal severity: normal status: open title: bisect.bisect/insort don't document key parameter versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 04:28:19 2022 From: report at bugs.python.org (Petr Viktorin) Date: Mon, 14 Feb 2022 09:28:19 +0000 Subject: [New-bugs-announce] [issue46748] Python.h includes stdbool.h Message-ID: <1644830899.62.0.590844926135.issue46748@roundup.psfhosted.org> New submission from Petr Viktorin : In main, cpython/pystate.h newly includes stdbool.h, providing a definition for `bool` that might be incompatible with other software. See here: https://github.com/cmusphinx/sphinxbase/pull/90 Eric, is this necessary? 
Would an old-school `int` do? Or should we say it's 2022 already and everyone needs to use stdbool.h for bools? ---------- messages: 413216 nosy: eric.snow, petr.viktorin priority: normal severity: normal status: open title: Python.h includes stdbool.h versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 09:23:11 2022 From: report at bugs.python.org (autoantwort) Date: Mon, 14 Feb 2022 14:23:11 +0000 Subject: [New-bugs-announce] [issue46749] Support cross compilation on macOS Message-ID: <1644848591.29.0.688015722934.issue46749@roundup.psfhosted.org> New submission from autoantwort : Currently you get the following output: ``` ? debug git:(main) ? ../configure --host=x86_64-apple-darwin --build=arm64-apple-darwin --with-build-python=./python3.11 checking for git... found checking build system type... aarch64-apple-darwin checking host system type... x86_64-apple-darwin checking for --with-build-python... ./python3.11 checking for Python interpreter freezing... ./python3.11 checking for python3.11... (cached) ./python3.11 checking Python for regen version... Python 3.11.0a5+ checking for x86_64-apple-darwin-pkg-config... no checking for pkg-config... /opt/homebrew/bin/pkg-config configure: WARNING: using cross tools not prefixed with host triplet checking pkg-config is at least version 0.9.0... yes checking for --enable-universalsdk... no checking for --with-universal-archs... no checking MACHDEP...
configure: error: cross build not supported for x86_64-apple-darwin ``` Is "needed" for https://github.com/microsoft/vcpkg/issues/22603 ---------- components: Build messages: 413224 nosy: autoantwort priority: normal severity: normal status: open title: Support cross compilation on macOS type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 10:55:15 2022 From: report at bugs.python.org (Shivaram Lingamneni) Date: Mon, 14 Feb 2022 15:55:15 +0000 Subject: [New-bugs-announce] [issue46750] some code paths in ssl and _socket still import idna unconditionally Message-ID: <1644854115.6.0.132786473021.issue46750@roundup.psfhosted.org> New submission from Shivaram Lingamneni : Importing the idna encoding has a significant time and memory cost. Therefore, the standard library tries to avoid importing it when it's not needed (i.e. when the domain name is already pure ASCII), e.g. in Lib/http/client.py and Modules/socketmodule.c with `idna_converter`. However, there are code paths that still attempt to encode or decode as idna unconditionally, in particular Lib/ssl.py and _socket.getaddrinfo. Here's a one-line test case: python3 -c "import sys, urllib.request; urllib.request.urlopen('https://www.google.com'); assert 'encodings.idna' not in sys.modules" These code paths can be converted using existing code to do the import conditionally (I'll send a PR). 
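The fast path in question can be sketched like this; encode_host is a hypothetical helper, not the stdlib's internal converter, but it shows why pure-ASCII hostnames never need the expensive encodings.idna import:

```python
def encode_host(host: str) -> bytes:
    """Encode a hostname, touching the idna codec only when needed."""
    try:
        # Pure-ASCII hostnames take this branch and never import
        # encodings.idna, avoiding its time and memory cost.
        return host.encode('ascii')
    except UnicodeEncodeError:
        # Non-ASCII hostnames trigger the (lazy) import of the idna codec.
        return host.encode('idna')

assert encode_host('www.google.com') == b'www.google.com'
assert encode_host('b\u00fccher.example') == b'xn--bcher-kva.example'
```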
---------- assignee: christian.heimes components: Interpreter Core, Library (Lib), SSL messages: 413229 nosy: christian.heimes, slingamn priority: normal severity: normal status: open title: some code paths in ssl and _socket still import idna unconditionally type: resource usage versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 13:42:01 2022 From: report at bugs.python.org (Mike Kaganski) Date: Mon, 14 Feb 2022 18:42:01 +0000 Subject: [New-bugs-announce] [issue46751] Windows-style path is not recognized under cygwin Message-ID: <1644864121.29.0.790596029405.issue46751@roundup.psfhosted.org> New submission from Mike Kaganski : Using cygwin 3.3.4-2, and python3: Python 3.9.10 (main, Jan 20 2022, 21:37:52) [GCC 11.2.0] on cygwin Trying this bash command line: > python3 C:/path/to/script.py results in this error: "python3: can't open file '/cygdrive/c/path/to/curdir/C:/path/to/script.py': [Errno 2] No such file or directory" OTOH, calling it like > python3 /cygdrive/c/path/to/script.py gives the expected output: "usage: script.py [-h] ..." It seems that python3 doesn't recognize "C:/path/to/script.py" to be a proper full path under cygwin, while most other cygwin apps handle those fine. E.g., > nano C:/path/to/script.py opens the script for editing without problems. The mentioned path syntax is useful and supported under cygwin, so it would be nice if python3 could support it, too. It is especially useful in a mixed development environment combining Windows native tools and cygwin ones; using such a path style makes it possible to use the same paths for both kinds of tools, simplifying scripts.
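For illustration, the mapping the reporter wants recognized is the one cygwin's own `cygpath -u` performs for simple drive-letter paths; a hypothetical converter (not part of Python or cygwin):

```python
import re

def to_cygwin_path(p: str) -> str:
    # Map 'C:/foo/bar' (or 'C:\\foo\\bar') to '/cygdrive/c/foo/bar'.
    m = re.match(r'^([A-Za-z]):[\\/](.*)$', p)
    if not m:
        return p  # already a POSIX-style path; leave it alone
    drive, rest = m.groups()
    return '/cygdrive/{}/{}'.format(drive.lower(), rest.replace('\\', '/'))

assert to_cygwin_path('C:/path/to/script.py') == '/cygdrive/c/path/to/script.py'
assert to_cygwin_path('/cygdrive/c/x') == '/cygdrive/c/x'
```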
---------- components: Windows messages: 413247 nosy: mikekaganski, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows-style path is not recognized under cygwin type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 15:44:29 2022 From: report at bugs.python.org (Guido van Rossum) Date: Mon, 14 Feb 2022 20:44:29 +0000 Subject: [New-bugs-announce] [issue46752] Introduce task groups to asyncio and change task cancellation semantics Message-ID: <1644871469.64.0.38816696304.issue46752@roundup.psfhosted.org> New submission from Guido van Rossum : After some conversations with Yury, and encouraged by the SC's approval of PEP 654, I am proposing to add a new class, asyncio.TaskGroup, which introduces structured concurrency similar to nurseries in Trio. I started with EdgeDb's TaskGroup implementation (https://github.com/edgedb/edgedb/blob/master/edb/common/taskgroup.py) and tweaked it only slightly. I also changed a few things in asyncio.Task (see below). The key change I made to EdgeDb's TaskGroup is that subtasks can keep spawning more subtasks while __aexit__ is running; __aexit__ exits once the last subtask is done. I made this change after consulting some Trio folks, who knew of real-world use cases for this behavior, and did not know of real-world code in need of prohibiting task creation as soon as __aexit__ starts running. I added some tests for the new behavior; none of the existing tests needed to be adjusted to accommodate this change. (For other changes relative to the EdgeDb's TaskGroup, see GH-31270.) In order to avoid the need to monkey-patch the parent task, I added two new methods to asyncio.Task, .cancelling() and .uncancel(), that manage a flag corresponding to __cancel_requested__ in EdgeDb's TaskGroup.
**This introduces a change in behavior around task cancellation:** * A task that catches CancelledError is allowed to run undisturbed (ignoring further .cancel() calls and allowing any number of await calls!) until it either exits or calls .uncancel(). This change in semantics did not cause any asyncio unittests to fail. However, it may be surprising (especially to Trio folks, where the semantics are pretty much the opposite, once a Trio task is cancelled all further await calls in that task fail unless explicitly shielded). For the TaskGroup tests to pass, we require a flag that is not cleared. However, it is probably not really required to ignore subsequent .cancel() calls until .uncancel() is called. This just seemed more consistent, and it is what @asvetlov proposed above and implemented in GH-31313 (using a property .__cancel_requested__ as the API). ---------- assignee: gvanrossum components: asyncio keywords: needs review messages: 413260 nosy: asvetlov, gvanrossum, iritkatriel, yselivanov priority: normal severity: normal stage: patch review status: open title: Introduce task groups to asyncio and change task cancellation semantics versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 17:40:07 2022 From: report at bugs.python.org (Eric Snow) Date: Mon, 14 Feb 2022 22:40:07 +0000 Subject: [New-bugs-announce] [issue46753] Statically allocate and initialize the empty tuple. Message-ID: <1644878407.77.0.113778164261.issue46753@roundup.psfhosted.org> New submission from Eric Snow : Currently it is created dynamically from the tuple freelist. ---------- assignee: eric.snow components: Interpreter Core messages: 413268 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Statically allocate and initialize the empty tuple. 
versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 14 20:19:11 2022 From: report at bugs.python.org (Guido van Rossum) Date: Tue, 15 Feb 2022 01:19:11 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue46754=5D_Improve_Python_La?= =?utf-8?q?nguage_Reference_based_on_=5BK=C3=B6hl_2020=5D?= Message-ID: <1644887951.55.0.170879911355.issue46754@roundup.psfhosted.org> New submission from Guido van Rossum : In https://arxiv.org/pdf/2109.03139.pdf ("M Köhl, An Executable Structural Operational Formal Semantics for Python", Master Thesis 2020, Saarland University) there are some observations on cases where the Language Reference (referred to as PLR) is ambiguous or incorrect. Somebody should go over the thesis, collect the issues, and then we can update the language reference. See also https://github.com/faster-cpython/ideas/issues/208#issuecomment-1039612432 ---------- messages: 413275 nosy: gvanrossum priority: normal severity: normal status: open title: Improve Python Language Reference based on [Köhl 2020] _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 03:36:55 2022 From: report at bugs.python.org (Erik Montnemery) Date: Tue, 15 Feb 2022 08:36:55 +0000 Subject: [New-bugs-announce] [issue46755] QueueHandler logs stack_info twice Message-ID: <1644914215.76.0.40425567068.issue46755@roundup.psfhosted.org> New submission from Erik Montnemery : logging.handlers.QueueHandler logs the stack twice when stack_info=True: >>> import logging >>> from logging.handlers import QueueHandler, QueueListener >>> from queue import Queue >>> q = Queue() >>> _LOGGER = logging.getLogger() >>> _LOGGER.addHandler(QueueHandler(q)) >>> listener = QueueListener(q, logging.StreamHandler()) >>> listener.start() >>> _LOGGER.error("Hello", stack_info=True) Hello Stack (most recent call last): File "<stdin>", line 1, in <module> Stack (most recent call
last): File "<stdin>", line 1, in <module> Reproduced on CPython 3.9.9, but the code is unchanged in 3.10 and 3.11, so the issue should exist there too. Patching QueueHandler.prepare() to set stack_info to None seems to fix this: diff --git a/Lib/logging/handlers.py b/Lib/logging/handlers.py index d42c48de5f..7cd5646d85 100644 --- a/Lib/logging/handlers.py +++ b/Lib/logging/handlers.py @@ -1452,6 +1452,7 @@ def prepare(self, record): record.args = None record.exc_info = None record.exc_text = None + record.stack_info = None return record def emit(self, record): Related issue: Issue34334, with patch https://github.com/python/cpython/pull/9537 ---------- components: Library (Lib) messages: 413278 nosy: erik.montnemery priority: normal severity: normal status: open title: QueueHandler logs stack_info twice type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 04:48:07 2022 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 15 Feb 2022 09:48:07 +0000 Subject: [New-bugs-announce] [issue46756] Incorrect Message-ID: <1644918487.72.0.244898503691.issue46756@roundup.psfhosted.org> New submission from Serhiy Storchaka : There is an error in determining a sub-URI in the urllib.request module. Because of this, a user authorized for example.org/foo also gets access to example.org/foobar.
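A correct sub-URI check has to respect path-segment boundaries; a sketch of the intended rule (not the actual urllib.request code):

```python
def is_suburi(base: str, test: str) -> bool:
    # Match only whole path segments: accept the base itself, or the
    # base followed by a '/' separator. '/foo' must cover '/foo/bar'
    # but never '/foobar'.
    if test == base:
        return True
    return test.startswith(base.rstrip('/') + '/')

assert is_suburi('http://example.org/foo', 'http://example.org/foo/bar')
assert not is_suburi('http://example.org/foo', 'http://example.org/foobar')
```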
---------- components: Library (Lib) messages: 413280 nosy: orsenthil, serhiy.storchaka priority: high severity: normal status: open title: Incorrect type: security versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 06:13:27 2022 From: report at bugs.python.org (Neil Girdhar) Date: Tue, 15 Feb 2022 11:13:27 +0000 Subject: [New-bugs-announce] [issue46757] dataclasses should define an empty __post_init__ Message-ID: <1644923607.6.0.495534870715.issue46757@roundup.psfhosted.org> New submission from Neil Girdhar : When defining a dataclass, it's possible to define a post-init (__post_init__) method to, for example, verify contracts. Sometimes, when you inherit from another dataclass, that dataclass has its own post-init method. If you want that method to also do its checks, you need to explicitly call it with super. However, if that method doesn't exist, calling it with super will crash. Since you don't know whether your superclasses implement post-init or not, you're forced to check if the superclass has one or not, and call it if it does. Essentially, post-init implements an "augmenting pattern" like __init__, __enter__, __exit__, __array_finalize__, etc. All such methods define an empty method at the top level so that child classes can safely call super. Please consider adding such an empty method to dataclasses so that children who implement __post_init__ can safely call super.
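The defensive pattern this proposal would make unnecessary looks like this (hypothetical classes; with an empty default on the base, the guarded call would become a plain super() call):

```python
from dataclasses import dataclass

@dataclass
class Base:
    x: int
    def __post_init__(self):
        if self.x < 0:
            raise ValueError('x must be non-negative')

@dataclass
class Child(Base):
    y: int
    def __post_init__(self):
        # Today's workaround: only chain up if a parent actually
        # defines __post_init__, since super() would otherwise crash.
        if hasattr(super(), '__post_init__'):
            super().__post_init__()
        if self.y < 0:
            raise ValueError('y must be non-negative')

c = Child(1, 2)
assert (c.x, c.y) == (1, 2)
```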
---------- components: Library (Lib) messages: 413283 nosy: NeilGirdhar priority: normal severity: normal status: open title: dataclasses should define an empty __post_init__ _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 07:01:56 2022 From: report at bugs.python.org (SIGSEG V) Date: Tue, 15 Feb 2022 12:01:56 +0000 Subject: [New-bugs-announce] [issue46758] Incorrect behaviour creating a Structure with ctypes.c_bool bitfields Message-ID: <1644926516.47.0.674432386115.issue46758@roundup.psfhosted.org> New submission from SIGSEG V : Setting/getting values in a Structure containing multiple c_bool bitfields like: _fields_ = [ ('one', c_bool, 1), ('two', c_bool, 1), ] results in an unexpected behavior. Setting any one of these fields to `True` results in ALL of these fields being set to `True` (i.e.: setting `struct.one` to `True` causes both `struct.one` as well as `struct.two` to be set to `True`). This also results in the binary representation of the struct being incorrect. The only possible outcomes for `bytes(struct)` are `b'\x00'` and `b'\x01'` Expected behavior should be that setting `struct.one` only sets the desired field. This is achievable when defining the same Structure with `c_byte` rather than `c_bool`. When defining the struct like: _fields_ = [ ('one', c_byte, 1), ('two', c_byte, 1), ] setting `struct.one` only affects `struct.one` and not `struct.two`. The binary representation of the structure is also correct. When setting `struct.two` to `True`, `bytes(struct)` returns `b'\x02'` (aka. 0b10). I've attached a Minimal Runnable Example that hopefully helps outline the issue.
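The c_byte behaviour described as correct can be checked directly (a small sketch; field names are illustrative):

```python
from ctypes import Structure, c_byte

class Flags(Structure):
    # Two 1-bit fields packed into a single byte; with c_byte each
    # field occupies its own bit, which is what c_bool fails to do.
    _fields_ = [('one', c_byte, 1), ('two', c_byte, 1)]

f = Flags()
f.two = 1
# Only bit 1 is set: the struct serializes to 0b10, and 'one' stays clear.
assert bytes(f) == b'\x02'
assert f.one == 0
```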
---------- components: Windows, ctypes files: mre.py messages: 413284 nosy: dudenwatschn, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Incorrect behaviour creating a Structure with ctypes.c_bool bitfields type: behavior versions: Python 3.10, Python 3.9 Added file: https://bugs.python.org/file50624/mre.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 07:10:41 2022 From: report at bugs.python.org (Colin Watson) Date: Tue, 15 Feb 2022 12:10:41 +0000 Subject: [New-bugs-announce] [issue46759] sys.excepthook documentation doesn't mention that it isn't called for SystemExit Message-ID: <1644927041.52.0.688076844331.issue46759@roundup.psfhosted.org> New submission from Colin Watson : In https://bugs.debian.org/1005803, Matthew Vernon reports that the library documentation for sys.excepthook doesn't mention the detail that sys.excepthook isn't called for uncaught SystemExit exceptions, although help(sys) does mention this. (He also mentions that help(sys.excepthook) doesn't mention this. I think this would make less sense, since that gets the docstring of a particular implementation of an excepthook - on a given system it might not be Python's built-in version, for instance. But adding information to the main library documentation seems reasonable.)
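The behaviour in question is easy to demonstrate in a subprocess: an uncaught SystemExit terminates the interpreter directly, without consulting sys.excepthook.

```python
import subprocess
import sys

# Install a custom excepthook, then raise SystemExit uncaught.
# The hook is never invoked; the interpreter just exits with code 3.
code = (
    "import sys\n"
    "sys.excepthook = lambda *args: print('hook called')\n"
    "raise SystemExit(3)\n"
)
proc = subprocess.run([sys.executable, '-c', code],
                      capture_output=True, text=True)
assert 'hook called' not in proc.stdout
assert proc.returncode == 3
```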
---------- assignee: docs at python components: Documentation messages: 413285 nosy: cjwatson, docs at python priority: normal severity: normal status: open title: sys.excepthook documentation doesn't mention that it isn't called for SystemExit versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 09:17:08 2022 From: report at bugs.python.org (Mark Shannon) Date: Tue, 15 Feb 2022 14:17:08 +0000 Subject: [New-bugs-announce] [issue46760] test_dis should test the dis module, not everything else Message-ID: <1644934628.87.0.437399260946.issue46760@roundup.psfhosted.org> New submission from Mark Shannon : This is getting really annoying. It takes longer to fix all the heavily coupled and poorly written tests in test_dis than to make the real changes. Tiny changes in the calling sequence, or reordering CFGs, cause huge diffs in the test_dis module. No one ever checks these changes, they are just noise. I've put this under "enhancement" as there is no "wastes a huge amount of time" category. The test_dis tests should not: Contain offsets; they turn one-line diffs into 100-line diffs Contain tests for the compiler; they belong elsewhere. Contain big strings; write proper tests not just regex matches. Tests for Instruction should not depend on the compiler output; create the bytecode directly. This is not a new problem, but it does seem to be getting progressively worse. A lot of the irritation stems from https://github.com/python/cpython/commit/b39fd0c9b8dc6683924205265ff43cc597d1dfb9 although the tests from before that still hardcode offsets.
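One way to write offset-free tests is to compare opname sequences from dis.get_instructions rather than matching full disassembly text; a sketch (not a proposal for specific test code):

```python
import dis

def f(x):
    return x + 1

def g(y):
    return y + 1

# Structurally identical functions produce the same opcode sequence,
# and the comparison never mentions bytecode offsets or line numbers,
# so reordering or renumbering does not break it.
assert ([i.opname for i in dis.get_instructions(f)]
        == [i.opname for i in dis.get_instructions(g)])
```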
---------- components: Tests messages: 413291 nosy: Mark.Shannon priority: normal severity: normal status: open title: test_dis should test the dis module, not everything else type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 13:06:14 2022 From: report at bugs.python.org (Larry Hastings) Date: Tue, 15 Feb 2022 18:06:14 +0000 Subject: [New-bugs-announce] [issue46761] functools.update_wrapper breaks the signature of functools.partial objects Message-ID: <1644948374.87.0.765462873315.issue46761@roundup.psfhosted.org> New submission from Larry Hastings : It's considered good hygiene to use functools.update_wrapper() to make your wrapped functions look like the original. However, when using functools.partial() to pre-supply arguments to a function, if you then call functools.update_wrapper() to update that partial object, inspect.signature() returns the *original* function's signature, not the *wrapped* function's signature. To be precise: if you wrap a function with functools.partial() to pre-provide arguments, then immediately call inspect.signature() on that partial object, it returns the correct signature with the pre-filled parameters removed. If you then call functools.update_wrapper() to update the partial from the original function, inspect.signature() now returns the *wrong* signature. I looked into it a little. The specific thing changing inspect.signature()'s behavior is the '__wrapped__' attribute added by functools.update_wrapper(). By default inspect.signature() will unwrap partial objects, but only if it has a '__wrapped__' attribute. This all looks pretty deliberate. And it seems like there was some thought given to this wrinkle; inspect.signature() takes a "follow_wrapper_chains" parameter the user can supply to control this behavior. But the default is True, meaning that by default it unwraps partial objects if they have a '__wrapped__'. 
I admit I don't have any context for this. Why do we want inspect.signature() to return the wrong signature by default? ---------- components: Library (Lib) files: update_wrapper.breaks.partial.signature.test.py messages: 413299 nosy: larry priority: normal severity: normal stage: test needed status: open title: functools.update_wrapper breaks the signature of functools.partial objects type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50625/update_wrapper.breaks.partial.signature.test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 17:19:52 2022 From: report at bugs.python.org (Ammar Askar) Date: Tue, 15 Feb 2022 22:19:52 +0000 Subject: [New-bugs-announce] [issue46762] assertion failure in f-string parsing Parser/string_parser.c Message-ID: <1644963592.59.0.718749450454.issue46762@roundup.psfhosted.org> New submission from Ammar Askar : Similar to https://bugs.python.org/issue46503 found by the ast.literal_eval fuzzer

```
>>> f'{<'
python: Parser/string_parser.c:346: fstring_compile_expr: Assertion `*expr_end == '}' || *expr_end == '!' || *expr_end == ':' || *expr_end == '='' failed.
[1] 14060 abort ./python
```

---------- assignee: eric.smith components: Parser messages: 413302 nosy: ammar2, eric.smith, gregory.p.smith, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: assertion failure in f-string parsing Parser/string_parser.c type: crash versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 20:45:13 2022 From: report at bugs.python.org (Nick Venenga) Date: Wed, 16 Feb 2022 01:45:13 +0000 Subject: [New-bugs-announce] [issue46763] os.path.samefile incorrect results for shadow copies Message-ID: <1644975913.58.0.577499491465.issue46763@roundup.psfhosted.org> New submission from Nick Venenga : shutil.copy fails to copy a file from a shadow copy back to its original file since os.path.samefile returns True. os.path.samefile doesn't reliably detect these files are different since it relies on ino, which is the same for both files:

>>> sc = pathlib.Path('//?/GLOBALROOT/Device/HarddiskVolumeShadowCopy3/test.file')
>>> o = pathlib.Path("V:/test.file")
>>> os.path.samefile(sc, o)
True
>>> os.stat(sc)
os.stat_result(st_mode=33206, st_ino=3458764513820579328, st_dev=1792739134, st_nlink=1, st_uid=0, st_gid=0, st_size=1, st_atime=1644973968, st_mtime=1644974052, st_ctime=1644973968)
>>> os.stat(o)
os.stat_result(st_mode=33206, st_ino=3458764513820579328, st_dev=1792739134, st_nlink=1, st_uid=0, st_gid=0, st_size=2, st_atime=1644973968, st_mtime=1644974300, st_ctime=1644973968)
>>> open(sc, "r").read()
'1'
>>> open(o, "r").read()
'12'

In the above example, you can see the shadow copy file and the original file.
Their mode and ino are the same, but their modified time and contents are different. ---------- components: Windows messages: 413307 nosy: nijave, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.path.samefile incorrect results for shadow copies type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 20:56:29 2022 From: report at bugs.python.org (Michael J. Sullivan) Date: Wed, 16 Feb 2022 01:56:29 +0000 Subject: [New-bugs-announce] [issue46764] Wrapping a bound method with a @classmethod no longer works Message-ID: <1644976589.23.0.130675982792.issue46764@roundup.psfhosted.org> New submission from Michael J. Sullivan :

class A:
    def foo(self, cls):
        return 1

class B:
    pass

class B:
    bar = classmethod(A().foo)

B.bar()

In Python 3.8 and prior, this worked. Since Python 3.9, it produces "TypeError: A.foo() missing 1 required positional argument: 'cls'". I tracked it down, and the issue was introduced by https://github.com/python/cpython/pull/8405/files, which makes classmethod's tp_descr_get invoke its argument's tp_descr_get when present instead of calling PyMethod_New. That this was a semantics change that could break existing code may have been missed (though it is a fairly obscure such change). The reason it breaks this case in particular of bound methods, though, is that bound methods have a tp_descr_get that does nothing (increfs the method and then returns it). Dropping that tp_descr_get fixes this issue and doesn't introduce any test failures. Not sure if there is some potential downstream breakage of that?
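That "does nothing" tp_descr_get is observable from Python (a sketch illustrating the mechanism described above, not the proposed fix):

```python
class A:
    def foo(self, cls):
        return 1

bound = A().foo   # a bound method object

# A bound method's __get__ ignores its arguments and returns the method
# itself, so when classmethod chains to it, nothing ever binds `cls`.
assert bound.__get__(None, object) is bound
assert bound.__get__("anything", str) is bound
```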
(This issue was originally reported to me by Jared Hance) ---------- components: Interpreter Core messages: 413310 nosy: msullivan priority: normal severity: normal status: open title: Wrapping a bound method with a @classmethod no longer works versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 22:16:42 2022 From: report at bugs.python.org (Eric Snow) Date: Wed, 16 Feb 2022 03:16:42 +0000 Subject: [New-bugs-announce] [issue46765] Replace Locally Cached Strings with Statically Initialized Objects Message-ID: <1644981402.43.0.406024734296.issue46765@roundup.psfhosted.org> New submission from Eric Snow : This removes a number of static variables and is a little more efficient. ---------- assignee: eric.snow components: Interpreter Core messages: 413313 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Replace Locally Cached Strings with Statically Initialized Objects versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 15 23:56:32 2022 From: report at bugs.python.org (Isaac Johnson) Date: Wed, 16 Feb 2022 04:56:32 +0000 Subject: [New-bugs-announce] [issue46766] Add a class for file operations so a syntax such as open("file.img", File.Write | File.Binary | File.Disk) is possible. Message-ID: <1644987392.44.0.618461662976.issue46766@roundup.psfhosted.org> New submission from Isaac Johnson : I think it would be great for something like this to be with the IO module. It will improve code readability. ---------- components: Library (Lib) messages: 413315 nosy: isaacsjohnson22 priority: normal severity: normal status: open title: Add a class for file operations so a syntax such as open("file.img", File.Write | File.Binary | File.Disk) is possible. 
versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 06:16:39 2022 From: report at bugs.python.org (Pierre Thierry) Date: Wed, 16 Feb 2022 11:16:39 +0000 Subject: [New-bugs-announce] [issue46767] [Doc] sqlite3 Cursor.execute() return value is unspecified Message-ID: <1645010199.07.0.291024534189.issue46767@roundup.psfhosted.org> New submission from Pierre Thierry : In the documentation of the sqlite3 module, the return value for Connection.execute() is told to be the Cursor that was implicitly created, but nothing is said about the return value/type when using Cursor.execute(). ---------- components: Library (Lib) messages: 413327 nosy: kephas priority: normal severity: normal status: open title: [Doc] sqlite3 Cursor.execute() return value is unspecified type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 07:57:14 2022 From: report at bugs.python.org (zjmxq) Date: Wed, 16 Feb 2022 12:57:14 +0000 Subject: [New-bugs-announce] [issue46768] CVE-201-4160 Vulnerability Is Found in Lib/site-packages/cryptography/hazmat/bindings/_openssl.pyd for Cryptography Version 3.3.2 Message-ID: <1645016234.03.0.578869822824.issue46768@roundup.psfhosted.org> Change by zjmxq : ---------- components: Library (Lib) nosy: zjmxq priority: normal severity: normal status: open title: CVE-201-4160 Vulnerability Is Found in Lib/site-packages/cryptography/hazmat/bindings/_openssl.pyd for Cryptography Version 3.3.2 type: security versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 12:01:18 2022 From: report at bugs.python.org (Alex Waygood) Date: Wed, 16 Feb 2022 17:01:18 +0000 Subject: [New-bugs-announce] 
[issue46769] Improve documentation for `typing.TypeVar` Message-ID: <1645030878.3.0.822530505617.issue46769@roundup.psfhosted.org> New submission from Alex Waygood : There are three variants of `TypeVar`s: (1) TypeVars that are neither constrained nor bound: `T = TypeVar("T")` (2) TypeVars that are bound: `U = TypeVar("U", bound=str)` (3) TypeVars that are constrained: `V = TypeVar("V", str, bytes)` The third variant is important for annotating certain functions, such as those in the `re` module. However, it has a number of issues (see https://github.com/python/typing/discussions/1080 for further discussion): (1) It has somewhat surprising semantics in many situations. (2) It is difficult for type checkers to deal with, leading to a number of bugs in mypy, for example. (3) Many users (especially people relatively inexperienced with Python typing) reach for constrained TypeVars in situations where using bound TypeVars or the @overload decorator would be more appropriate. Both PEP 484 and the documentation for the typing module, however: (1) Give examples for variants (1) and (3), but not for variant (2), which is treated as something of an afterthought. (2) Do not mention that TypeVars can be bound to a union of types, which is an important point: `T = TypeVar("T", str, bytes)` has different semantics to `T = TypeVar("T", bound=str|bytes)`, and often the latter is more appropriate. 
---------- assignee: docs at python components: Documentation messages: 413342 nosy: AlexWaygood, Jelle Zijlstra, docs at python, gvanrossum, kj, sobolevn priority: normal severity: normal stage: needs patch status: open title: Improve documentation for `typing.TypeVar` type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 12:17:23 2022 From: report at bugs.python.org (Mark Lonnemann) Date: Wed, 16 Feb 2022 17:17:23 +0000 Subject: [New-bugs-announce] [issue46770] ConfigParser(dict_type=) not behaving as expected Message-ID: <1645031843.76.0.82823546709.issue46770@roundup.psfhosted.org> New submission from Mark Lonnemann : ConfigParser() is not using a custom dictionary class correctly, according to my understanding. I have duplicate options in a config file that I want to rename uniquely. The following code does not work.

x = 0

class MultiDict(dict):
    def __setitem__(self, key, value):
        if key == 'textsize':
            global x
            key += str(x)
            x += 1
        dict.__setitem__(self, key, value)

...

config1 = cp.ConfigParser(dict_type=MultiDict)
config1.read('ini_file.ini')

"textsize" is the option named twice in my config file. When I run the code, I get a DuplicateOptionError for "textsize". No one seems to know how to solve this, so it could be a bug. If it's sloppy coding, I apologize.
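A likely explanation, with a workaround sketch: with the default strict=True, ConfigParser tracks duplicates itself and raises DuplicateOptionError before the custom dict's __setitem__ ever sees the second "textsize", so the rename never gets a chance to run. Passing strict=False (assuming relaxed parsing is acceptable) appears to let the MultiDict approach work:

```python
import configparser

class MultiDict(dict):
    _count = 0  # class-level counter instead of the report's global

    def __setitem__(self, key, value):
        if key == 'textsize':
            key += str(MultiDict._count)
            MultiDict._count += 1
        super().__setitem__(key, value)

# strict=False disables ConfigParser's own duplicate check, so both
# occurrences reach the custom dict and get renamed uniquely.
config = configparser.ConfigParser(dict_type=MultiDict, strict=False)
config.read_string("[sizes]\ntextsize = 10\ntextsize = 20\n")

assert config['sizes']['textsize0'] == '10'
assert config['sizes']['textsize1'] == '20'
```

The [sizes] section and values here are made up for illustration; the reporter's ini_file.ini is not shown in the message.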
---------- components: Extension Modules messages: 413343 nosy: malonn priority: normal severity: normal status: open title: ConfigParser(dict_type=) not behaving as expected type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 12:59:12 2022 From: report at bugs.python.org (Guido van Rossum) Date: Wed, 16 Feb 2022 17:59:12 +0000 Subject: [New-bugs-announce] [issue46771] Add some form of cancel scopes Message-ID: <1645034352.96.0.773044052656.issue46771@roundup.psfhosted.org> New submission from Guido van Rossum : Now that TaskGroup is merged (see bpo-46752) we might consider adding some form of cancel scopes (another Trio idea). There's a sensible implementation we could use as a starting point in @asvetlov's async-timeout package (https://github.com/aio-libs/async-timeout). ---------- components: asyncio messages: 413345 nosy: asvetlov, gvanrossum, iritkatriel, njs, yselivanov priority: normal severity: normal stage: needs patch status: open title: Add some form of cancel scopes type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 13:32:25 2022 From: report at bugs.python.org (Eric Snow) Date: Wed, 16 Feb 2022 18:32:25 +0000 Subject: [New-bugs-announce] [issue46772] Statically Initialize PyArg_Parser in clinic.py Message-ID: <1645036345.9.0.392519998688.issue46772@roundup.psfhosted.org> New submission from Eric Snow : The code generated by clinic.py is already partially statically initialized. Currently we init the other fields in Python/getargs.c:parser_init(), which runs the first time we try to use each parser. AFAICS, that remaining init could be done statically using the data we have available in clinic.py during code generation.
My primary interest is in static init of PyArg_Parser.kwtuple, which is a tuple containing only strings. ---------- assignee: eric.snow components: Interpreter Core messages: 413351 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Statically Initialize PyArg_Parser in clinic.py versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 13:34:50 2022 From: report at bugs.python.org (Eric Snow) Date: Wed, 16 Feb 2022 18:34:50 +0000 Subject: [New-bugs-announce] [issue46773] Add a Private API for Looking Up Global Objects Message-ID: <1645036490.54.0.531825126178.issue46773@roundup.psfhosted.org> New submission from Eric Snow : We need this to statically initialize PyArg_Parser.kwtuple. (See bpo-46772.) For now this will be a "private" API (leading underscore). Ultimately, we'll want a Public API, so we can eventually stop exposing *any* objects as symbols in the C-API. However, that will need a PEP. ---------- assignee: eric.snow components: Interpreter Core messages: 413352 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Add a Private API for Looking Up Global Objects versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 20:41:35 2022 From: report at bugs.python.org (Kevin Kirsche) Date: Thu, 17 Feb 2022 01:41:35 +0000 Subject: [New-bugs-announce] [issue46774] Importlib.metadata.version picks first distribution not latest Message-ID: <1645062095.18.0.489026763345.issue46774@roundup.psfhosted.org> New submission from Kevin Kirsche : When using importlib.metadata.version with tools such as poetry which may install the current package one or more times, importlib.metadata.version is not deterministic in returning the latest version of the package, instead returning the first one located. 
As it's unclear if this behavior is desired by importlib, I'm creating this issue to determine if this is intentional behavior or a bug. I have opened the following poetry issue: * https://github.com/python-poetry/poetry/issues/5204 I have also created the following reproduction repository for the installation issue: https://github.com/kkirsche/poetry-remove-untracked When the after is modified to return the version, it returns the first one found (e.g. if you go 3.0.0 -> 3.0.1 -> 3.0.2, each would be installed and the library would return 3.0.0 to the caller) Thank you for your time and consideration. I apologize if this is not something that requires action by the Python team. I'd be open to trying to submit a PR, but want to verify whether this is intentional or not. ---------- components: Library (Lib) messages: 413375 nosy: kkirsche2 priority: normal severity: normal status: open title: Importlib.metadata.version picks first distribution not latest type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 16 22:54:21 2022 From: report at bugs.python.org (Eryk Sun) Date: Thu, 17 Feb 2022 03:54:21 +0000 Subject: [New-bugs-announce] [issue46775] [Windows] OSError should unconditionally call winerror_to_errno Message-ID: <1645070061.53.0.530684552502.issue46775@roundup.psfhosted.org> New submission from Eryk Sun : bpo-37705 overlooked fixing the OSError constructor. oserror_parse_args() in Objects/exceptions.c should unconditionally call winerror_to_errno(), which is defined in PC/errmap.h. winerror_to_errno() maps the Winsock range 10000-11999 directly, except for the 6 errors in this range that are based on C errno values: WSAEINTR, WSAEBADF, WSAEACCES, WSAEFAULT, WSAEINVAL, and WSAEMFILE. Otherwise, Windows error codes that aren't mapped explicitly get mapped by default to EINVAL. 
---------- components: Interpreter Core, Windows keywords: easy (C) messages: 413383 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: [Windows] OSError should unconditionally call winerror_to_errno type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 07:48:24 2022 From: report at bugs.python.org (chen-y0y0) Date: Thu, 17 Feb 2022 12:48:24 +0000 Subject: [New-bugs-announce] [issue46776] RecursionError when using property() inside classes Message-ID: <1645102104.25.0.972023321131.issue46776@roundup.psfhosted.org> New submission from chen-y0y0 : A simple class definition:

class Foo:
    bar = property(lambda self: self.bar)

Getting the value of Foo.bar returns correctly, <property object at 0x...>. Getting the value of Foo().bar raises RecursionError:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <lambda>
  File "<stdin>", line 1, in <lambda>
  File "<stdin>", line 1, in <lambda>
  [Previous line repeated 996 more times]
RecursionError: maximum recursion depth exceeded

---------- components: Interpreter Core messages: 413403 nosy: prasechen priority: normal severity: normal status: open title: RecursionError when using property() inside classes type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 07:54:16 2022 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 17 Feb 2022 12:54:16 +0000 Subject: [New-bugs-announce] [issue46777] Fix incorrect use of directives in asyncio documentation Message-ID: <1645102456.61.0.455925245808.issue46777@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are some issues with formatting added or removed parameters in the asyncio module. 1.
"deprecated-removed" directives were used for already removed features. It should be used for deprecated features with a known term of removal. For removed features "versionchanged" is more appropriate. 2. Text for removed parameters was too verbose. "Removed the XXX parameter" would be enough. 3. "versionadded" directives were used for new parameters. The "versionchanged" directive is more appropriate. It is the date of a change in an existing function, not the date of adding the function itself. 4. Some directives were written not in order of increasing version number. 5. In some places parameters were marked up as ``name``. *name* is commonly used for parameters. ---------- assignee: docs at python components: Documentation, asyncio messages: 413404 nosy: asvetlov, docs at python, kj, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Fix incorrect use of directives in asyncio documentation type: enhancement versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 11:24:10 2022 From: report at bugs.python.org (Jeremy Kloth) Date: Thu, 17 Feb 2022 16:24:10 +0000 Subject: [New-bugs-announce] [issue46778] Enable parallel compilation on Windows builds Message-ID: <1645115050.56.0.993068912752.issue46778@roundup.psfhosted.org> New submission from Jeremy Kloth : While the current build does enable building of projects in parallel (msbuild -m), the compilation of each project's source files is done sequentially. For large projects like pythoncore or _freeze_module this can take quite some time. This simple PR speeds things up significantly, ~2x on machines that I have access to.
---------- components: Build, Windows messages: 413412 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Enable parallel compilation on Windows builds versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 11:53:40 2022 From: report at bugs.python.org (Adrian Freund) Date: Thu, 17 Feb 2022 16:53:40 +0000 Subject: [New-bugs-announce] [issue46779] Add ssl.CERT_REQUIRED_NO_VERIFY as possible value for ssl.SSLContext.verify_mode Message-ID: <1645116820.13.0.214856830596.issue46779@roundup.psfhosted.org> New submission from Adrian Freund : Some networked applications might require connecting to clients with invalid certificates while still requiring the client to send a certificate. ssl.SSLContext.verify_mode currently supports the following options:

ssl.CERT_NONE: Don't require the client to send a certificate and don't validate it if they send one anyway.
ssl.CERT_OPTIONAL: Don't require the client to send a certificate but validate it if they send one.
ssl.CERT_REQUIRED: Require the client to send a certificate and validate it.

There is currently no option for servers that want to require the client to send a certificate but don't validate it. This would for example be needed if a server should accept clients with self-signed certificates and then store their certificates to recognize them again later. A concrete example is the KDEConnect protocol. An alternative solution would be bpo-31242. That would also solve this problem in a more general, but also more complicated, way. I think that the solution proposed in this issue is better for its simplicity and also solves most use cases for bpo-31242. Note that an ssl.CERT_REQUIRED_NO_VERIFY was already proposed in bpo-18293, but that issue was closed because it was specifically in relation to a deprecated API.
The mentioned values are however also used in modern asyncio apis. ---------- assignee: christian.heimes components: SSL messages: 413416 nosy: christian.heimes, freundTech priority: normal severity: normal status: open title: Add ssl.CERT_REQUIRED_NO_VERIFY as possible value for ssl.SSLContext.verify_mode type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 12:27:34 2022 From: report at bugs.python.org (Lee Newberg) Date: Thu, 17 Feb 2022 17:27:34 +0000 Subject: [New-bugs-announce] [issue46780] Allow Fractions to return 1/6 for "0.17", "0.167", "0.1667", etc. Message-ID: <1645118854.48.0.835738770183.issue46780@roundup.psfhosted.org> New submission from Lee Newberg : For example, a string such as "0.167" could be rounded from anything in [0.1665, 0.1675). Within that interval, the fraction with the lowest numerator and denominator is 1/6. Here it is proposed that we add a new flag to the Fractions constructor, perhaps called `_assume_rounded`, which defaults to False and then yields no change from current behavior. However, when it is True, the constructed Fraction first computes the range of the values that the input string could have been rounded from, and then computes the fraction in that half-open interval with the lowest numerator and denominator. This is described at https://en.wikipedia.org/wiki/Continued_fraction#Best_rational_within_an_interval, which uses continued fractions to arrive at the answer. For extra bells and whistles, we'd support strings like "0x0.2AAB" which is hexadecimal for 1/6 rounded to that many places. In this case, we'd find 1/6 as the fraction with lowest numerator and denominator in the interval [0x0.2AAA8, 0x0.2AAB8). Likewise for binary, octal, and any other formats supported by Python. 
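The best-rational-within-an-interval computation referenced above can be sketched with continued fractions. simplest_in is a hypothetical helper, and a closed interval is used for simplicity; the proposal's half-open interval needs only a boundary tweak:

```python
import math
from fractions import Fraction

def simplest_in(lo: Fraction, hi: Fraction) -> Fraction:
    """Fraction with the smallest denominator in [lo, hi], for 0 <= lo <= hi."""
    ceil_lo = math.ceil(lo)
    if ceil_lo <= hi:            # an integer lies inside the interval
        return Fraction(ceil_lo)
    whole = math.floor(lo)       # equal to floor(hi) at this point
    # Recurse on the reciprocals of the fractional parts -- this walks the
    # continued-fraction expansions of the two endpoints in lockstep.
    return whole + 1 / simplest_in(1 / (hi - whole), 1 / (lo - whole))

# "0.167" could round from anything in [0.1665, 0.1675]; the simplest
# candidate in that interval is 1/6.
assert simplest_in(Fraction("0.1665"), Fraction("0.1675")) == Fraction(1, 6)
```

The same call with the wider intervals for "0.17" and "0.1667" also yields 1/6, which is the behavior the proposed `_assume_rounded` flag would expose.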
---------- components: Library (Lib) messages: 413418 nosy: Leengit priority: normal severity: normal status: open title: Allow Fractions to return 1/6 for "0.17", "0.167", "0.1667", etc. type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 13:08:22 2022 From: report at bugs.python.org (Matthias Urlichs) Date: Thu, 17 Feb 2022 18:08:22 +0000 Subject: [New-bugs-announce] [issue46781] Tracing: c_return doesn't report the result Message-ID: <1645121302.3.0.866413840913.issue46781@roundup.psfhosted.org> New submission from Matthias Urlichs : When tracing/profiling, the "return" event reports the value returned by the exiting function. However, this does not work for C functions. The profiler's "c_return" hook is called with the same C function object as "c_call". This unnecessarily complicates debugging and should be fixed. https://stackoverflow.com/questions/61067303/get-return-value-of-python-builtin-functions-while-tracing ---------- components: C API messages: 413421 nosy: smurfix priority: normal severity: normal status: open title: Tracing: c_return doesn't report the result type: enhancement versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 13:12:21 2022 From: report at bugs.python.org (sjndlnv brjkbn) Date: Thu, 17 Feb 2022 18:12:21 +0000 Subject: [New-bugs-announce] [issue46782] Docs error for 3.10 Message-ID: <1645121541.07.0.568386298101.issue46782@roundup.psfhosted.org> New submission from sjndlnv brjkbn : The documentation for 3.10 seems to auto-convert 0o777 to 511, while it's correct for 3.9. (Maybe due to a new version of Sphinx? The source code for the docs seems correct.)
[img]https://i.imgur.com/ByWSJ6A.png[/img] [img]https://i.imgur.com/rK0romC.png[/img] [img]https://i.imgur.com/WXYMcrT.png[/img] [img]https://i.imgur.com/W5YskgQ.png[/img] It may mislead users: writing os.mkdir(mode=511) does not read the same as writing os.mkdir(mode=0o777). ---------- assignee: docs at python components: Documentation messages: 413425 nosy: docs at python, usetohandletrush priority: normal severity: normal status: open title: Docs error for 3.10 type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 13:37:45 2022 From: report at bugs.python.org (Hossein) Date: Thu, 17 Feb 2022 18:37:45 +0000 Subject: [New-bugs-announce] [issue46783] Add a new feature to enumerate(iterable, start=0) built-in function Message-ID: <1645123065.19.0.230809388528.issue46783@roundup.psfhosted.org> New submission from Hossein : Hi everyone. I have an idea, which is to add a new feature to the enumerate(iterable, start=0) built-in function. I mean, "start" is ascending by default; we could add a feature to this function to make it count in descending order. For example: enumerate(iterable, start=100, reverse=True) reverse: If True, the index descends from start: (100, iterable[0]), (99, iterable[1]), and so on.
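For what it's worth, the proposed behavior can already be expressed with itertools.count; enumerate_desc is a hypothetical helper name, not an existing builtin:

```python
from itertools import count

def enumerate_desc(iterable, start=0):
    """Like enumerate(), but the index counts down from `start`."""
    return zip(count(start, -1), iterable)

assert list(enumerate_desc("abc", start=100)) == [(100, 'a'), (99, 'b'), (98, 'c')]
```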
---------- assignee: docs at python components: Argument Clinic, Build, Demos and Tools, Documentation, Interpreter Core, Parser messages: 413428 nosy: HosseinRanjbari, docs at python, larry, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Add a new feature to enumerate(iterable, start=0) built-in function type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 17 18:35:45 2022 From: report at bugs.python.org (Yilei Yang) Date: Thu, 17 Feb 2022 23:35:45 +0000 Subject: [New-bugs-announce] [issue46784] Duplicated symbols when linking embedded Python with libexpat Message-ID: <1645140945.97.0.0561003008604.issue46784@roundup.psfhosted.org> New submission from Yilei Yang : The libexpat 2.4.1 upgrade from https://bugs.python.org/issue44394 introduced the following new exported symbols: testingAccountingGetCountBytesDirect testingAccountingGetCountBytesIndirect unsignedCharToPrintable XML_SetBillionLaughsAttackProtectionActivationThreshold XML_SetBillionLaughsAttackProtectionMaximumAmplification We need to adjust Modules/expat/pyexpatns.h (The newer libexpat upgrade https://bugs.python.org/issue46400 has no new symbols). I'll send a PR. 
---------- components: XML messages: 413464 nosy: yilei priority: normal severity: normal status: open title: Duplicated symbols when linking embedded Python with libexpat versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 03:50:37 2022 From: report at bugs.python.org (Antony Lee) Date: Fri, 18 Feb 2022 08:50:37 +0000 Subject: [New-bugs-announce] [issue46785] On Windows, os.stat() can fail if called while another process is creating or deleting the file Message-ID: <1645174237.48.0.436213701589.issue46785@roundup.psfhosted.org> New submission from Antony Lee : In a first Python process, repeatedly create and delete a file:

from pathlib import Path
while True:
    Path("foo").touch(); Path("foo").unlink()

In another process, repeatedly check for the path's existence:

from pathlib import Path
while True:
    print(Path("foo").exists())

On Linux, the second process prints a random series of True and False. On Windows, it quickly fails after a few dozen iterations (likely machine-dependent) with

PermissionError: [WinError 5] Access is denied: 'foo'

which is actually raised by the stat() call. I would suggest that this is not really desirable behavior?
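Until the underlying Windows behavior changes, callers typically have to treat the transient PermissionError as "try again". This is a sketch of a workaround, not a proposed fix; stat_with_retry is a hypothetical helper:

```python
import os
import time

def stat_with_retry(path, attempts=5, delay=0.01):
    """Retry os.stat() when a concurrent create/delete on Windows makes it
    raise a transient PermissionError (WinError 5)."""
    for attempt in range(attempts):
        try:
            return os.stat(path)
        except PermissionError:
            if attempt == attempts - 1:
                raise                # still failing after all attempts
            time.sleep(delay)        # give the other process time to finish
```

On platforms where stat() succeeds immediately, the first attempt simply returns the result.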
---------- components: Windows messages: 413468 nosy: Antony.Lee, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: On Windows, os.stat() can fail if called while another process is creating or deleting the file versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 05:23:19 2022 From: report at bugs.python.org (jnns) Date: Fri, 18 Feb 2022 10:23:19 +0000 Subject: [New-bugs-announce] [issue46786] embed, source, track, wbr HTML elements not considered empty Message-ID: <1645179799.58.0.92424391054.issue46786@roundup.psfhosted.org> New submission from jnns : [According to the WHATWG][1], the elements `area`, `base`, `br`, `col`, `embed`, `hr`, `img`, `input`, `link`, `meta`, `param`, `source`, `track`, `wbr` are *void elements* that don't need and therefore shouldn't have a closing tag. The source view of Firefox 96 shows a warning about an unexpected closing tag [1]. In Python 3.10.2 `xml.etree` seems to correctly recognize most of them as such and doesn't generate closing tags when using the `.tostring()` method. A few elements are serialized with a closing tag (`` for example). ```python from xml.etree import ElementTree as etree void_elements = [ "area", "base","br", "col", "embed", "hr", "img", "input", "link", "meta", "param", "source", "track", "wbr" ] for el in void_elements: el = etree.Element(el) print(etree.tostring(el, method="html", encoding="unicode")) ``` ```html
<area>
<base>
<br>
<col>
<embed></embed>
<hr>
<img>
<input>
<link>
<meta>
<param>
<source></source>
<track></track>
<wbr></wbr>
```

HTML_EMPTY in Lib/xml/etree/ElementTree.py only contains the following entries:

    "area", "base", "basefont", "br", "col", "frame", "hr", "img",
    "input", "isindex", "link", "meta", "param"

I suppose "embed", "source", "track" and "wbr" should be added to that list.

[1]: https://html.spec.whatwg.org/multipage/syntax.html#void-elements
[2]: https://i.stack.imgur.com/rBTHw.png

---------- components: XML messages: 413473 nosy: jnns priority: normal severity: normal status: open title: embed, source, track, wbr HTML elements not considered empty type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 10:16:10 2022 From: report at bugs.python.org (Vladimir Vinogradenko) Date: Fri, 18 Feb 2022 15:16:10 +0000 Subject: [New-bugs-announce] [issue46787] ProcessPoolExecutor exception memory leak Message-ID: <1645197370.76.0.740159578707.issue46787@roundup.psfhosted.org> New submission from Vladimir Vinogradenko : If an exception occurs in a ProcessPoolExecutor work item, all the exception frame local variables are not garbage collected (or are garbage collected too late) because they are referenced by the exception's traceback. The attached file is a test case.
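The attached test case is not reproduced in this archive, but the mechanism described above can be sketched in a single process: as long as the exception object (and thus its __traceback__) stays referenced, the failing frame's locals stay alive too. Everything below is an illustrative stand-in, not the reporter's code:

```python
import gc
import weakref

class Payload:
    """Stands in for a large local variable in the failing work item."""

def work():
    data = Payload()          # big local in the frame that raises
    probe = weakref.ref(data)
    raise ValueError(probe)   # smuggle out a weakref so we can observe it

try:
    work()
except ValueError as exc:
    kept = exc                # keep the exception (and its traceback) alive
    probe = exc.args[0]

assert probe() is not None    # frame locals pinned by kept.__traceback__

del kept                      # drop the exception...
gc.collect()
assert probe() is None        # ...and the frame locals are finally freed
```

This is why clearing or shortening the traceback chain (as the proposed patch does) releases the memory promptly.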
With unpatched python 3.9 (debian bullseye) it prints:

    root at truenas[~/freenas/freenas]# python test.py
    At iteration 0 memory usage is 226070528
    At iteration 1 memory usage is 318763008
    At iteration 2 memory usage is 318509056
    At iteration 3 memory usage is 321662976
    At iteration 4 memory usage is 321404928
    At iteration 5 memory usage is 324292608
    At iteration 6 memory usage is 324296704
    At iteration 7 memory usage is 326922240
    At iteration 8 memory usage is 326922240
    At iteration 9 memory usage is 329543680

With the proposed patch there is no memory usage growth:

    At iteration 0 memory usage is 226410496
    At iteration 1 memory usage is 226451456
    At iteration 2 memory usage is 226451456
    At iteration 3 memory usage is 226443264
    At iteration 4 memory usage is 226443264
    At iteration 5 memory usage is 226435072
    At iteration 6 memory usage is 226426880
    At iteration 7 memory usage is 226426880
    At iteration 8 memory usage is 226435072
    At iteration 9 memory usage is 226426880

---------- components: Library (Lib) files: 1.py messages: 413485 nosy: themylogin priority: normal severity: normal status: open title: ProcessPoolExecutor exception memory leak type: resource usage versions: Python 3.9 Added file: https://bugs.python.org/file50628/1.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 11:36:09 2022 From: report at bugs.python.org (Jeremy Kloth) Date: Fri, 18 Feb 2022 16:36:09 +0000 Subject: [New-bugs-announce] [issue46788] regrtest fails to start on missing performance counter names Message-ID: <1645202169.41.0.134197812756.issue46788@roundup.psfhosted.org> New submission from Jeremy Kloth : When attempting to run the test harness, I receive the following: Traceback (most recent call last): File "", line 198, in _run_module_as_main File "", line 88, in _run_code File "C:\Public\Devel\cpython\main\Lib\test\__main__.py", line 2, in main() ^^^^^^ File
"C:\Public\Devel\cpython\main\Lib\test\libregrtest\main.py", line 736, in main Regrtest().main(tests=tests, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\contextlib.py", line 155, in __exit__ self.gen.throw(typ, value, traceback) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\contextlib.py", line 155, in __exit__ self.gen.throw(typ, value, traceback) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\support\os_helper.py", line 396, in temp_dir yield path ^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\contextlib.py", line 155, in __exit__ self.gen.throw(typ, value, traceback) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\support\os_helper.py", line 427, in change_cwd yield os.getcwd() ^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\support\os_helper.py", line 449, in temp_cwd yield cwd_dir ^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\main.py", line 658, in main self._main(tests, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\main.py", line 704, in _main self.win_load_tracker = WindowsLoadTracker() ^^^^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\win_utils.py", line 41, in __init__ self.start() ^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\win_utils.py", line 70, in start counter_name = self._get_counter_name() ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Public\Devel\cpython\main\Lib\test\libregrtest\win_utils.py", line 90, in _get_counter_name system = counters_dict['2'] ~~~~~~~~~~~~~^^^^^ KeyError: '2' This is due to my machine missing the localized names for the performance counters. Other performance monitoring tools operate just fine. While I have been working around this issue for some time, it has become difficult to separate the workarounds from actual changes in the test harness.
The PR (https://github.com/python/cpython/pull/26578) from https://bugs.python.org/issue44336 also solves this issue by accessing the counters directly instead of relying on their localized names. ---------- components: Tests, Windows messages: 413493 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: regrtest fails to start on missing performance counter names versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 12:00:25 2022 From: report at bugs.python.org (Jeremy Kloth) Date: Fri, 18 Feb 2022 17:00:25 +0000 Subject: [New-bugs-announce] [issue46789] Restore caching of externals on Windows buildbots Message-ID: <1645203625.57.0.7602685108.issue46789@roundup.psfhosted.org> New submission from Jeremy Kloth : A recent change to the buildmaster config effectively disabled the caching of the externals for Windows buildbots: https://github.com/python/buildmaster-config/pull/255 If the caching is desired, a simple change to the buildmaster config is needed (define EXTERNALS_DIR in the build environment). Or, to continue with fetching them each run, the buildbot scripts in Tools\buildbot can be simplified. Once a course of action is determined I can develop the requisite PR(s) in the appropriate tracker. 
---------- components: Build, Tests, Windows messages: 413494 nosy: jkloth, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Restore caching of externals on Windows buildbots _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 12:26:30 2022 From: report at bugs.python.org (Jeremy Kloth) Date: Fri, 18 Feb 2022 17:26:30 +0000 Subject: [New-bugs-announce] [issue46790] Normalize handling of negative timeouts in subprocess.py Message-ID: <1645205190.98.0.586198331999.issue46790@roundup.psfhosted.org> New submission from Jeremy Kloth : As a follow-on to bpo-46716, the various timeout parameters currently deal with negative values differently on POSIX and Windows. On POSIX, a negative value is treated the same as 0; completion is checked once and TimeoutExpired is raised if the process is still running. On Windows, the negative value is treated as unsigned and ultimately waits for ~49 days. While the Windows behavior is obviously wrong and will be fixed internally as part of bpo-46716, that still leaves what to do with timeouts coming from user-space. The current documentation just states that after `timeout` seconds TimeoutExpired is raised. A liberal reading of the documentation could lead one to believe any value <=0 would suffice for an "active" check (the POSIX behavior). Alternatively, the documentation could be amended so that negative values are invalid, with range checking applied in the user-facing functions.
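The POSIX-style "negative means poll once" semantics can be sketched with a defensive caller-side wrapper. The wrapper name and the clamping policy are illustrative only, not a real subprocess API or the proposed fix:

```python
import subprocess
import sys

def wait_checked(proc, timeout):
    # Clamp negatives to 0 so a negative timeout means "check now",
    # mirroring the POSIX behavior described above (illustrative only).
    if timeout is not None and timeout < 0:
        timeout = 0
    return proc.wait(timeout=timeout)

proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(10)"])
try:
    wait_checked(proc, -5)             # behaves like timeout=0
except subprocess.TimeoutExpired:
    print("still running")             # the child has not exited yet
finally:
    proc.kill()
    proc.wait()
```

With such a wrapper the behavior is identical on both platforms, which is the normalization the report asks the stdlib itself to provide.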
---------- components: Library (Lib) messages: 413496 nosy: jkloth priority: normal severity: normal status: open title: Normalize handling of negative timeouts in subprocess.py versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 12:55:13 2022 From: report at bugs.python.org (Dan Snider) Date: Fri, 18 Feb 2022 17:55:13 +0000 Subject: [New-bugs-announce] [issue46791] Allow os.remove to defer to rmdir Message-ID: <1645206913.87.0.447648839641.issue46791@roundup.psfhosted.org> New submission from Dan Snider : It appears sometime recently-ish that POSIX updated remove to the following: #include int remove(const char *path); If path does not name a directory, remove(path) shall be equivalent to unlink(path). If path names a directory, remove(path) shall be equivalent to rmdir(path). ---------- messages: 413499 nosy: bup priority: normal severity: normal status: open title: Allow os.remove to defer to rmdir type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 13:54:19 2022 From: report at bugs.python.org (Harshini) Date: Fri, 18 Feb 2022 18:54:19 +0000 Subject: [New-bugs-announce] [issue46792] Indentation not preserved with ruamel.yaml.round_trip_dump Message-ID: <1645210459.61.0.588712461043.issue46792@roundup.psfhosted.org> New submission from Harshini : I observed that the indentation is not preserved when I load and dump a YAML file using round_trip_dump from ruamel.yaml.
For example, I have an input file with the data:

    Inventory:
      - name: Apples
        count: 10
        vendors:
        - - vendor_name: xyz
            boxes: 2
      - name: Bananas
        number: 20
        vendors:
        - - vendor_name: abc
            boxes: 1
          - vendor_name: xyz
            boxes: 4

I wrote a simple script that just loads and dumps the same data into a new YAML file:

    import sys
    import os
    import ruamel.yaml

    yaml = ruamel.yaml.YAML()
    input_file = sys.argv[1]
    output_yaml = r"data_output.yaml"
    output = open(output_yaml, 'w+')
    with open(input_file, 'r') as f:
        data = yaml.load(f)
    ruamel.yaml.round_trip_dump(data, output)

I would expect the output yaml to be the same as the input yaml, but what I see is:

    Inventory:
    - name: Apples
      count: 10
      vendors:
      - - vendor_name: xyz
          boxes: 2
    - name: Bananas
      number: 20
      vendors:
      - - vendor_name: abc
          boxes: 1
        - vendor_name: xyz
          boxes: 4

It is missing the indentation under Inventory, as there should be a tab before "- name". ---------- components: Library (Lib) files: input.yaml messages: 413502 nosy: nvorugan priority: normal severity: normal status: open title: Indentation not preserved with ruamel.yaml.round_trip_dump type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file50629/input.yaml _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 16:27:31 2022 From: report at bugs.python.org (Gregory P. Smith) Date: Fri, 18 Feb 2022 21:27:31 +0000 Subject: [New-bugs-announce] [issue46793] expose expat XML billion laughs attack mitigation APIs Message-ID: <1645219651.82.0.456438689192.issue46793@roundup.psfhosted.org> New submission from Gregory P. Smith : Quoting from https://github.com/python/cpython/pull/31397#issuecomment-1044796561 """ XML_SetBillionLaughsAttackProtectionActivationThreshold XML_SetBillionLaughsAttackProtectionMaximumAmplification I still hope that someone can make those two^^ accessible (with additional glue code) to the user on pyexpat level in CPython.
""" - Sebastian Pipping @hartwork ---------- messages: 413513 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: expose expat XML billion laughs attack mitigation APIs type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 18 18:36:28 2022 From: report at bugs.python.org (sping) Date: Fri, 18 Feb 2022 23:36:28 +0000 Subject: [New-bugs-announce] [issue46794] Please update bundled libexpat to 2.4.5 with security fixes (5 CVEs) Message-ID: <1645227388.68.0.727345462152.issue46794@roundup.psfhosted.org> New submission from sping : Thank you! https://github.com/libexpat/libexpat/blob/97a4840578693a346e79302909b67d97492e1880/expat/Changes#L6-L35 ---------- components: XML messages: 413517 nosy: sping priority: normal severity: normal status: open title: Please update bundled libexpat to 2.4.5 with security fixes (5 CVEs) type: security versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 01:59:26 2022 From: report at bugs.python.org (zjmxq) Date: Sat, 19 Feb 2022 06:59:26 +0000 Subject: [New-bugs-announce] [issue46795] Why does 3rd/Python39/Lib/site-packages/psycopg2/_psycopg.cp39-win_amd64.pyd have the CVE-201-4160 vulnerability when I use Python 3.9.2 Message-ID: <1645253966.69.0.935118725759.issue46795@roundup.psfhosted.org> Change by zjmxq : ---------- components: Library (Lib) nosy: zjmxq priority: normal severity: normal status: open title: Why does 3rd/Python39/Lib/site-packages/psycopg2/_psycopg.cp39-win_amd64.pyd have the CVE-201-4160 vulnerability when I use Python 3.9.2 type: security versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 02:56:05 
2022 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 19 Feb 2022 07:56:05 +0000 Subject: [New-bugs-announce] [issue46796] Simplify handling of removed parameter "loop" in asyncio Message-ID: <1645257365.33.0.518204757476.issue46796@roundup.psfhosted.org> New submission from Serhiy Storchaka : Before 3.10, many asyncio classes had an optional parameter "loop". It was deprecated in 3.8. To simplify the code, such classes inherited a constructor from _LoopBoundMixin which set the _loop attribute and (since 3.8) emitted a warning if the loop argument was passed. Since 3.10 the _LoopBoundMixin no longer sets the _loop attribute and always raises a TypeError if the loop argument is passed. The same effect can be achieved by simply removing the loop parameter (and the _LoopBoundMixin constructor, as it will do nothing). The only difference is in the error message: it will be the standard "Lock.__init__() got an unexpected keyword argument 'loop'" instead of "As of 3.10, the *loop* parameter was removed from Lock() since it is no longer necessary". Usually we do not keep specialized error messages for removed parameters.
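The effect of dropping the parameter entirely can be sketched outside asyncio; the toy Lock below is illustrative and not the real asyncio.Lock:

```python
# Sketch: once "loop" is removed from the signature entirely, Python's
# normal argument processing raises the TypeError by itself, so no
# _LoopBoundMixin check is needed.
class Lock:
    def __init__(self):          # no loop=... parameter at all
        self._locked = False

try:
    Lock(loop=None)
except TypeError as exc:
    message = str(exc)

print(message)
```

The printed message is the standard "got an unexpected keyword argument 'loop'" form mentioned above, produced with no extra code at all.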
---------- components: asyncio messages: 413539 nosy: asvetlov, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Simplify handling of removed parameter "loop" in asyncio versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 03:29:16 2022 From: report at bugs.python.org (Jakub Wilk) Date: Sat, 19 Feb 2022 08:29:16 +0000 Subject: [New-bugs-announce] [issue46797] ast.Constant.n deprecated without warning Message-ID: <1645259356.98.0.34127429103.issue46797@roundup.psfhosted.org> New submission from Jakub Wilk : ast.Constant.n is documented to be deprecated, but you don't get any warning when you use it:

    $ python3.11 -Wd
    Python 3.11.0a5 (main, Feb 12 2022, 17:11:59) [GCC 9.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import ast
    >>> help(ast.Constant.n)
    Help on property:

        Deprecated. Use value instead.

    >>> ast.Constant(value=42).n
    42

---------- components: Library (Lib) messages: 413541 nosy: jwilk, serhiy.storchaka priority: normal severity: normal status: open title: ast.Constant.n deprecated without warning versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 06:31:31 2022 From: report at bugs.python.org (padremayi) Date: Sat, 19 Feb 2022 11:31:31 +0000 Subject: [New-bugs-announce] [issue46798] xml.etree.ElementTree: get() doesn't return default value, always ATTLIST value Message-ID: <1645270291.57.0.995603390679.issue46798@roundup.psfhosted.org> New submission from padremayi : XML test file: ]>
This is a simple object 2022 Myself
Python code:

    import xml.etree.ElementTree

    try:
        xml_data = xml.etree.ElementTree.iterparse("test.xml", events=("start", "end"))
        for event, xml_tag in xml_data:
            if event == "end" and xml_tag.tag == "object":
                object_name = xml_tag.get("name")
                object_description = xml_tag.find("description").text
                works = xml_tag.get("works", default="foo")
                print("works value: " + str(works))
                xml_tag.clear()
        print("Done!")
    except (NameError, xml.etree.ElementTree.ParseError):
        print("XML error!")

Output:

    works value: yes
    Done!

Expected behaviour:

    works value: foo
    Done!

---------- components: XML messages: 413543 nosy: padremayi priority: normal severity: normal status: open title: xml.etree.ElementTree: get() doesn't return default value, always ATTLIST value type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 07:02:30 2022 From: report at bugs.python.org (Ting-Che Lin) Date: Sat, 19 Feb 2022 12:02:30 +0000 Subject: [New-bugs-announce] [issue46799] ShareableList memory bloat and performance improvement Message-ID: <1645272150.7.0.530571625794.issue46799@roundup.psfhosted.org> New submission from Ting-Che Lin : The current implementation of ShareableList keeps an unnecessary list of offsets in self._allocated_offsets. This list could have a large memory footprint if the number of items in the list is high. Additionally, this list will be copied in each process that needs access to the ShareableList, sometimes negating the benefit of the shared memory. Furthermore, in the current implementation, different metadata is kept at different sections of shared memory, requiring multiple struct.unpack_from calls for a __getitem__ call. I have attached a prototype that merges the allocated offsets and packing format into a single section in the shared memory. This allows us to use a single struct.unpack_from operation to obtain both the allocated offset and the packing format.
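That combined layout can be sketched with the struct module. This is a hypothetical illustration of the idea, not the attached prototype (which is not reproduced in this archive):

```python
import struct

# Hypothetical combined metadata record: each item's byte offset and its
# packing-format field live side by side, so one unpack_from call at
# __getitem__ time recovers both.
META = struct.Struct("q8s")            # (offset, fixed-width format field)

def write_meta(buf, index, offset, fmt):
    # Store the metadata record for item `index`.
    META.pack_into(buf, index * META.size, offset, fmt)

def read_meta(buf, index):
    # One unpack yields both the offset and the (null-padded) format.
    offset, fmt = META.unpack_from(buf, index * META.size)
    return offset, fmt.rstrip(b"\x00")

buf = bytearray(META.size * 2)
write_meta(buf, 0, 0, b"q")            # item 0: int64 at offset 0
write_meta(buf, 1, 8, b"d")            # item 1: double at offset 8

assert read_meta(buf, 1) == (8, b"d")
```

In a real ShareableList the bytearray would instead be a slice of the SharedMemory buffer, so every process reads the same records without holding its own Python-level offset list.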
By removing the self._allocated_offset list and reducing the number of struct.unpack_from operations, we can drastically reduce the memory usage and increase the reading performance by 10%. In the case where there are only integers in the ShareableList, we can reduce the memory usage by half. The attached implementation also fixes the issue https://bugs.python.org/issue44170 that causes errors when reading some Unicode characters. I am happy to adapt this implementation into a proper bugfix/patch if it is deemed reasonable. ---------- components: Library (Lib) files: shareable_list.py messages: 413544 nosy: davin, pitrou, tcl326 priority: normal severity: normal status: open title: ShareableList memory bloat and performance improvement type: performance versions: Python 3.10, Python 3.11, Python 3.9 Added file: https://bugs.python.org/file50632/shareable_list.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 12:24:27 2022 From: report at bugs.python.org (Philip Rowlands) Date: Sat, 19 Feb 2022 17:24:27 +0000 Subject: [New-bugs-announce] [issue46800] Support for pause(2) Message-ID: <1645291467.44.0.659055078048.issue46800@roundup.psfhosted.org> New submission from Philip Rowlands : Went looking for os.pause() but found nothing in the docs, bpo, or Google. https://man7.org/linux/man-pages/man2/pause.2.html Obviously not a popular syscall, but I have a use case for it.
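For reference, the syscall is already reachable from Python on POSIX, just under the signal module rather than os: signal.pause() wraps pause(2). A minimal sketch of waiting for a signal with it (POSIX-only; signal.pause and signal.alarm are unavailable on Windows):

```python
import signal

# Arrange for the kernel to deliver SIGALRM in one second, then block in
# pause(2) until a handled signal arrives.
received = []
signal.signal(signal.SIGALRM, lambda signum, frame: received.append(signum))
signal.alarm(1)      # schedule a SIGALRM
signal.pause()       # sleep until any signal with a handler is delivered
assert received == [signal.SIGALRM]
```

An os.pause() alias would mainly be a discoverability improvement over this existing spelling.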
---------- components: Library (Lib) messages: 413554 nosy: philiprowlands priority: normal severity: normal status: open title: Support for pause(2) type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 14:36:56 2022 From: report at bugs.python.org (Irit Katriel) Date: Sat, 19 Feb 2022 19:36:56 +0000 Subject: [New-bugs-announce] [issue46801] test_typing emits deprecation warnings Message-ID: <1645299416.66.0.862941946832.issue46801@roundup.psfhosted.org> New submission from Irit Katriel : ====================================================================== ERROR: test_typeddict_create_errors (test.test_typing.TypedDictTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/iritkatriel/src/cpython-1/Lib/test/test_typing.py", line 4589, in test_typeddict_create_errors TypedDict('Emp', _fields={'name': str, 'id': int}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/iritkatriel/src/cpython-1/Lib/typing.py", line 2609, in TypedDict warnings.warn( ^^^^^^^^^^^^^^ DeprecationWarning: The kwargs-based syntax for TypedDict definitions is deprecated in Python 3.11, will be removed in Python 3.13, and may not be understood by third-party type checkers. 
====================================================================== ERROR: test_typeddict_errors (test.test_typing.TypedDictTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/iritkatriel/src/cpython-1/Lib/test/test_typing.py", line 4602, in test_typeddict_errors TypedDict('Hi', x=1) ^^^^^^^^^^^^^^^^^^^^ File "/Users/iritkatriel/src/cpython-1/Lib/typing.py", line 2609, in TypedDict warnings.warn( ^^^^^^^^^^^^^^ DeprecationWarning: The kwargs-based syntax for TypedDict definitions is deprecated in Python 3.11, will be removed in Python 3.13, and may not be understood by third-party type checkers. ---------- components: Tests messages: 413558 nosy: iritkatriel priority: normal severity: normal status: open title: test_typing emits deprecation warnings versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 19 18:06:15 2022 From: report at bugs.python.org (Jonathan) Date: Sat, 19 Feb 2022 23:06:15 +0000 Subject: [New-bugs-announce] [issue46802] Wrong result unpacking binary data with ctypes bitfield. Message-ID: <1645311975.58.0.202718468445.issue46802@roundup.psfhosted.org> New submission from Jonathan : I have issues unpacking binary data, produced by C++. The appended jupyter notebook shows the problem. It is also uploaded to github gist: https://gist.github.com/helo9/04125ae67b493e505d5dce4b254a2ccc ---------- components: ctypes files: ctypes_bitfield_problem.ipynb messages: 413559 nosy: helo9 priority: normal severity: normal status: open title: Wrong result unpacking binary data with ctypes bitfield. 
type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file50633/ctypes_bitfield_problem.ipynb _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 01:30:30 2022 From: report at bugs.python.org (Jason Yang) Date: Sun, 20 Feb 2022 06:30:30 +0000 Subject: [New-bugs-announce] [issue46803] Item not shown when using mouse wheel to scroll for Listbox/Combobox Message-ID: <1645338630.05.0.344258150366.issue46803@roundup.psfhosted.org> New submission from Jason Yang : When items are scrolled by mouse wheel in a tk.Listbox/ttk.Combobox, some items are not shown. Is it a bug, or did I do something wrong? In the following case, 'Wednesday' will not be shown when scrolling the mouse wheel at

- tk.Listbox or vertical scrollbar of tk.Listbox, or
- listbox of ttk.Combobox

```python
from tkinter import *
from tkinter import ttk

font = ('Courier New', 24)
lst = ('Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday')

root = Tk()

frame1 = Frame(root)
frame1.pack(side=LEFT)
vsb1 = Scrollbar(frame1, orient='v')
vsb1.pack(side=RIGHT, fill='y')
var = StringVar()
var.set(lst)
listbox = Listbox(frame1, width=10, height=3, listvariable=var, font=font, yscrollcommand=vsb1.set)
listbox.pack(side=LEFT)
vsb1.configure(command=listbox.yview)

frame2 = Frame(root)
frame2.pack(side=LEFT, fill='y')
combobox = ttk.Combobox(frame2, values=lst, width=10, height=3, font=font)
combobox.pack()

root.mainloop()
```

Platform: WIN10 ---------- components: Tkinter files: PeS9r.png messages: 413564 nosy: Jason990420 priority: normal severity: normal status: open title: Item not shown when using mouse wheel to scroll for Listbox/Combobox type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50634/PeS9r.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 05:27:09 2022 From: report at
bugs.python.org (=?utf-8?q?Alex_Gr=C3=B6nholm?=) Date: Sun, 20 Feb 2022 10:27:09 +0000 Subject: [New-bugs-announce] [issue46805] Add low level UDP socket functions to asyncio Message-ID: <1645352829.56.0.926800544018.issue46805@roundup.psfhosted.org> New submission from Alex Grönholm : The asyncio module currently has a number of low-level functions for working asynchronously with raw socket objects. Such functions for working with UDP sockets are, however, notably absent, and there is no workaround for this. You can of course use sock_recv() with UDP sockets, but that would discard the sender address, which is a showstopper problem. Also, having a send function that applies back pressure to the sender if the kernel buffer is full would be prudent. I will provide a PR if you're okay with this. It would include the following functions: * sock_sendto() * sock_recvfrom() * sock_recvfrom_into() ---------- components: asyncio messages: 413579 nosy: alex.gronholm, asvetlov, yselivanov priority: normal severity: normal status: open title: Add low level UDP socket functions to asyncio type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 07:34:47 2022 From: report at bugs.python.org (aklajnert) Date: Sun, 20 Feb 2022 12:34:47 +0000 Subject: [New-bugs-announce] [issue46806] Overlapping PYTHONPATH may cause Message-ID: <1645360487.42.0.278270431314.issue46806@roundup.psfhosted.org> New submission from aklajnert : I'm not 100% sure whether it is a bug or intentional behavior, but it looks like a bug to me. I couldn't find anything about it here or anywhere else. Sample project structure:

```
.
├── main.py
└── src
    ├── __init__.py
    ├── common_object.py
    ├── user_1.py
    ├── user_2.py
    └── user_3.py
```

`__init__.py` is an empty file.
```
# src/common_object.py
OBJECT = object()
```

```
# src/user_1.py
from .common_object import OBJECT
```

```
# src/user_2.py
from src.common_object import OBJECT
```

```
# src/user_3.py
from common_object import OBJECT
```

```
# main.py
import sys

sys.path.append("src")

from src import user_1, user_2, user_3

if __name__ == '__main__':
    print(user_1.OBJECT is user_2.OBJECT)  # True
    print(user_1.OBJECT is user_3.OBJECT)  # False
```

Since the `src` package is added to `PYTHONPATH`, it is possible to import `common_object` by calling `from src.common_object` or `from common_object`. Both methods work, but using the import without `src.` makes Python load the same module again instead of using the already loaded one. If you extend `main.py` with the following code, you'll see a bit more:

```
modules = [
    module for name, module in sys.modules.items()
    if "common_object" in name
]
print(len(modules))  # 2
print(modules[0].__file__ == modules[1].__file__)  # True
```

In the `sys.modules` dict there will be two separate modules - one called `common_object` and another named `src.common_object`. If you compare the `__file__` value for both modules you'll see that they are the same. It seems that Python gets the module name wrong.
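The described double import can also be reproduced in a single standalone sketch; the temporary directory layout below mirrors the project structure and is illustrative only:

```python
# When a package directory is on sys.path both directly and via its
# parent, one file on disk imports under two names, giving two distinct
# module objects.
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "src")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "common_object.py"), "w") as f:
    f.write("OBJECT = object()\n")

sys.path[:0] = [tmp, pkg]                  # parent dir and the package dir

a = importlib.import_module("src.common_object")
b = importlib.import_module("common_object")

assert a.__file__ == b.__file__            # one file on disk...
assert a.OBJECT is not b.OBJECT            # ...two loaded module objects
```

The import system caches modules by name, not by file, so `src.common_object` and `common_object` are simply different cache keys that happen to resolve to the same file.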
---------- messages: 413584 nosy: aklajnert priority: normal severity: normal status: open title: Overlapping PYTHONPATH may cause type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 08:05:31 2022 From: report at bugs.python.org (Konstantin) Date: Sun, 20 Feb 2022 13:05:31 +0000 Subject: [New-bugs-announce] [issue46807] Wrong class __annotations__ when field name and type are equal Message-ID: <1645362331.92.0.497127869678.issue46807@roundup.psfhosted.org> New submission from Konstantin :

    In [18]: class Str(str):
        ...:     pass

    In [19]: class Class:
        ...:     Str: str
        ...:
        ...: Class.__annotations__
    Out[19]: {'Str': str}

    In [20]: class Class:
        ...:     Str: str = ""
        ...:
        ...: Class.__annotations__
    Out[20]: {'Str': str}

    In [21]: class Class:
        ...:     Str: Str = ""
        ...:
        ...: Class.__annotations__  # Wrong!
    Out[21]: {'Str': ''}

    In [22]: class Class:
        ...:     Str: Str
        ...:
        ...: Class.__annotations__
    Out[22]: {'Str': __main__.Str}

It reproduces on all the versions which support annotations as part of the core (I tested Python 3.6..3.10.2) ---------- components: Parser messages: 413586 nosy: kgubaev, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Wrong class __annotations__ when field name and type are equal type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 09:48:48 2022 From: report at bugs.python.org (Irit Katriel) Date: Sun, 20 Feb 2022 14:48:48 +0000 Subject: [New-bugs-announce] [issue46808] remove NEXT_BLOCK() from compile.c Message-ID: <1645368528.16.0.363497063609.issue46808@roundup.psfhosted.org> New submission from Irit Katriel : The compiler currently requires the code-generation functions to explicitly specify where basic blocks end, with a NEXT_BLOCK().
If you get that wrong, you get an exception about "malformed control flow graph" later, in the cfg analysis stage. It is not obvious then where the error is, and it makes it difficult to make changes in the compiler. We can instead make the compiler implicitly create a new block when this is needed (which is after specific opcodes). ---------- assignee: iritkatriel components: Interpreter Core messages: 413589 nosy: iritkatriel priority: normal severity: normal status: open title: remove NEXT_BLOCK() from compile.c type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 10:57:04 2022 From: report at bugs.python.org (Royce Mitchell) Date: Sun, 20 Feb 2022 15:57:04 +0000 Subject: [New-bugs-announce] [issue46809] copy.deepcopy can fail with unhelpful diagnostics Message-ID: <1645372624.51.0.480394370702.issue46809@roundup.psfhosted.org> New submission from Royce Mitchell : Dear devs, I have a small change request to make to a built-in Python file. I'm currently running Python 3.9.5. The file is copy.py. I would like to propose changing line 264 (in _reconstruct) from this:

    y = func(*args)

to something like this:

    try:
        y = func(*args)
    except TypeError as e:
        raise TypeError(
            f'calling {func.__module__}.{func.__qualname__}: {e.args[0]}',
            *e.args[1:]
        ).with_traceback(e.__traceback__) from None

All the change does is inject the module and qualified name of the function trying to be created onto the front-end of the error. It makes this:

    TypeError: __init__() missing 1 required positional argument: 'delta'

into this:

    TypeError: calling datetime.datetime: calling mytz.Tzoffset: __init__() missing 1 required positional argument: 'delta'

Here's a summary of the situation that led to this difficulty: I have a project that is a couple years old and I'm no longer intimately aware of every single thing the program is doing.
I went to make some enhancements and noticed the unit tests hadn't been touched since early in the project and decided I wanted to start using it. I got stuck trying to prettyprint an object and getting a TypeError from the line above because it was trying to call a function but was missing a required argument. The traceback was unhelpful because I didn't know what object it was trying to copy, which was very complicated with lots of data and sub-objects. It turns out that a dataclass (named TransDetail) I was trying to prettyprint had a list of another dataclass (named Billing) which had a datetime.datetime object with a custom tzinfo object that I had never tried to deepcopy before. (The custom tzinfo object was adapted from examples on StackOverflow) Trying to google the issue, I found many others experiencing the same problem. The fix was to define a default datetime.timedelta value for that custom tzinfo object, but I had to make the changes to copy.py in order to efficiently figure out that this was the problem. ---------- components: Library (Lib) messages: 413594 nosy: remdragon priority: normal severity: normal status: open title: copy.deepcopy can fail with unhelpful diagnostics type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 12:36:46 2022 From: report at bugs.python.org (Michael Hupfer) Date: Sun, 20 Feb 2022 17:36:46 +0000 Subject: [New-bugs-announce] [issue46810] multiprocessing.connection.Client doesn't support ipv6 Message-ID: <1645378606.64.0.848284398217.issue46810@roundup.psfhosted.org> New submission from Michael Hupfer : Hi there, connecting a multiprocessing.connection.Client to an ipv6 address is not possible, since the address family is not passed into the constructor of class SocketClient. The constructor determines the family by calling address_type(address), which never returns AF_INET6. 
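The classification described above can be observed directly. This sketch pokes at the `address_type()` helper named in the report (a module-internal function, so this is illustrative rather than supported API): every `(host, port)` tuple is classified as `AF_INET`, so an IPv6 literal never selects `AF_INET6`.

```python
from multiprocessing import connection

# address_type() classifies any (host, port) tuple as 'AF_INET',
# regardless of whether the host is an IPv6 literal.
print(connection.address_type(("127.0.0.1", 6000)))  # 'AF_INET'
print(connection.address_type(("::1", 6000)))        # 'AF_INET', not AF_INET6
```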
The class SocketListener already implements IPv6 support. kind regards ---------- messages: 413599 nosy: mhupfer priority: normal severity: normal status: open title: multiprocessing.connection.Client doesn't support ipv6 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 14:54:05 2022 From: report at bugs.python.org (sping) Date: Sun, 20 Feb 2022 19:54:05 +0000 Subject: [New-bugs-announce] [issue46811] Test suite needs adjustments for Expat >=2.4.5 Message-ID: <1645386845.86.0.727249777809.issue46811@roundup.psfhosted.org> New submission from sping : It has been reported at https://bugs.python.org/issue46794#msg413587 that the current CPython test suite needs some adjustments for Expat >=2.4.5. Since that is somewhat separate from updating the bundled copy of Expat to >=2.4.6, I am creating this dedicated ticket. A pull request for discussion will follow shortly. ---------- components: XML messages: 413605 nosy: mgorny, sping priority: normal severity: normal status: open title: Test suite needs adjustments for Expat >=2.4.5 versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 20 21:47:19 2022 From: report at bugs.python.org (Mark Gordon) Date: Mon, 21 Feb 2022 02:47:19 +0000 Subject: [New-bugs-announce] [issue46812] Thread starvation with threading.Condition Message-ID: <1645411639.57.0.97137643572.issue46812@roundup.psfhosted.org> New submission from Mark Gordon : When using Condition variables to manage access to shared resources you can run into starvation issues due to the thread that just gave up a resource (making a call to notify/notify_all) having priority on immediately reacquiring that resource before any of the waiting threads get a chance.
The issue appears to arise because, unlike the Lock implementation, Condition variables are implemented partly in Python and a thread must hold the GIL when it reacquires its underlying condition variable lock. Coupled with Python's predictable switch interval, this means that if a thread notifies others of a resource being available and then shortly after attempts to reacquire that resource, it will be able to do so since it will have held the GIL the entire time. This can lead to some threads being entirely starved (forever) for access to a shared resource. This came up in a real world situation for me when I had multiple threads trying to access a shared database connection repeatedly without blocking between accesses. Some threads were never getting a connection, leading to unexpected timeouts. See https://github.com/sqlalchemy/sqlalchemy/issues/7679 Here's a simple example of this issue using the queue.Queue implementation: https://gist.github.com/msg555/36a10bb5a0c0fe8c89c89d8c05d00e21 Similar example just using Condition variables directly: https://gist.github.com/msg555/dd491078cf10dbabbe7b1cd142644910 Analogous C++ implementation. On Linux 5.13 this is still not _that_ fair but does not completely starve threads: https://gist.github.com/msg555/14d8029b910704a42d372004d3afa465 Thoughts:
- Is this something that's worth fixing? The behavior at the very least is surprising and I was unable to find discussion or documentation of it.
- Can Condition variables be implemented using standard C libraries? (e.g. pthreads) Maybe at least this can happen when using the standard threading.Lock as the Condition variable's lock?
- I mocked up a fair Condition variable implementation at https://github.com/msg555/fairsync/blob/main/fairsync/condition.py. However fairness comes at its own overhead of additional context switching.
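The underlying mechanism (notify() does not hand the lock over to a waiter) can be seen in a small deterministic sketch, separate from the reporter's benchmarks: the notifying thread keeps running under the lock after notify(), and the waiter only proceeds once the notifier releases it.

```python
import threading

cond = threading.Condition()
ready = threading.Event()
order = []

def waiter():
    with cond:
        ready.set()          # signal that we are about to wait
        cond.wait()          # releases the lock while waiting
        order.append("waiter")

t = threading.Thread(target=waiter)
t.start()
ready.wait()                 # waiter holds cond until it enters wait()
with cond:                   # acquire succeeds once waiter is blocked in wait()
    cond.notify()
    order.append("notifier") # notifier keeps working while holding the lock
t.join()
print(order)                 # ['notifier', 'waiter']
```

In a loop where the notifier immediately re-enters `with cond:` to grab the resource again, this head start plus the GIL switch-interval behavior described above is what produces the starvation.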
Tested on Python 3.7-3.10 ---------- components: Library (Lib) messages: 413629 nosy: msg555 priority: normal severity: normal status: open title: Thread starvation with threading.Condition type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 02:32:59 2022 From: report at bugs.python.org (penguin_wwy) Date: Mon, 21 Feb 2022 07:32:59 +0000 Subject: [New-bugs-announce] [issue46813] Allow developer to resize the dictionary Message-ID: <1645428779.91.0.472362648196.issue46813@roundup.psfhosted.org> New submission from penguin_wwy <940375606 at qq.com>: https://github.com/faster-cpython/ideas/discussions/288 ---------- components: Interpreter Core messages: 413634 nosy: penguin_wwy priority: normal severity: normal status: open title: Allow developer to resize the dictionary type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 03:25:45 2022 From: report at bugs.python.org (Josh A. Mitchell) Date: Mon, 21 Feb 2022 08:25:45 +0000 Subject: [New-bugs-announce] [issue46814] Documentation for constructin abstract base classes is misleading Message-ID: <1645431945.33.0.392126816255.issue46814@roundup.psfhosted.org> New submission from Josh A. Mitchell : The docs for the abc[0] module state "With this class, an abstract base class can be created by simply deriving from ABC", and then give an example of a class with no contents. This is not sufficient to construct an ABC; an ABC in Python additionally requires at least one abstract method. This can be demonstrated by executing the example code and instantiating it (ABCs cannot be instantiated) or calling the inspect.isabstract() function on it (returns False).
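The claim is easy to verify with a quick sketch: deriving from ABC alone does not prevent instantiation; only the presence of at least one abstract method does.

```python
import abc
import inspect

class Empty(abc.ABC):          # derives from ABC, but no abstract methods
    pass

Empty()                        # instantiates fine: not actually "abstract"
assert not inspect.isabstract(Empty)

class Proper(abc.ABC):
    @abc.abstractmethod
    def method(self): ...

assert inspect.isabstract(Proper)
try:
    Proper()                   # TypeError: can't instantiate abstract class
except TypeError:
    print("cannot instantiate Proper")
```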
The requirement is also (cryptically) explicated in the Implementation paragraph of the "The abc Module: an ABC Support Framework" section of PEP 3119[1]. This requirement of implementing an abstract method is not mentioned in the docs for the abc module or in the module's docstrings. An ABC with no abstract methods is sometimes used to mark a parent class that is not intended to be instantiated on its own, so this limitation of the Python implementation should be documented. [0] https://docs.python.org/3.11/library/abc.html [1] https://www.python.org/dev/peps/pep-3119/#the-abc-module-an-abc-support-framework ---------- assignee: docs at python components: Documentation messages: 413639 nosy: Yoshanuikabundi, docs at python priority: normal severity: normal status: open title: Documentation for constructin abstract base classes is misleading versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 05:22:11 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Mon, 21 Feb 2022 10:22:11 +0000 Subject: [New-bugs-announce] [issue46815] Extra `DeprecationWarning` when running `lib2to3` tests Message-ID: <1645438931.34.0.27153420052.issue46815@roundup.psfhosted.org> New submission from Nikita Sobolev : I first noticed it in the buildbot logs: ``` 0:24:42 load avg: 3.87 [430/431/1] test_lib2to3 passed (1 min 38 sec) :2: DeprecationWarning: lib2to3 package is deprecated and may not be able to parse Python 3.10+ ``` But, it also happens locally: ``` ? ./python.exe Lib/test/test_lib2to3.py /Users/sobolev/Desktop/cpython/Lib/unittest/loader.py:350: DeprecationWarning: lib2to3 package is deprecated and may not be able to parse Python 3.10+ __import__(name) Refactor file: /Users/sobolev/Desktop/cpython/Lib/lib2to3/refactor.py ``` After my patch it is gone: ``` ? 
./python.exe Lib/test/test_lib2to3.py
Refactor file: /Users/sobolev/Desktop/cpython/Lib/lib2to3/refactor.py
``` ---------- components: Tests messages: 413643 nosy: Jelle Zijlstra, benjamin.peterson, lukasz.langa, sobolevn priority: normal severity: normal status: open title: Extra `DeprecationWarning` when running `lib2to3` tests type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 05:48:39 2022 From: report at bugs.python.org (Oleg Iarygin) Date: Mon, 21 Feb 2022 10:48:39 +0000 Subject: [New-bugs-announce] [issue46816] Remove declarations for non-__STDC__ compilers Message-ID: <1645440519.15.0.480736088206.issue46816@roundup.psfhosted.org> New submission from Oleg Iarygin : Currently, Python code contains two places where the presence of __STDC__ is checked:
- Include/internal/pycore_pymath.h:12
- Python/errors.c:13
These checks are used to add extern functions missing in non-standard versions of math.h. However, after Python switched to C99, there is a guarantee that every compiler conforms to ISO C, so checks of __STDC__ no longer make sense. Note that:
- the errors.c check was added by 53e8d44 on 9 Mar 1995
- the pycore_pymath.h check was initially added into Objects/floatobject.c by eddc144 on 20 Nov 2003, then moved to pycore_pymath by 53876d9.
---------- components: Interpreter Core messages: 413645 nosy: arhadthedev priority: normal severity: normal status: open title: Remove declarations for non-__STDC__ compilers type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 07:59:12 2022 From: report at bugs.python.org (Mark Shannon) Date: Mon, 21 Feb 2022 12:59:12 +0000 Subject: [New-bugs-announce] [issue46817] Add a line-start table to the code object.
Message-ID: <1645448352.85.0.240103559175.issue46817@roundup.psfhosted.org> New submission from Mark Shannon : Computing whether an instruction is the first on a line (for tracing) in the interpreter is complicated and slow. Doing it in the compiler should be simpler and has no runtime cost. Currently we decide if the current instruction is the first on a line by using the `co_lines` table, but if the previous instruction executed was a jump from the same line over a block of instructions with different line number(s), then we can get this wrong. This doesn't seem to be a problem now, but could become one with either specialization of FOR_ITER inlining generators, or compiler improvements leading to different code layout. The table is only one bit per instruction, so shouldn't be a problem in terms of space. ---------- components: Interpreter Core messages: 413651 nosy: Mark.Shannon, iritkatriel priority: normal severity: normal status: open title: Add a line-start table to the code object. type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 08:03:49 2022 From: report at bugs.python.org (Kristiyan) Date: Mon, 21 Feb 2022 13:03:49 +0000 Subject: [New-bugs-announce] [issue46818] Proper way to inherit from collections.abc.Coroutine Message-ID: <1645448629.24.0.443450493338.issue46818@roundup.psfhosted.org> New submission from Kristiyan : Hello, For the last several days I've been trying to implement an async "opener" object that can be used as a Coroutine as well as an AsyncContextManager (eg. work with `await obj.open()` and `async with obj.open()`). I've researched several implementations from various python packages such as: 1. aiofiles: https://github.com/Tinche/aiofiles/blob/master/src/aiofiles/base.py#L28 2.
aiohttp: https://github.com/aio-libs/aiohttp/blob/master/aiohttp/client.py#L1082 Unlike these libs though, I want my implementation to return a custom object that is a wrapper around the object returned from the underlying module I'm hiding. Example: I want to implement a DataFeeder interface that has a single method `open()`. Sub-classes of this interface will support, for example, opening a file using the aiofiles package. So, AsyncFileDataFeeder.open() will call `aiofiles.open()`, but instead of returning the "file-handle" from aiofiles, I want to return a custom Feed class that implements some more methods for reading -- for example:

    async with async_data_feeder.open() as feed:
        async for chunk in feed.iter_chunked():
            ...

To support that I'm returning an instance of the following class from DataFeeder.open():

    class ContextOpener(
        Coroutine[Any, Any, Feed],
        AbstractAsyncContextManager[Feed],
    ):
        __slots__ = ("_wrapped_coro", "_feed_cls", "_feed")

        def __init__(self, opener_coro: Coroutine, feed_cls: Type[Feed]):
            self._wrapped_coro = opener_coro
            self._feed_cls = feed_cls
            self._feed: Any = None

        def __await__(self) -> Generator[Any, Any, Feed]:
            print("in await", locals())
            handle = yield from self._wrapped_coro.__await__()
            return self._feed_cls(handle)

        def send(self, value: Any) -> Any:
            print("in send", locals())
            return self._wrapped_coro.send(value)

        def throw(self, *args, **kwargs) -> Any:
            print("in throw", locals())
            return self._wrapped_coro.throw(*args, **kwargs)

        def close(self) -> None:
            print("in close", locals())
            self._wrapped_coro.close()

        async def __aenter__(self) -> feeds.Feed:
            handle = await self._wrapped_coro
            self._feed = self._feed_cls(handle)
            return self._feed

        async def __aexit__(
            self,
            exc_type: Optional[Type[BaseException]],
            exc: Optional[BaseException],
            tb: Optional[TracebackType],
        ) -> None:
            await self._feed.close()
            self._feed = None

This code actually works!
But I've noticed that when calling `await DataFeeder.open()` the event loop never calls my `send()` method.

    if __name__ == "__main__":

        async def open_test():
            await asyncio.sleep(1)
            return 1

        async def main():
            c = ContextOpener(open_test(), feeds.AsyncFileFeed)
            ret = await c
            print("Finish:", ret, ret._handle)

The output:

    in await {'self': <__main__.ContextOpener object at 0x11099cd10>}
    Finish: 1

From then on, a great deal of thinking and reading on the Internet happened, trying to explain to myself how exactly coroutines work. I suspect that ContextOpener.__await__ is returning a generator instance and from then on, outer coroutines (eg. main in this case) are calling send()/throw()/close() on THAT generator, not on the ContextOpener "coroutine". The only way to make Python call the ContextOpener send() method (and friends) is when ContextOpener is the outermost coroutine that is communicating directly with the event loop:

    ret = asyncio.run(ContextOpener(open_test(), feeds.AsyncFileFeed))
    print("Finish:", ret)

Output:

    in send {'self': <__main__.ContextOpener object at 0x10dcf47c0>, 'value': None}
    in send {'self': <__main__.ContextOpener object at 0x10dcf47c0>, 'value': None}
    Finish: 1

However, now I see that I have an error in my implementation that was hidden before: my send() method implementation is not complete because the StopIteration case is not handled and it returns 1 instead of the Feed object. Since __await__() should return an iterator (per PEP 492), I can't figure out a way to implement what I want unless I make my coroutine class an iterator itself (actually a generator) by returning `self` from __await__ and adding __iter__ and __next__ methods:

    def __await__(self):
        return self

    def __iter__(self):
        return self

    def __next__(self):
        return self.send(None)

Is this the proper way to make a Coroutine out of a collections.abc.Coroutine? Why, then, is the documentation not explicitly saying that a Coroutine should inherit from collections.abc.Generator?
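The suspicion about __await__ can be confirmed with a stripped-down wrapper (hypothetical names, not the reporter's Feed classes): when the wrapper is awaited from inside another coroutine, only __await__() is called, and the event loop then drives the generator it returned, never the wrapper's send()/throw()/close().

```python
import asyncio
from collections.abc import Coroutine

calls = []

class Wrapper(Coroutine):
    def __init__(self, coro):
        self._coro = coro

    def __await__(self):
        calls.append("__await__")
        return self._coro.__await__()

    def send(self, value):
        calls.append("send")
        return self._coro.send(value)

    def throw(self, *args):
        calls.append("throw")
        return self._coro.throw(*args)

    def close(self):
        calls.append("close")
        return self._coro.close()

async def inner():
    await asyncio.sleep(0)
    return 42

async def main():
    return await Wrapper(inner())

assert asyncio.run(main()) == 42
# Only __await__ ran; the loop drove the generator it returned,
# so the wrapper's send()/throw()/close() were never invoked.
assert calls == ["__await__"]
```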
I see this as a very common misconception, since every such "ContextManager" similar to ContextOpener from 3rd party packages (like the aforementioned two, aiofiles and aiohttp, but there are others as well) is subclassing collections.abc.Coroutine and implements send(), throw() and close() methods that are not actually being called. I suspect the authors of these libraries haven't noticed that because the returned value from the __await__() and send() methods is the same in their case. ---------- components: asyncio messages: 413652 nosy: asvetlov, skrech, yselivanov priority: normal severity: normal status: open title: Proper way to inherit from collections.abc.Coroutine versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 12:26:24 2022 From: report at bugs.python.org (Cooper Lees) Date: Mon, 21 Feb 2022 17:26:24 +0000 Subject: [New-bugs-announce] [issue46819] Add an Error / Exception / Warning when contextlib.suppress() is entered with no specified exception(s) to suppress Message-ID: <1645464384.79.0.401778534641.issue46819@roundup.psfhosted.org> New submission from Cooper Lees : Today if you enter a `contextlib.suppress()` context and specify no exceptions there is no error or warning (I didn't check pywarnings to be fair). Isn't this a useless context then? If not, please explain why and close. If it is, I'd love to discuss possibly raising a new NoSupressionError or at least a warning to let people know they are executing an unneeded context. Example code that 3.11 does not error on:

```python
cooper@home1:~$ python3.11
Python 3.11.0a5+ (main, Feb 21 2022, 08:52:10) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import contextlib
>>> with contextlib.suppress():
...     print("Foo")
...
Foo
```

This was reported to `flake8-bugbear` and if this is not accepted I may accept adding this to the linter.
But I feel this could be fixable in CPython itself. ---------- components: Library (Lib) messages: 413663 nosy: cooperlees priority: normal severity: normal status: open title: Add an Error / Exception / Warning when contextlib.suppress() is entered with no specified exception(s) to suppress versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 12:50:03 2022 From: report at bugs.python.org (Patrick Reader) Date: Mon, 21 Feb 2022 17:50:03 +0000 Subject: [New-bugs-announce] [issue46820] SyntaxError on `1not in...` Message-ID: <1645465803.96.0.142928933167.issue46820@roundup.psfhosted.org> New submission from Patrick Reader : The following code gives a SyntaxError in 3.10, but used to work fine before (I have tested it in 2.7, 3.8, 3.9):

    1not in [2, 3]

It seems to be only the `not in` syntax which is affected; all other keywords still work correctly:

    1in [2, 3]
    1or 2
    1and 2
    1if 1else 1
    1is 1

I know this syntax is deprecated in 3.10 (bpo43833), but it still needs to work for now, so that old code written like this can keep working.
---------- components: Parser messages: 413664 nosy: lys.nikolaou, pablogsal, pxeger priority: normal severity: normal status: open title: SyntaxError on `1not in...` type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 14:58:17 2022 From: report at bugs.python.org (Jelle Zijlstra) Date: Mon, 21 Feb 2022 19:58:17 +0000 Subject: [New-bugs-announce] [issue46821] Introspection support for typing.overload Message-ID: <1645473497.26.0.919718721464.issue46821@roundup.psfhosted.org> New submission from Jelle Zijlstra : Currently, the implementation of @overload (https://github.com/python/cpython/blob/59585d6b2ea50d7bc3a9b336da5bde61367f527c/Lib/typing.py#L2211) simply returns a dummy function and throws away the decorated function. This makes it virtually impossible for type checkers using the runtime function object to find overloads specified at runtime. In pyanalyze, I worked around this by providing a custom @overload decorator, working something like this:

    _overloads: dict[str, list[Callable]] = {}

    def _get_key(func: Callable) -> str:
        return f"{func.__module__}.{func.__qualname__}"

    def overload(func):
        key = _get_key(func)
        _overloads.setdefault(key, []).append(func)
        return _overload_dummy

    def get_overloads_for(func):
        key = _get_key(func)
        return _overloads.get(key, [])

A full implementation will need more error handling. I'd like to add something like this to typing.py so that other tools can also use this information.
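Filling in the pieces the sketch leaves implicit (the `_overload_dummy` stub and the error handling are assumptions here, not part of the report), usage would look like this:

```python
from typing import Any, Callable

_overloads: dict[str, list[Callable]] = {}

def _get_key(func: Callable) -> str:
    return f"{func.__module__}.{func.__qualname__}"

def _overload_dummy(*args: Any, **kwargs: Any) -> Any:
    raise NotImplementedError("overload stubs should not be called directly")

def overload(func: Callable) -> Callable:
    # Register the stub under its module-qualified name, then discard it,
    # mirroring what typing.overload does today.
    _overloads.setdefault(_get_key(func), []).append(func)
    return _overload_dummy

def get_overloads_for(func: Callable) -> list[Callable]:
    return _overloads.get(_get_key(func), [])

@overload
def double(x: int) -> int: ...
@overload
def double(x: str) -> str: ...
def double(x):  # the real implementation shares the same qualname
    return x * 2

# The registry now exposes both stub signatures at runtime:
assert len(get_overloads_for(double)) == 2
assert double(3) == 6
```

The key insight is that the final implementation has the same `__module__.__qualname__` as the discarded stubs, so it can be used to look them back up.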
---------- assignee: Jelle Zijlstra components: Library (Lib) messages: 413671 nosy: AlexWaygood, Jelle Zijlstra, gvanrossum, kj, sobolevn priority: normal severity: normal status: open title: Introspection support for typing.overload type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 21 19:02:05 2022 From: report at bugs.python.org (Steve Dower) Date: Tue, 22 Feb 2022 00:02:05 +0000 Subject: [New-bugs-announce] [issue46822] test_create_server_ssl_over_ssl attempts to listen on 0.0.0.0 Message-ID: <1645488125.93.0.760341419573.issue46822@roundup.psfhosted.org> New submission from Steve Dower : This causes a failure on one of my test machines where the firewall settings forbid it. However, the test itself seems designed to only listen on localhost. Even tracing all calls through socket, I don't see when or where it is attempting to listen on 0.0.0.0, and yet TCP Monitor (and my firewall) claim that it is. This seems to be fairly recent, though I haven't done a bisect yet. Anyone have any ideas?
---------- components: Tests, Windows, asyncio messages: 413687 nosy: asvetlov, paul.moore, steve.dower, tim.golden, yselivanov, zach.ware priority: normal severity: normal stage: test needed status: open title: test_create_server_ssl_over_ssl attempts to listen on 0.0.0.0 type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 00:11:54 2022 From: report at bugs.python.org (Dennis Sweeney) Date: Tue, 22 Feb 2022 05:11:54 +0000 Subject: [New-bugs-announce] [issue46823] Add LOAD_FAST__LOAD_ATTR_INSTACE_VALUE combined opcode Message-ID: <1645506714.72.0.112858305238.issue46823@roundup.psfhosted.org> New submission from Dennis Sweeney : See https://github.com/faster-cpython/ideas/discussions/291 ---------- messages: 413692 nosy: Dennis Sweeney priority: normal severity: normal status: open title: Add LOAD_FAST__LOAD_ATTR_INSTACE_VALUE combined opcode _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 05:15:28 2022 From: report at bugs.python.org (Thomas Grainger) Date: Tue, 22 Feb 2022 10:15:28 +0000 Subject: [New-bugs-announce] [issue46824] use AI_NUMERICHOST | AI_NUMERICSERV to skip getaddrinfo thread in asyncio Message-ID: <1645524928.96.0.755041008121.issue46824@roundup.psfhosted.org> New submission from Thomas Grainger : now that the getaddrinfo lock has been removed on all platforms the numeric only host resolve in asyncio could be moved back into BaseEventLoop.getaddrinfo ---------- components: asyncio messages: 413699 nosy: asvetlov, graingert, yselivanov priority: normal severity: normal status: open title: use AI_NUMERICHOST | AI_NUMERICSERV to skip getaddrinfo thread in asyncio type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 05:23:02 2022 From: report 
at bugs.python.org (Heran Yang) Date: Tue, 22 Feb 2022 10:23:02 +0000 Subject: [New-bugs-announce] [issue46825] slow matching on regular expression Message-ID: <1645525382.1.0.0821682540319.issue46825@roundup.psfhosted.org> New submission from Heran Yang : I'm using `re.fullmatch` to match a string that only contains 0 and 1. The regular expression is: (0+|1(01*0)*1)+ It runs rather slowly with Python 3.7, but when I try using regex in C++, with std::regex_constants::__polynomial, it works well. Would someone take a look at it? Thx. ---------- components: Regular Expressions files: match.py messages: 413700 nosy: HeRaNO, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: slow matching on regular expression type: performance versions: Python 3.7 Added file: https://bugs.python.org/file50636/match.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 06:33:44 2022 From: report at bugs.python.org (Addison Snelling) Date: Tue, 22 Feb 2022 11:33:44 +0000 Subject: [New-bugs-announce] [issue46826] prefixes argument to site.getsitepackages() missing documentation Message-ID: <1645529624.68.0.789981983481.issue46826@roundup.psfhosted.org> New submission from Addison Snelling : The documentation for site.getsitepackages() makes no mention of the "prefixes" argument, introduced in v3.3. I'll put together a pull request in the next day or so to add this to the docs.
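For reference, the undocumented parameter takes a list of prefix directories to scan in place of sys.prefix/sys.exec_prefix. A quick sketch (the `/opt/mypython` prefix is hypothetical, and the exact suffixes of the returned paths vary by platform):

```python
import site

# Default behavior: paths derived from sys.prefix / sys.exec_prefix.
print(site.getsitepackages())

# With an explicit prefix list, every returned path is rooted there.
paths = site.getsitepackages(prefixes=["/opt/mypython"])
print(paths)
assert all(p.startswith("/opt/mypython") for p in paths)
```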
---------- assignee: docs at python components: Documentation messages: 413705 nosy: asnell, docs at python priority: normal severity: normal status: open title: prefixes argument to site.getsitepackages() missing documentation type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 10:05:31 2022 From: report at bugs.python.org (Thomas Grainger) Date: Tue, 22 Feb 2022 15:05:31 +0000 Subject: [New-bugs-announce] [issue46827] asyncio SelectorEventLoop.sock_connect fails with a UDP socket Message-ID: <1645542331.15.0.0460526883326.issue46827@roundup.psfhosted.org> New submission from Thomas Grainger : The following code:

    import socket
    import asyncio

    async def amain():
        with socket.socket(family=socket.AF_INET, proto=socket.IPPROTO_UDP,
                           type=socket.SOCK_DGRAM) as sock:
            sock.setblocking(False)
            await asyncio.get_running_loop().sock_connect(sock, ("google.com", "443"))

    asyncio.run(amain())

fails with:

    Traceback (most recent call last):
      File "/home/graingert/projects/test_foo.py", line 9, in <module>
        asyncio.run(amain())
      File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/usr/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete
        return future.result()
      File "/home/graingert/projects/test_foo.py", line 7, in amain
        await asyncio.get_running_loop().sock_connect(sock, ("google.com", "443"))
      File "/usr/lib/python3.10/asyncio/selector_events.py", line 496, in sock_connect
        resolved = await self._ensure_resolved(
      File "/usr/lib/python3.10/asyncio/base_events.py", line 1395, in _ensure_resolved
        return await loop.getaddrinfo(host, port, family=family, type=type,
      File "/usr/lib/python3.10/asyncio/base_events.py", line 855, in getaddrinfo
        return await self.run_in_executor(
      File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
        result = self.fn(*self.args, **self.kwargs)
      File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -7] ai_socktype not supported

---------- components: asyncio messages: 413709 nosy: asvetlov, graingert, yselivanov priority: normal severity: normal status: open title: asyncio SelectorEventLoop.sock_connect fails with a UDP socket versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 14:04:33 2022 From: report at bugs.python.org (Neil Webber) Date: Tue, 22 Feb 2022 19:04:33 +0000 Subject: [New-bugs-announce] [issue46828] math.prod can return integers (contradicts doc) Message-ID: <1645556673.97.0.708040802519.issue46828@roundup.psfhosted.org> New submission from Neil Webber : The math module documentation says: Except when explicitly noted otherwise, all return values are floats. But this code returns an integer: from math import prod; prod((1,2,3)) Doc should "explicitly note otherwise" here, I imagine. The issue being wanting to know that the result on all-integer input will be an exact (integer) value not a floating value.
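The behavior in question is easy to check: math.prod preserves int when all inputs are ints, and only returns a float when a float is involved.

```python
from math import prod

assert prod((1, 2, 3)) == 6
assert isinstance(prod((1, 2, 3)), int)      # exact integer, not 6.0
assert isinstance(prod((1.0, 2, 3)), float)  # floats propagate as usual
assert isinstance(prod(()), int)             # empty product is the int 1
```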
---------- assignee: docs at python components: Documentation messages: 413741 nosy: docs at python, neilwebber priority: normal severity: normal status: open title: math.prod can return integers (contradicts doc) type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 17:02:51 2022 From: report at bugs.python.org (Andrew Svetlov) Date: Tue, 22 Feb 2022 22:02:51 +0000 Subject: [New-bugs-announce] [issue46829] Confusing CancelError message if multiple cancellations are scheduled Message-ID: <1645567371.24.0.086924022035.issue46829@roundup.psfhosted.org> New submission from Andrew Svetlov : Suppose multiple `task.cancel(msg)` calls with different messages happen on the same event loop iteration. What message (`cancel_exc.args[0]`) should be sent on the next loop iteration? As of Python 3.10 it is the message from the *last* `task.cancel(msg)` call. The main branch changed it to the *first* message (it is a subject for discussion still). Both choices are equally bad. The order of task execution within the same loop iteration is weakly defined and depends on very many circumstances. Effectively, first-cancelled-message and last-cancelled-message are equal to random-message. This makes the use of cancellation messages fragile: a task can be cancelled by many sources, and task groups add even more mess. Guido van Rossum suggested that messages should be collected in a list and raised altogether. There is a possibility to do it in a backward-compatible way: construct the exception as `CancelledError(last_msg, tuple(msg_list))`. args[0] is args[1][-1]. Weird but works. `.cancel()` should add `None` to the list of cancelled messages. The message list should be cleared when a new CancelledError is constructed and thrown into the task being cancelled. Working with exc.args[0] / exc.args[1] is tedious and error-prone. I propose adding an `exc.msgs` property.
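The proposed backward-compatible shape would look like this (a sketch of the suggestion only; `msgs` here is a plain list, not an implemented API):

```python
import asyncio

# Messages collected from several task.cancel(msg) calls during one
# loop iteration; a plain .cancel() would contribute None.
msgs = ["timeout", None, "shutdown"]

exc = asyncio.CancelledError(msgs[-1], tuple(msgs))
assert exc.args[0] == "shutdown"                   # last message, as in 3.10
assert exc.args[1] == ("timeout", None, "shutdown")
assert exc.args[0] is exc.args[1][-1]              # "args[0] is args[1][-1]"
```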
Not sure if the last added message is worth a separate attribute; robust code should not rely on message order, as I described above. The single message is not very useful, but a list of messages can be used in timeout implementations as an alternative to a cancellation count. I don't have a strong preference now but want to carefully discuss possible opportunities before making the final decision. ---------- components: asyncio messages: 413749 nosy: ajoino, alex.gronholm, asvetlov, chris.jerdonek, dreamsorcerer, gvanrossum, iritkatriel, jab, njs, tinchester, yselivanov priority: normal severity: normal status: open title: Confusing CancelError message if multiple cancellations are scheduled versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 19:55:59 2022 From: report at bugs.python.org (Jeff Cagle) Date: Wed, 23 Feb 2022 00:55:59 +0000 Subject: [New-bugs-announce] [issue46830] Add Find functionality to Squeezed Text viewer Message-ID: <1645577759.96.0.0764946040694.issue46830@roundup.psfhosted.org> New submission from Jeff Cagle : Squeezed text output currently opens in a viewer whose only functionality is scrolling. Adding the Find widget a la IDLE would make the viewer much more useful.
---------- assignee: terry.reedy components: IDLE messages: 413761 nosy: Jeff.Cagle, terry.reedy priority: normal severity: normal status: open title: Add Find functionality to Squeezed Text viewer type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 22 22:31:57 2022 From: report at bugs.python.org (Shantanu) Date: Wed, 23 Feb 2022 03:31:57 +0000 Subject: [New-bugs-announce] [issue46831] Outdated comment for __build_class__ in compile.c Message-ID: <1645587117.9.0.320928426769.issue46831@roundup.psfhosted.org> New submission from Shantanu : https://github.com/python/cpython/blob/cf345e945f48f54785799390c2e92c5310847bd4/Python/compile.c#L2537 ```
/* ultimately generate code for:
   = __build_class__(, , *, **)
   where:
   is a function/closure created from the class body;
   it has a single argument (__locals__) where the dict
   (or MutableSequence) representing the locals is passed
```
`func` currently takes zero arguments.
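This is easy to confirm with a throwaway wrapper around __build_class__ (purely a diagnostic sketch, not a proposed change):

```python
import builtins

# Temporarily wrap __build_class__ to observe the class-body function
# that the compiler passes as its first argument.
orig_build_class = builtins.__build_class__
seen_argcounts = []

def traced_build_class(func, name, *bases, **kwds):
    seen_argcounts.append(func.__code__.co_argcount)
    return orig_build_class(func, name, *bases, **kwds)

builtins.__build_class__ = traced_build_class
try:
    class Example:
        x = 1
finally:
    builtins.__build_class__ = orig_build_class

print(seen_argcounts)  # [0] -- the class-body function takes no arguments
```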
This was changed in https://github.com/python/cpython/commit/e8e14591ebb729b4fa19626ce245fa0811cf6f32 in Python 3.4. ---------- assignee: docs at python components: Documentation messages: 413768 nosy: docs at python, hauntsaninja priority: normal severity: normal status: open title: Outdated comment for __build_class__ in compile.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 01:12:09 2022 From: report at bugs.python.org (Artyom Polkovnikov) Date: Wed, 23 Feb 2022 06:12:09 +0000 Subject: [New-bugs-announce] [issue46832] unicodeobject.c doesn't compile when defined EXPERIMENTAL_ISOLATED_SUBINTERPRETERS, variable "interned" not found Message-ID: <1645596729.1.0.194216196654.issue46832@roundup.psfhosted.org> New submission from Artyom Polkovnikov : 1) Downloaded https://www.python.org/ftp/python/3.10.2/Python-3.10.2.tar.xz 2) Compiled it under MSVC 2019 with EXPERIMENTAL_ISOLATED_SUBINTERPRETERS defined 3) Got a compilation error in Objects/unicodeobject.c at line 15931: the variable "interned" is undefined in the function _PyUnicode_ClearInterned(); the offending code is "if (interned == NULL) {". This happens because when EXPERIMENTAL_ISOLATED_SUBINTERPRETERS is defined, INTERNED_STRINGS is undefined, hence the global variable "static PyObject *interned = NULL;" is never created, yet _PyUnicode_ClearInterned() still uses it. For reference, I am attaching my version of "unicodeobject.c", which has the compilation error at line 15931.
---------- components: Subinterpreters files: unicodeobject.c messages: 413774 nosy: artyom.polkovnikov priority: normal severity: normal status: open title: unicodeobject.c doesn't compile when defined EXPERIMENTAL_ISOLATED_SUBINTERPRETERS, variable "interned" not found type: compile error versions: Python 3.10 Added file: https://bugs.python.org/file50637/unicodeobject.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 03:22:08 2022 From: report at bugs.python.org (Christian Buhtz) Date: Wed, 23 Feb 2022 08:22:08 +0000 Subject: [New-bugs-announce] [issue46833] Installer Wizard is unclear and has redundant settings Message-ID: <1645604528.5.0.846357801282.issue46833@roundup.psfhosted.org> New submission from Christian Buhtz : Hello together, this is about the installer of Python 3.9.10 on Windows 10 64-bit. I have trouble interpreting the installer wizard/dialog. My argument is that even if there are good reasons for the current options, some users are confused by them. The goal should be to make the installer clearer about what these options do. Consider the "Install for all users" option: it appears on all three pages. I am not sure, but I would say that the first two options relate to the py launcher, not the Python interpreter itself. OK, but why two options? And the third option is for the interpreter? I do not see an advantage in distinguishing between launcher and interpreter for that option. Now consider PATH/environment variables: this appears on the first page ("Add Python 3.9 to PATH") and on the third page ("Add Python to environment variables"). I do not understand why. Also, these options are not synchronized: when I enable "Add Python 3.9 to PATH" on the first page, "Add Python to environment variables" on the third page is not enabled as well.
Again: I am sure there are very good reasons for this separated options. But the wizard should make this reason clear to the user (or her/his admins) so that she/he can make an well informed decision. ---------- components: Installation, Windows files: python3_9_10_install_wizard_page1-3.png messages: 413777 nosy: buhtz, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Installer Wizard is unclear and has redundant settings versions: Python 3.9 Added file: https://bugs.python.org/file50638/python3_9_10_install_wizard_page1-3.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 07:09:45 2022 From: report at bugs.python.org (Nikita Sobolev) Date: Wed, 23 Feb 2022 12:09:45 +0000 Subject: [New-bugs-announce] [issue46834] test_gdb started to fail on buildbot/s390x RHEL7 Message-ID: <1645618185.12.0.728684681117.issue46834@roundup.psfhosted.org> New submission from Nikita Sobolev : Log sample: ``` ====================================================================== FAIL: test_up_then_down (test.test_gdb.StackNavigationTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z/build/Lib/test/test_gdb.py", line 782, in test_up_then_down self.assertMultilineMatches(bt, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/dje/cpython-buildarea/3.x.edelsohn-rhel-z/build/Lib/test/test_gdb.py", line 297, in assertMultilineMatches self.fail(msg='%r did not match %r' % (actual, pattern)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: 'Breakpoint 1 at 0x801ff160: file Python/bltinmodule.c, line 1168.\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library "/lib64/libthread_db.so.1".\n\nBreakpoint 1, builtin_id (self=, v=<_PyRuntime+2184>) at Python/bltinmodule.c:1168\n1168\t{\n#16 Frame 
0x3fffdfb1118, for file , line 9, in bar (a=1, b=2, c=3)\n#16 Frame 0x3fffdfb1090, for file , line 6, in foo (a=1, b=2, c=3)\n#16 Frame 0x3fffdfb1020, for file , line 14, in ()\nUnable to find an older python frame\n#4 Frame 0x3fffdfb11a8, for file , line 12, in baz (args=(1, 2, 3))\n' did not match '^.*\n#[0-9]+ Frame 0x-?[0-9a-f]+, for file , line 12, in baz \\(args=\\(1, 2, 3\\)\\)\n#[0-9]+ \n#[0-9]+ Frame 0x-?[0-9a-f]+, for file , line 12, in baz \\(args=\\(1, 2, 3\\)\\)\n$' ---------------------------------------------------------------------- Ran 32 tests in 15.312s FAILED (failures=53) test test_gdb failed 1 test failed again: test_gdb ``` Full log (too long): https://buildbot.python.org/all/#/builders/179/builds/1769/steps/5/logs/stdio It started to happen (at least more often - however, I cannot find any older failures at the moment) after this commit: https://github.com/python/cpython/commit/b899126094731bc49fecb61f2c1b7557d74ca839 Build link: https://buildbot.python.org/all/#/builders/402/builds/1744 Latest commits (at this moment): - Fails: https://github.com/python/cpython/commit/375a56bd4015596c0cf44129c8842a1fe7199785 - Passes: https://github.com/python/cpython/commit/424023efee5b21567b4725015ef143b627112e3c - Fails: https://github.com/python/cpython/commit/288af845a32fd2a92e3b49738faf8f2de6a7bf7c ---------- components: Tests messages: 413786 nosy: sobolevn, vstinner priority: normal severity: normal status: open title: test_gdb started to fail on buildbot/s390x RHEL7 type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 07:43:36 2022 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Wed, 23 Feb 2022 12:43:36 +0000 Subject: [New-bugs-announce] [issue46835] ImportError: bad magic number in ... 
does not indicate where is that file located Message-ID: <1645620216.31.0.535281755903.issue46835@roundup.psfhosted.org> New submission from Miro Hrončok : Recently I've been debugging a very nasty bug report that looked like this: Traceback (most recent call last): File "/usr/bin/jupyter-notebook", line 5, in from notebook.notebookapp import main File "/usr/lib/python3.10/site-packages/notebook/notebookapp.py", line 78, in from .services.kernels.kernelmanager import MappingKernelManager, AsyncMappingKernelManager File "/usr/lib/python3.10/site-packages/notebook/services/kernels/kernelmanager.py", line 18, in from jupyter_client.session import Session File "/usr/lib/python3.10/site-packages/jupyter_client/session.py", line 41, in from jupyter_client.jsonutil import extract_dates, squash_dates, date_default File "/usr/lib/python3.10/site-packages/jupyter_client/jsonutil.py", line 10, in from dateutil.parser import parse as _dateutil_parse File "/usr/lib/python3.10/site-packages/dateutil/parser/__init__.py", line 2, in from ._parser import parse, parser, parserinfo, ParserError File "/usr/lib/python3.10/site-packages/dateutil/parser/_parser.py", line 42, in import six ImportError: bad magic number in 'six': b'\x03\xf3\r\n' For details, see https://bugzilla.redhat.com/2057340 and https://github.com/benjaminp/six/issues/359 What would really make things much easier to understand would be if the exception mentioned the path of 'six'. Consider this example: a rogue .py file in /usr/bin: $ sudo touch /usr/bin/copy.py Programs fail with: Traceback (most recent call last): File "/usr/bin/...", line ..., in ... ImportError: cannot import name 'deepcopy' from 'copy' (/usr/bin/copy.py) Immediately I can see there is /usr/bin/copy.py, which is probably not supposed to be there. However, when it is a pyc instead: $ sudo touch /usr/bin/copy.pyc Programs fail with: Traceback (most recent call last): File "/usr/bin/...", line ..., in ...
ImportError: bad magic number in 'copy': b'' Now I have no idea where "copy" is. This is a request for the exception to include that information. ---------- components: Interpreter Core messages: 413788 nosy: hroncok, petr.viktorin priority: normal severity: normal status: open title: ImportError: bad magic number in ... does not indicate where is that file located type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 10:04:12 2022 From: report at bugs.python.org (STINNER Victor) Date: Wed, 23 Feb 2022 15:04:12 +0000 Subject: [New-bugs-announce] [issue46836] [C API] Move PyFrameObject to the internal C API Message-ID: <1645628652.12.0.0785803095185.issue46836@roundup.psfhosted.org> New submission from STINNER Victor : I propose to move the PyFrameObject structure to the internal C API. -- Between Python 3.10 and Python 3.11, the work on optimizing ceval.c deeply modified the PyFrameObject structure. Examples: * The f_code member was removed in bpo-44032 by commit b11a951f16f0603d98de24fee5c023df83ea552c. * The f_frame member was added in bpo-44590 by commit ae0a2b756255629140efcbe57fc2e714f0267aa3. Most members have been moved to a new PyFrameObject.f_frame member which has the type "struct _interpreter_frame*". Problem: this type is only part of the *internal* C API. Moreover, accessing the few remaining members which "didn't change" became dangerous. For example, f_back can be NULL even if the frame has a previous frame: the PyFrame_GetBack() function *must* now be called. See bpo-46356 "[C API] Enforce usage of PyFrame_GetBack()". Reading f_lineno directly has been dangerous since Python 2.3: the value is only valid if it is greater than 0. It is far safer to use the clean PyFrame_GetLineNumber() API instead. PyFrame_GetBack() was added in Python 3.9.
You can use the pythoncapi_compat project to get this function on Python 3.8 and older: => https://pythoncapi-compat.readthedocs.io/ PyFrame_GetLineNumber() was added to the limited API in Python 3.10. => Documentation: https://docs.python.org/dev/c-api/reflection.html#c.PyFrame_GetBack -- There *are* projects accessing PyFrameObject directly, like the gevent project, which sets the f_code member (moved to f_frame.f_code in Python 3.11). It's broken on Python 3.11: https://bugs.python.org/issue40421#msg413719 Debuggers and profilers also want to read PyFrameObject directly. IMO for these *specific* use cases, using the *internal* C API is legitimate, and it's fine. Moving PyFrameObject to the internal C API would clarify the situation. Currently, What's New in Python 3.11 documents this change with a warning: "While the documentation notes that the fields of PyFrameObject are subject to change at any time, they have been stable for a long time and were used in several popular extensions." -- I'm mostly worried about Cython, which still gets and sets many PyFrameObject members directly (ex: f_lasti, f_lineno, f_localsplus, f_trace), since there are no public functions for that. => https://bugs.python.org/issue40421#msg367550 Right now, I would suggest that Cython use the internal C API, and *later* consider adding new getter and setter functions. I don't think that we can solve all problems at once: it takes time to design clean APIs and use them in Cython. Python 3.11 already broke Cython since most PyFrameObject members moved into the new "internal" PyFrameObject.f_frame API, which requires using the internal C API to get "struct _interpreter_frame". => https://github.com/cython/cython/issues/4500 -- Using a frame via the *public* C API was and remains supported.
Short example:

--
PyThreadState *tstate = PyThreadState_Get();
PyFrameObject* frame = PyThreadState_GetFrame(tstate);
int lineno = PyFrame_GetLineNumber(frame);
Py_XDECREF(frame);  /* PyThreadState_GetFrame() returns a strong reference */
--

The PyFrameObject structure is opaque and members are not accessed directly: it's fine. ---------- components: C API messages: 413795 nosy: vstinner priority: normal severity: normal status: open title: [C API] Move PyFrameObject to the internal C API versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 12:38:54 2022 From: report at bugs.python.org (Jigar Gajjar) Date: Wed, 23 Feb 2022 17:38:54 +0000 Subject: [New-bugs-announce] [issue46837] lstrip and strip not working as expected Message-ID: <1645637934.57.0.221676088435.issue46837@roundup.psfhosted.org> New submission from Jigar Gajjar : Code:

my_string = 'Auth:AWS'
print(my_string.lstrip('Auth:'))

Actual Output: WS Expected Output: AWS ---------- messages: 413831 nosy: jigar030 priority: normal severity: normal status: open title: lstrip and strip not working as expected type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 16:23:21 2022 From: report at bugs.python.org (Andrej Klychin) Date: Wed, 23 Feb 2022 21:23:21 +0000 Subject: [New-bugs-announce] [issue46838] Parameters and arguments parser syntax error improvements Message-ID: <1645651401.14.0.658565183507.issue46838@roundup.psfhosted.org> New submission from Andrej Klychin : I saw that pablogsal welcomed improvements to the parser's suggestions, so here are the messages for parameters and arguments lists I think should be written instead of the current generic "invalid syntax".
>>> def foo(*arg, *arg): pass
SyntaxError: * argument may appear only once
>>> def foo(**arg, **arg): pass
SyntaxError: ** argument may appear only once
>>> def foo(arg1, /, arg2, /, arg3): pass
SyntaxError: / may appear only once
>>> def foo(*args, /, arg): pass
SyntaxError: / must be ahead of *
>>> def foo(/, arg): pass
SyntaxError: at least one argument must precede /
>>> def foo(arg=): pass
SyntaxError: expected default value expression
>>> def foo(*args=None): pass
SyntaxError: * argument cannot have default value
>>> def foo(**kwargs=None): pass
SyntaxError: ** argument cannot have default value
>>> foo(*args=[0])
SyntaxError: cannot assign to iterable argument unpacking
>>> foo(**args={"a": None})
SyntaxError: cannot assign to keyword argument unpacking
>>> foo(arg=)
SyntaxError: expected argument value expression
---------- components: Parser messages: 413856 nosy: Andy_kl, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Parameters and arguments parser syntax error improvements type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 16:36:16 2022 From: report at bugs.python.org (sami) Date: Wed, 23 Feb 2022 21:36:16 +0000 Subject: [New-bugs-announce] [issue46839] Process finished with exit code -1073741819 (0xC0000005) Message-ID: <1645652176.24.0.399924224142.issue46839@roundup.psfhosted.org> New submission from sami : Hi, I am running my Python code on a large data set and received this exit code: "Process finished with exit code -1073741819 (0xC0000005)" I monitored via Event Viewer and found the following error. I wonder how I can solve this issue?
Thanks Faulting application name: python.exe, version: 0.0.0.0, time stamp: 0x56abcaee Faulting module name: python27.dll, version: 2.7.11150.1013, time stamp: 0x56abcaed Exception code: 0xc0000005 Fault offset: 0x00000000000a4f81 Faulting process id: 0x29d0 Faulting application start time: 0x01d8287ff03c7476 Faulting application path: C:\Users\sami\Anaconda2\python.exe Faulting module path: C:\Users\sami\Anaconda2\python27.dll ---------- messages: 413859 nosy: fa.sami priority: normal severity: normal status: open title: Process finished with exit code -1073741819 (0xC0000005) type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 20:40:39 2022 From: report at bugs.python.org (Jerome Perrin) Date: Thu, 24 Feb 2022 01:40:39 +0000 Subject: [New-bugs-announce] [issue46840] xmlrpc.client.ServerProxy shows password in __repr__ when using basic authentication Message-ID: <1645666839.94.0.152253377842.issue46840@roundup.psfhosted.org> New submission from Jerome Perrin : >>> import xmlrpc.client >>> xmlrpc.client.ServerProxy('https://login:password at example.com') Because this repr is included in error messages, this can lead to leaking the password: >>> xmlrpc.client.ServerProxy('https://login:password at example.com').method() Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.7/xmlrpc/client.py", line 1112, in __call__ return self.__send(self.__name, args) File "/usr/lib/python3.7/xmlrpc/client.py", line 1452, in __request verbose=self.__verbose File "/usr/lib/python3.7/xmlrpc/client.py", line 1154, in request return self.single_request(host, handler, request_body, verbose) File "/usr/lib/python3.7/xmlrpc/client.py", line 1187, in single_request dict(resp.getheaders()) xmlrpc.client.ProtocolError: ---------- components: Library (Lib) messages: 413870 nosy: perrinjerome priority: normal severity: normal status: open title: xmlrpc.client.ServerProxy 
shows password in __repr__ when using basic authentication _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 21:17:14 2022 From: report at bugs.python.org (Brandt Bucher) Date: Thu, 24 Feb 2022 02:17:14 +0000 Subject: [New-bugs-announce] [issue46841] Inline bytecode caches Message-ID: <1645669034.5.0.77775951626.issue46841@roundup.psfhosted.org> New submission from Brandt Bucher : ...as discussed in https://github.com/faster-cpython/ideas/discussions/263. My plan is for this initial PR to lay the groundwork, then to work on porting over the existing opcode caches one-by-one. Once that's done, we can clean up lots of the "old" machinery. ---------- assignee: brandtbucher components: Interpreter Core messages: 413875 nosy: Mark.Shannon, brandtbucher priority: normal severity: normal stage: patch review status: open title: Inline bytecode caches type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 23 22:41:44 2022 From: report at bugs.python.org (benrg) Date: Thu, 24 Feb 2022 03:41:44 +0000 Subject: [New-bugs-announce] [issue46842] py to pyc location mapping with sys.pycache_prefix isn't 1-to-1 on Windows Message-ID: <1645674104.76.0.96131746117.issue46842@roundup.psfhosted.org> New submission from benrg : `importlib._bootstrap_external` contains this comment:

# We need an absolute path to the py file to avoid the possibility of
# collisions within sys.pycache_prefix [...]
# [...] the idea here is that if we get `Foo\Bar`, we first
# make it absolute (`C:\Somewhere\Foo\Bar`), then make it root-relative
# (`Somewhere\Foo\Bar`), so we end up placing the bytecode file in an
# unambiguous `C:\Bytecode\Somewhere\Foo\Bar\`.

The code follows the comment, but doesn't achieve the goal: `C:\Somewhere\Foo\Bar` and `D:\Somewhere\Foo\Bar` collide.
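The collision can be demonstrated with a simplified model of the mapping (an illustration built on ntpath, not the actual importlib code):

```python
import ntpath

# Simplified model of how a source path is made root-relative and then
# placed under sys.pycache_prefix on Windows.
def cache_location(pycache_prefix, source_path):
    drive, tail = ntpath.splitdrive(source_path)  # the drive is dropped
    return ntpath.join(pycache_prefix, tail.lstrip("\\"))

a = cache_location("C:\\Bytecode", "C:\\Somewhere\\Foo\\Bar")
b = cache_location("C:\\Bytecode", "D:\\Somewhere\\Foo\\Bar")
print(a == b)  # True -- two distinct source files share one bytecode path
```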
There is also no explicit handling of UNC paths, with the result that `\\Somewhere\Foo\Bar` maps to the same location. I think that on Windows the code should use a mapping like

C:\Somewhere\Foo\Bar ==> C:\Bytecode\C\Somewhere\Foo\Bar
D:\Somewhere\Foo\Bar ==> C:\Bytecode\D\Somewhere\Foo\Bar
\\Somewhere\Foo\Bar ==> C:\Bytecode\UNC\Somewhere\Foo\Bar

The lack of double-slash prefix handling also matters on Unixy platforms that give it a special meaning. Cygwin is probably affected by this. I don't know whether there are any others. ---------- components: Library (Lib), Windows messages: 413878 nosy: benrg, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: py to pyc location mapping with sys.pycache_prefix isn't 1-to-1 on Windows type: behavior versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 00:34:25 2022 From: report at bugs.python.org (Joongi Kim) Date: Thu, 24 Feb 2022 05:34:25 +0000 Subject: [New-bugs-announce] [issue46843] PersistentTaskGroup API Message-ID: <1645680865.21.0.230450894933.issue46843@roundup.psfhosted.org> New submission from Joongi Kim : I'm now tracking the recent addition and discussion of TaskGroup and cancellation scopes. It's interesting! :) I would like to suggest a different mode of operation for asyncio.TaskGroup, which I have named "PersistentTaskGroup". AFAIK, TaskGroup aims to replace asyncio.gather, ensuring completion or cancellation of all tasks within the context manager scope. I believe that a "safe" asyncio application should consist of a nested tree of task groups, which allow us to explicitly state when tasks of different purposes and contexts terminate. For example, a task group for database transactions should be shut down before a task group for HTTP handlers is shut down.
To this end, in server applications with many sporadically spawned tasks throughout the whole process lifetime, there are different requirements for a task group that manages such task sets. The tasks should *not* be cancelled upon unhandled exceptions in sibling tasks of the group, while we need an explicit "fallback" exception handler for those (just like "return_exceptions=True" in asyncio.gather). The tasks belong to the task group, but their references should not be kept forever, to prevent memory leaks (I'd suggest using weakref.WeakSet). When terminating the task group itself, the ongoing tasks should be cancelled. The cancellation process upon termination may happen in two phases: a cancel request with an initial timeout, plus an additional limited wait for the cancellations to complete. (This is what Guido has mentioned in the discussion in bpo-46771.) An initial sketch of PersistentTaskGroup is in aiotools: https://github.com/achimnol/aiotools/blob/main/src/aiotools/ptaskgroup.py It currently has no two-phase cancellation because that would require Python 3.11 with asyncio.Task.uncancel(). As Andrew has left a comment (https://github.com/achimnol/aiotools/issues/29#issuecomment-997437030), I think it is time to revisit the concrete API design and whether to include PersistentTaskGroup in the stdlib or not.
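A rough sketch of the semantics described above (the class name and API here are hypothetical; the aiotools work-in-progress linked above is the author's actual code):

```python
import asyncio
import weakref

class PersistentTaskGroupSketch:
    """Hypothetical sketch: failures do not cancel siblings, task
    references are weak, and shutdown cancels whatever still runs."""

    def __init__(self, exception_handler=None):
        # Weak references so the group does not keep finished tasks alive.
        self._tasks = weakref.WeakSet()
        self._handler = exception_handler or (lambda exc: None)

    def create_task(self, coro):
        task = asyncio.ensure_future(coro)
        task.add_done_callback(self._on_done)
        self._tasks.add(task)
        return task

    def _on_done(self, task):
        # Fallback handler: siblings are *not* cancelled on failure.
        if not task.cancelled() and task.exception() is not None:
            self._handler(task.exception())

    async def shutdown(self):
        # Single-phase cancellation; the two-phase variant would wait
        # with a timeout before a second, forced cancellation round.
        tasks = list(self._tasks)
        for task in tasks:
            task.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)

async def demo():
    errors = []
    group = PersistentTaskGroupSketch(errors.append)

    async def failing():
        raise ValueError("boom")

    async def long_running():
        await asyncio.sleep(3600)

    group.create_task(failing())
    survivor = group.create_task(long_running())
    await asyncio.sleep(0)  # let failing() run...
    await asyncio.sleep(0)  # ...and its done-callback fire
    assert not survivor.cancelled()  # sibling kept running
    await group.shutdown()
    return errors

collected = asyncio.run(demo())
```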
---------- components: asyncio messages: 413880 nosy: achimnol, asvetlov, gvanrossum, yselivanov priority: normal severity: normal status: open title: PersistentTaskGroup API type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 00:45:56 2022 From: report at bugs.python.org (Joongi Kim) Date: Thu, 24 Feb 2022 05:45:56 +0000 Subject: [New-bugs-announce] [issue46844] Context-based TaskGroup for legacy libraries Message-ID: <1645681556.47.0.510927748304.issue46844@roundup.psfhosted.org> New submission from Joongi Kim : Along with bpo-46843 and the new asyncio.TaskGroup API, I would like to suggest the addition of a context-based TaskGroup feature. Currently asyncio.create_task() just creates a new task attached directly to the event loop, while asyncio.TaskGroup.create_task() creates a new task managed by the TaskGroup instance. It would be ideal for all existing asyncio code to migrate to TaskGroup, but this is impractical. An alternative approach is to implicitly bind asyncio.create_task() calls made in a specific context to a specific task group, probably using contextvars. I believe this approach would allow more control over tasks implicitly spawned by third-party libraries that the application cannot control. What are your thoughts? ---------- components: asyncio messages: 413881 nosy: achimnol, asvetlov, gvanrossum, yselivanov priority: normal severity: normal status: open title: Context-based TaskGroup for legacy libraries type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 02:40:55 2022 From: report at bugs.python.org (Inada Naoki) Date: Thu, 24 Feb 2022 07:40:55 +0000 Subject: [New-bugs-announce] [issue46845] dict: Use smaller entry for Unicode-key only dict.
Message-ID: <1645688455.61.0.00785191075466.issue46845@roundup.psfhosted.org> New submission from Inada Naoki : Currently, PyDictKeyEntry is 24 bytes (hash, key, and value). We can drop the hash from the entry when all keys are unicode, because unicode objects already cache their hash. This will cause some performance regression on microbenchmarks, because the dict needs one more indirect access to compare hash values. On the other hand, it will reduce some RAM usage. Additionally, unlike docstrings and annotations, this includes much **hot** RAM. It will make Python more cache efficient. This is work-in-progress code: https://github.com/methane/cpython/pull/43 The pyperformance results are in the PR too. ---------- components: Interpreter Core messages: 413892 nosy: Mark.Shannon, methane, rhettinger priority: normal severity: normal status: open title: dict: Use smaller entry for Unicode-key only dict. type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 04:39:21 2022 From: report at bugs.python.org (Larry Hastings) Date: Thu, 24 Feb 2022 09:39:21 +0000 Subject: [New-bugs-announce] [issue46846] functools.partial objects should set __signature__ and __annotations__ Message-ID: <1645695561.69.0.398852174273.issue46846@roundup.psfhosted.org> New submission from Larry Hastings : I ran across an interesting bug in issue #46761. If you call functools.update_wrapper on a functools.partial object, inspect.signature will return the wrong (original) signature for the partial object. We're still figuring that one out. And, of course, it's telling that the bug has been there for a long time. I suspect this isn't something that has inconvenienced a lot of people. But: I suggest that it's time functools.partial participated in signature stuff. Specifically, I think functools.partial should generate a new and correct __signature__ for the partial object.
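The current division of labor is easy to see (illustrative only):

```python
import functools
import inspect

def foo(a: int, b: str, c: float): ...

foo_a = functools.partial(foo, 3)

# inspect.signature() already special-cases partial objects...
print(inspect.signature(foo_a))     # (b: str, c: float)
# ...but the original annotations still mention the bound parameter "a":
print(sorted(foo.__annotations__))  # ['a', 'b', 'c']
```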
And I propose it should also generate a new and correct __annotations__ for the partial, by removing all entries for parameters that are filled in by the partial object. Right now inspect.signature has special support for functools.partial objects. It finds the underlying function, and . Which means there's code in both modules that has to understand the internals of partial objects. Just from a code hygiene perspective, it'd be better if all that logic lived under functools. I wonder if functools.partial objects should generally do a better job of impersonating the original function. Should they adopt the same __name__? __file__? __qualname__? My intuition is, it'd be nice if it did. But I might be forgetting something important. (I suspect everything I said about functools.partial also applies to functools.partialmethod.) ---------- components: Library (Lib) messages: 413897 nosy: larry, rhettinger priority: normal severity: normal stage: test needed status: open title: functools.partial objects should set __signature__ and _annotations__ type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 04:47:13 2022 From: report at bugs.python.org (Larry Hastings) Date: Thu, 24 Feb 2022 09:47:13 +0000 Subject: [New-bugs-announce] [issue46847] functools.update_wrapper doesn't understand partial objects and annotations Message-ID: <1645696033.77.0.27320800273.issue46847@roundup.psfhosted.org> New submission from Larry Hastings : functools.update_wrapper currently copies over every attribute listed in the "assigned" parameter, which defaults to WRAPPER_ASSIGNMENTS, which means it copies the wrapped function's __annotations__ to the wrapper. 
This is slightly wrong if the wrapper occludes an annotated parameter:

def foo(a: int, b: str, c: float): print(a, b, c)
import functools
foo_a = functools.partial(foo, 3)
functools.update_wrapper(foo_a, foo)
print(foo_a.__annotations__)

In this case, foo_a.__annotations__ contains an annotation for a parameter named "a", even though foo_a doesn't have a parameter named "a". This problem occurred to me just after I filed #46846; the two issues are definitely related. ---------- components: Library (Lib) messages: 413898 nosy: larry, rhettinger priority: normal severity: normal stage: test needed status: open title: functools.update_wrapper doesn't understand partial objects and annotations type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 07:43:50 2022 From: report at bugs.python.org (Stefan Tatschner) Date: Thu, 24 Feb 2022 12:43:50 +0000 Subject: [New-bugs-announce] [issue46848] Use optimized string search function in mmap.find() Message-ID: <1645706630.43.0.574969248824.issue46848@roundup.psfhosted.org> New submission from Stefan Tatschner : The mmap.find() function uses a naive loop to search for string matches. This can be optimized "for free" by using libc's memmem(3) function instead. The relevant file is Modules/mmapmodule.c, the relevant function is mmap_gfind().
---------- messages: 413902 nosy: rumpelsepp priority: normal severity: normal status: open title: Use optimized string search function in mmap.find() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 09:11:26 2022 From: report at bugs.python.org (tongxiaoge) Date: Thu, 24 Feb 2022 14:11:26 +0000 Subject: [New-bugs-announce] [issue46849] Memory problems detected using Valgrind Message-ID: <1645711886.58.0.922948546021.issue46849@roundup.psfhosted.org> New submission from tongxiaoge : Reproduction steps: 1. Execute command: iotop -b -n 10 & 2. Execute the command in another session: valgrind /usr/sbin/iotop -b -n 5 > iotop_test The output information is as follows: [root at openEuler ~]# valgrind /usr/sbin/iotop -b -n 5 > iotop_test ==13750== Memcheck, a memory error detector ==13750== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==13750== Using Valgrind-3.16.0 and LibVEX; rerun with -h for copyright info ==13750== Command: /usr/sbin/iotop -b -n 5 ==13750== ==13750== Conditional jump or move depends on uninitialised value(s) ==13750== at 0x49B2C40: PyUnicode_Decode (unicodeobject.c:3488) ==13750== by 0x49B335B: unicode_new (unicodeobject.c:15465) ==13750== by 0x4982C07: type_call (typeobject.c:1014) ==13750== by 0x492CA17: _PyObject_MakeTpCall (call.c:191) ==13750== by 0x48F1863: _PyObject_VectorcallTstate (abstract.h:116) ==13750== by 0x48F1863: _PyObject_VectorcallTstate (abstract.h:103) ==13750== by 0x48F1863: PyObject_Vectorcall (abstract.h:127) ==13750== by 0x48F1863: call_function (ceval.c:5075) ==13750== by 0x48F1863: _PyEval_EvalFrameDefault (ceval.c:3518) ==13750== by 0x48EAEE7: _PyEval_EvalFrame (pycore_ceval.h:40) ==13750== by 0x48EAEE7: function_code_fastcall (call.c:330) ==13750== by 0x492CBE7: _PyObject_FastCallDictTstate (call.c:118) ==13750== by 0x492CEEB: _PyObject_Call_Prepend (call.c:489) ==13750== by 0x498A007: slot_tp_init 
(typeobject.c:6964) ==13750== by 0x4982C4F: type_call (typeobject.c:1026) ==13750== by 0x492CA17: _PyObject_MakeTpCall (call.c:191) ==13750== by 0x48F1863: _PyObject_VectorcallTstate (abstract.h:116) ==13750== by 0x48F1863: _PyObject_VectorcallTstate (abstract.h:103) ==13750== by 0x48F1863: PyObject_Vectorcall (abstract.h:127) ==13750== by 0x48F1863: call_function (ceval.c:5075) ==13750== by 0x48F1863: _PyEval_EvalFrameDefault (ceval.c:3518) ==13750== ==13751== Warning: invalid file descriptor 1024 in syscall close() ==13751== Warning: invalid file descriptor 1025 in syscall close() ==13751== Warning: invalid file descriptor 1026 in syscall close() ==13751== Warning: invalid file descriptor 1027 in syscall close() ==13751== Use --log-fd= to select an alternative log fd. ==13751== Warning: invalid file descriptor 1028 in syscall close() ==13751== Warning: invalid file descriptor 1029 in syscall close() ==13752== Warning: invalid file descriptor 1024 in syscall close() ==13752== Warning: invalid file descriptor 1025 in syscall close() ==13752== Warning: invalid file descriptor 1026 in syscall close() ==13752== Warning: invalid file descriptor 1027 in syscall close() ==13752== Use --log-fd= to select an alternative log fd. 
==13752== Warning: invalid file descriptor 1028 in syscall close() ==13752== Warning: invalid file descriptor 1029 in syscall close() ==13750== ==13750== HEAP SUMMARY: ==13750== in use at exit: 1,069,715 bytes in 10,017 blocks ==13750== total heap usage: 589,638 allocs, 579,621 frees, 128,672,782 bytes allocated ==13750== ==13750== LEAK SUMMARY: ==13750== definitely lost: 0 bytes in 0 blocks ==13750== indirectly lost: 0 bytes in 0 blocks ==13750== possibly lost: 1,042,483 bytes in 9,894 blocks ==13750== still reachable: 27,232 bytes in 123 blocks ==13750== suppressed: 0 bytes in 0 blocks ==13750== Rerun with --leak-check=full to see details of leaked memory ==13750== ==13750== Use --track-origins=yes to see where uninitialised values come from ==13750== For lists of detected and suppressed errors, rerun with: -s ==13750== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0) Software versions: python3-3.9.9 & python3-3.9.10, iotop-6.0. The above stack information is from Python 3.9.10, and this problem cannot be reproduced in Python 3.7. So is it a Python 3 problem or an iotop problem? How can it be fixed?
---------- messages: 413912 nosy: sxt1001 priority: normal severity: normal status: open title: Memory problems detected using Valgrind type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 10:36:24 2022 From: report at bugs.python.org (STINNER Victor) Date: Thu, 24 Feb 2022 15:36:24 +0000 Subject: [New-bugs-announce] [issue46850] [C API] Move _PyEval_EvalFrameDefault() to the internal C API Message-ID: <1645716984.87.0.22272006941.issue46850@roundup.psfhosted.org> New submission from STINNER Victor : In Python 3.10, _PyEval_EvalFrameDefault() has the API:

    PyObject* _PyEval_EvalFrameDefault(PyThreadState *tstate, PyFrameObject *f, int throwflag);

In Python 3.11, bpo-44590 (commit ae0a2b756255629140efcbe57fc2e714f0267aa3 "Lazily allocate frame objects (GH-27077)") changed it to:

    PyObject* _PyEval_EvalFrameDefault(PyThreadState *tstate, InterpreterFrame *frame, int throwflag);

Problem: InterpreterFrame is part of the internal C API. By the way, the PyInterpreterState.eval_frame type (_PyFrameEvalFunction) also changed. This field type already changed in Python 3.9: * ``PyInterpreterState.eval_frame`` (:pep:`523`) now requires a new mandatory *tstate* parameter (``PyThreadState*``). (Contributed by Victor Stinner in :issue:`38500`.) Maybe the Python 3.11 change should be documented in What's New in Python 3.11, as it was in What's New in Python 3.9. I propose to move most _PyEval private functions to the internal C API to clarify that they must not be used outside CPython.
---------- components: C API messages: 413918 nosy: vstinner priority: normal severity: normal status: open title: [C API] Move _PyEval_EvalFrameDefault() to the internal C API versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 14:12:34 2022 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Thu, 24 Feb 2022 19:12:34 +0000 Subject: [New-bugs-announce] [issue46851] Document multiprocessing.set_forkserver_preload Message-ID: <1645729954.43.0.679876278884.issue46851@roundup.psfhosted.org> New submission from Géry : I have just noticed that multiprocessing.set_forkserver_preload (which originates from multiprocessing.forkserver.set_forkserver_preload) is not documented: https://github.com/python/cpython/blob/v3.10.2/Lib/multiprocessing/context.py#L180-L185 ---------- messages: 413934 nosy: docs at python, maggyero priority: normal severity: normal status: open title: Document multiprocessing.set_forkserver_preload versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 19:07:57 2022 From: report at bugs.python.org (STINNER Victor) Date: Fri, 25 Feb 2022 00:07:57 +0000 Subject: [New-bugs-announce] [issue46852] Remove float.__get_format__() and float.__set_format__() Message-ID: <1645747677.37.0.919337626983.issue46852@roundup.psfhosted.org> New submission from STINNER Victor : It has been decided to require IEEE 754 to build Python 3.11: https://mail.python.org/archives/list/python-dev at python.org/thread/J5FSP6J4EITPY5C2UJI7HSL2GQCTCUWN/ At Python startup, _PyFloat_InitState() checks the IEEE 754 format at runtime. It can be changed using the float.__get_format__() and float.__set_format__() methods.
These methods' docstrings say that they only exist to test Python itself: "You probably don't want to use this function. It exists mainly to be used in Python's test suite." These methods are private and not documented. I propose to remove them. Once they are removed, it will become possible to move the detection of the IEEE 754 format to the build step (the ./configure script) rather than doing the detection at runtime (slower). It would remove an "if" in _PyFloat_Pack4() and _PyFloat_Pack8(), and allow specializing these functions for the detected format at build time. These functions are used by serialization formats: marshal, pickle and struct. ---------- components: Interpreter Core messages: 413943 nosy: vstinner priority: normal severity: normal status: open title: Remove float.__get_format__() and float.__set_format__() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 20:57:53 2022 From: report at bugs.python.org (i5-7200u) Date: Fri, 25 Feb 2022 01:57:53 +0000 Subject: [New-bugs-announce] [issue46853] Python interpreter can get code from memory, it is not secure. Message-ID: <1645754273.34.0.366451994283.issue46853@roundup.psfhosted.org> New submission from i5-7200u : Hi, the Python interpreter has a big security bug/error. My friend and I can feed virus code to the Python interpreter. We were looking for a way to run a binary application from memory (a byte array). Later we found Python, but realized this is a security bug/error. Example from my friend: https://www.virustotal.com/gui/file/6fc3ad98c40e6962f3c29e07735f7ae25e50092c3d7595201740a954ad5f3cf4?nocache=1 https://github.com/ArdaKC/run-python-in-java If we encrypt Python virus code, embed it in Java code as a byte array, then decrypt it and pass it to the Python interpreter from memory, antiviruses never detect it (except Comodo, which has a strong HIPS and auto-containment), but we don't do that.
We just want to fix this bug, for more people's security. Please. This bug is reported by the KCS Team. ---------- components: Interpreter Core files: afterexample.png messages: 413952 nosy: i5-7200u priority: normal severity: normal status: open title: Python interpreter can get code from memory, it is not secure. type: security versions: Python 3.11 Added file: https://bugs.python.org/file50642/afterexample.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 24 22:46:49 2022 From: report at bugs.python.org (aprpp) Date: Fri, 25 Feb 2022 03:46:49 +0000 Subject: [New-bugs-announce] [issue46854] Failed to compile static python3.7.12 Message-ID: <1645760809.24.0.711406085286.issue46854@roundup.psfhosted.org> New submission from aprpp <916495294 at qq.com>: I am compiling a static version of Python 3.7.12. I added the static standard-library modules that I want to compile to Modules/Setup, referencing Modules/Setup.dist in the Python source, like this:

    static Modules that should always be present (non UNIX dependent):
    array arraymodule.c # array objects
    cmath cmathmodule.c _math.c # -lm # complex math library functions
    math mathmodule.c _math.c # -lm # math library functions, e.g. sin()
    _contextvars _contextvarsmodule.c # Context Variables
    _struct _struct.c # binary structure packing/unpacking

But there are still many modules that fail to compile; these modules have no commented-out build definitions in the Modules/Setup.dist file. How do I add these modules' build definitions to Modules/Setup so that they compile successfully?
Failed to build these modules: _bz2 _ctypes _ctypes_test _decimal _hashlib _json _lsprof _lzma _multiprocessing _opcode _ssl _testbuffer _testimportmultiple _testmultiphase _uuid _xxtestfuzz ossaudiodev xxlimited ---------- components: Build files: ??.PNG messages: 413958 nosy: aprpp priority: normal severity: normal status: open title: Failed to compile static python3.7.12 type: compile error versions: Python 3.7 Added file: https://bugs.python.org/file50643/??.PNG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 10:49:37 2022 From: report at bugs.python.org (svilen dobrev) Date: Fri, 25 Feb 2022 15:49:37 +0000 Subject: [New-bugs-announce] [issue46855] printing a string with strange characters loops forever Message-ID: <1645804177.22.0.124909459168.issue46855@roundup.psfhosted.org> New submission from svilen dobrev :

    $ python
    Python 3.10.2 (main, Jan 15 2022, 19:56:27) [GCC 11.1.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = "Betrag gr\xc3\xb6\xc3\x9fer als Betrag der Original-Transaktion"
    >>> a
    'Betrag gr???\x9fer als Betrag der Original-Transaktion'
    >>> print(a)
    Betrag gr???~
    ---------------

And the above waits forever. It does not consume resources, but does not respond to Ctrl-C; Ctrl-\ kills it. The string above is just the byte string of the UTF-8 representation, with a forgotten "b" in front of it.
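As an aside on what that string actually contains, a short sketch (assuming the intended word was the German "größer") shows the usual mojibake repair: the str holds UTF-8 bytes that lost their b"" prefix, so re-encoding as latin-1 recovers the bytes and decoding them as UTF-8 recovers the text:

```python
# The report's string is UTF-8 byte values smuggled into str code points.
# Round-tripping through latin-1 recovers the intended text; "größer" is
# an assumption based on the visible German words.
a = "Betrag gr\xc3\xb6\xc3\x9fer als Betrag der Original-Transaktion"
fixed = a.encode("latin-1").decode("utf-8")
print(fixed)  # Betrag größer als Betrag der Original-Transaktion
```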
---------- components: Interpreter Core messages: 414010 nosy: svild priority: normal severity: normal status: open title: printing a string with strange characters loops forever type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 11:10:01 2022 From: report at bugs.python.org (Joris Geysens) Date: Fri, 25 Feb 2022 16:10:01 +0000 Subject: [New-bugs-announce] [issue46856] datetime.max conversion Message-ID: <1645805401.57.0.267496020204.issue46856@roundup.psfhosted.org> New submission from Joris Geysens : Reading the documentation, I don't understand how this is not possible:

    # get the max utc timestamp
    ts = datetime.max.replace(tzinfo=timezone.utc).timestamp()
    # similarly
    ts2 = datetime(9999, 12, 31, 23, 59, 59, 999999, tzinfo=timezone.utc).timestamp()
    # timestamp value 253402300800 seems correct

    # converting back to a datetime is impossible, these all fail:
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    dt = datetime.utcfromtimestamp(ts)

It should be possible to get a datetime back from the initially converted timestamp, no? ---------- messages: 414013 nosy: joris.geysens priority: normal severity: normal status: open title: datetime.max conversion type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 11:57:11 2022 From: report at bugs.python.org (STINNER Victor) Date: Fri, 25 Feb 2022 16:57:11 +0000 Subject: [New-bugs-announce] [issue46857] Python leaks one reference at exit on Windows Message-ID: <1645808231.81.0.426207041717.issue46857@roundup.psfhosted.org> New submission from STINNER Victor : "./python -X showrefcount -I -c pass" returns "[0 refs, 0 blocks]" as expected on Linux: Python doesn't leak any reference nor memory block. But on Windows, it still leaks 1 reference (and 1 memory block)!
    vstinner at DESKTOP-DK7VBIL C:\vstinner\python\main>python -X showrefcount -I -c pass
    [1 refs, 1 blocks]

I recently added a test in test_embed which now fails on Windows. See bpo-1635741 "Py_Finalize() doesn't clear all Python objects at exit" for the context. ---------- components: Interpreter Core messages: 414020 nosy: vstinner priority: normal severity: normal status: open title: Python leaks one reference at exit on Windows versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 15:47:49 2022 From: report at bugs.python.org (benrg) Date: Fri, 25 Feb 2022 20:47:49 +0000 Subject: [New-bugs-announce] [issue46858] mmap constructor resets the file pointer on Windows Message-ID: <1645822069.83.0.570318294839.issue46858@roundup.psfhosted.org> New submission from benrg : On Windows, `mmap.mmap(f.fileno(), ...)` has the undocumented side effect of setting f's file pointer to 0. The responsible code in mmapmodule is this:

    /* Win9x appears to need us seeked to zero */
    lseek(fileno, 0, SEEK_SET);

Win9x is no longer supported, and I'm quite sure that NT doesn't have whatever problem they were trying to fix. I think this code should be deleted, and a regression test added to verify that mmap leaves the file pointer alone on all platforms. (mmap also maintains its own file pointer, the `pos` field of `mmap_object`, which is initially set to zero. This issue is about the kernel file pointer, not mmap's pointer.)
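A hedged caller-side sketch, not the proposed CPython fix (which is simply to delete the lseek): until the C code changes, a caller can snapshot and restore the kernel-level file pointer around mmap creation, which is a harmless no-op on platforms without the quirk:

```python
import mmap
import os
import tempfile

# Caller-side workaround sketch for the reported side effect: snapshot
# the kernel file pointer before constructing the mmap and restore it
# afterwards, so the lseek(fileno, 0, SEEK_SET) cannot surprise us.
with tempfile.TemporaryFile() as f:
    f.write(b"x" * 4096)
    f.flush()
    fd = f.fileno()
    os.lseek(fd, 100, os.SEEK_SET)         # a position the caller cares about
    saved = os.lseek(fd, 0, os.SEEK_CUR)   # snapshot the kernel file pointer
    m = mmap.mmap(fd, 0)
    os.lseek(fd, saved, os.SEEK_SET)       # restore it unconditionally
    restored = os.lseek(fd, 0, os.SEEK_CUR)
    print(restored)  # 100
    m.close()
```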
---------- components: IO, Library (Lib), Windows messages: 414039 nosy: benrg, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: mmap constructor resets the file pointer on Windows type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 18:20:43 2022 From: report at bugs.python.org (Norman Fung) Date: Fri, 25 Feb 2022 23:20:43 +0000 Subject: [New-bugs-announce] [issue46859] NameError: free variable 'outer' referenced before assignment in enclosing scope Message-ID: <1645831243.13.0.138384399313.issue46859@roundup.psfhosted.org> New submission from Norman Fung : In reference to the ticket https://bugs.python.org/issue46672 (which was fixed for Python 3.9 and above), I encountered this problem on: a) Python 3.8.5 b) asyncio 3.4.3 Stack: Exception in callback gather.._done_callback() at C:\ProgramData\Anaconda3\lib\asyncio\tasks.py:758 handle: ._done_callback() at C:\ProgramData\Anaconda3\lib\asyncio\tasks.py:758 created at C:\ProgramData\Anaconda3\lib\asyncio\futures.py:149> source_traceback: Object created at (most recent call last): File "src\xxxxx.py", line 37, in _invoke_runners one_loop.run_until_complete(runner.xxxxx) File "C:\ProgramData\Anaconda3\lib\site-packages\nest_asyncio.py", line 90, in run_until_complete self._run_once() File "C:\ProgramData\Anaconda3\lib\site-packages\nest_asyncio.py", line 127, in _run_once handle._run() File "C:\ProgramData\Anaconda3\lib\site-packages\nest_asyncio.py", line 196, in run ctx.run(self._callback, *self._args) File "C:\ProgramData\Anaconda3\lib\asyncio\futures.py", line 356, in _set_state _copy_future_state(other, future) File "C:\ProgramData\Anaconda3\lib\asyncio\futures.py",
line 335, in _copy_future_state dest.set_result(result) File "C:\ProgramData\Anaconda3\lib\asyncio\futures.py", line 237, in set_result self.__schedule_callbacks() File "C:\ProgramData\Anaconda3\lib\asyncio\futures.py", line 149, in __schedule_callbacks self._loop.call_soon(callback, self, context=ctx) Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\nest_asyncio.py", line 196, in run ctx.run(self._callback, *self._args) File "C:\ProgramData\Anaconda3\lib\asyncio\tasks.py", line 762, in _done_callback if outer.done(): NameError: free variable 'outer' referenced before assignment in enclosing scope ---------- components: asyncio messages: 414048 nosy: asvetlov, miss-islington, norman.lm.fung, onerandomusername, sobolevn, yselivanov priority: normal severity: normal status: open title: NameError: free variable 'outer' referenced before assignment in enclosing scope type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 19:18:02 2022 From: report at bugs.python.org (Brett Cannon) Date: Sat, 26 Feb 2022 00:18:02 +0000 Subject: [New-bugs-announce] [issue46860] `--with-suffix` not respected on case-insensitive file systems Message-ID: <1645834682.9.0.270645499588.issue46860@roundup.psfhosted.org> New submission from Brett Cannon : If you use `--with-suffix` on a case-insensitive file system it is ultimately ignored and forced to `.exe`. PR incoming. 
---------- assignee: brett.cannon components: Build messages: 414051 nosy: brett.cannon priority: normal severity: normal status: open title: `--with-suffix` not respected on case-insensitive file systems type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 22:44:08 2022 From: report at bugs.python.org (benrg) Date: Sat, 26 Feb 2022 03:44:08 +0000 Subject: [New-bugs-announce] [issue46861] os.environ forces variable names to upper case on Windows Message-ID: <1645847048.14.0.626717468993.issue46861@roundup.psfhosted.org> New submission from benrg : The Windows functions that deal with environment variables are case-insensitive and case-preserving, like most Windows file systems. Many environment variables are conventionally written in all caps, but others aren't, such as `ProgramData`, `PSModulePath`, and `windows_tracing_logfile`. os.environ forces all environment variable names to upper case when it's constructed. One consequence is that if you pass a modified environment to subprocess.Popen, you end up with variables named `PROGRAMDATA`, etc., even if you didn't modify their values. While this is unlikely to break things since other software normally ignores the case, it's nonstandard behavior, and disconcerting when the affected variable names are shown to human beings. 
Here's an example of someone being confused by this: https://stackoverflow.com/questions/19023238/why-python-uppercases-all-environment-variables-in-windows ---------- components: Library (Lib), Windows messages: 414064 nosy: benrg, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.environ forces variable names to upper case on Windows type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 25 23:19:39 2022 From: report at bugs.python.org (benrg) Date: Sat, 26 Feb 2022 04:19:39 +0000 Subject: [New-bugs-announce] [issue46862] subprocess makes environment blocks with duplicate keys on Windows Message-ID: <1645849179.34.0.588618338218.issue46862@roundup.psfhosted.org> New submission from benrg : On Windows, if one writes

    env = os.environ.copy()
    env['http_proxy'] = 'whatever'

or either of the documented equivalents ({**os.environ, ...} or (os.environ | {...})), and passes the resulting environment to subprocess.run or subprocess.Popen, the spawned process may get an environment containing both `HTTP_PROXY` and `http_proxy`. Most Win32 software will see only the first one, which contains the unmodified value from os.environ. Because os.environ forces all keys to upper case, it's possible to work around this by using only upper case keys in the update, but that behavior of os.environ is nonstandard (issue 46861), and subprocess shouldn't depend on it always being true, nor should end users have to. Since dicts preserve order, the user's (presumable) intent is preserved in the env argument. I think subprocess should do something like

    env = {k.upper(): (k, v) for k, v in env.items()}
    env = dict(env.values())

to discard duplicate keys, keeping only the rightmost one.
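The two lines benrg proposes can be exercised on their own; a sketch with invented sample values (the variable names and values here are illustrative, not from any real environment):

```python
# Dedup sketch: keys that collide case-insensitively collapse to one
# entry, keeping the rightmost spelling and value; unrelated keys pass
# through untouched. Sample values are invented for illustration.
env = {"HTTP_PROXY": "from-os-environ", "http_proxy": "whatever", "PATH": r"C:\bin"}
env = {k.upper(): (k, v) for k, v in env.items()}
env = dict(env.values())
print(env)  # {'http_proxy': 'whatever', 'PATH': 'C:\\bin'}
```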
---------- components: Library (Lib), Windows messages: 414068 nosy: benrg, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: subprocess makes environment blocks with duplicate keys on Windows type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 00:12:46 2022 From: report at bugs.python.org (Adam Pinckard) Date: Sat, 26 Feb 2022 05:12:46 +0000 Subject: [New-bugs-announce] [issue46863] Python 3.10 OpenSSL Configuration Issues Message-ID: <1645852366.6.0.130545163871.issue46863@roundup.psfhosted.org> New submission from Adam Pinckard : Python 3.10 does not appear to respect the OpenSSL configuration on Linux. Testing was completed using pyenv on both Ubuntu 20.04.4 and Centos-8. Note that PEP 644, which requires OpenSSL >= 1.1.1, was implemented in Python 3.10. We operate behind a corporate proxy / firewall which causes an SSL error where the Diffie-Hellman key size is too small. In previous Python versions this is resolved by updating the OpenSSL configuration, e.g. downgrading the Linux crypto policies with `sudo update-crypto-policies --set LEGACY`. The issue is reproducible in both Ubuntu 20.04.4 and Centos-8. In both Linux distributions the SSL error is resolvable in earlier Python versions using the OpenSSL configuration, but the configuration is not respected with Python 3.10.2. See the details below on the kernel versions, Linux distributions, and OpenSSL versions; many thanks in advance.

1. Python 3.10.2 Error:

    (py_3_10_2) ? py_3_10_2 pip install --upgrade pip
    WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:997)'))': /simple/pip/

2.
Ubuntu details

    uname -a
    Linux Horatio 5.13.0-30-generic #33~20.04.1-Ubuntu SMP Mon Feb 7 14:25:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

    lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 20.04.4 LTS
    Release: 20.04
    Codename: focal

    openssl version -a
    OpenSSL 1.1.1f 31 Mar 2020
    built on: Wed Nov 24 13:20:48 2021 UTC
    platform: debian-amd64
    options: bn(64,64) rc4(16x,int) des(int) blowfish(ptr)
    compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -fdebug-prefix-map=/build/openssl-dnfdFp/openssl-1.1.1f=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
    OPENSSLDIR: "/usr/lib/ssl"
    ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1"
    Seeding source: os-specific

3.
Centos-8 details

    uname -a
    Linux localhost.localdomain 5.4.181-1.el8.elrepo.x86_64 #1 SMP Tue Feb 22 10:00:15 EST 2022 x86_64 x86_64 x86_64 GNU/Linux

    cat /etc/centos-release
    CentOS Stream release 8

    openssl version -a
    OpenSSL 1.1.1k FIPS 25 Mar 2021
    built on: Thu Dec 2 16:40:48 2021 UTC
    platform: linux-x86_64
    options: bn(64,64) md2(char) rc4(16x,int) des(int) idea(int) blowfish(ptr)
    compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -Wa,--noexecstack -Wa,--generate-missing-build-notes=yes -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DZLIB -DNDEBUG -DPURIFY -DDEVRANDOM="\"/dev/urandom\"" -DSYSTEM_CIPHERS_FILE="/etc/crypto-policies/back-ends/openssl.config"
    OPENSSLDIR: "/etc/pki/tls"
    ENGINESDIR: "/usr/lib64/engines-1.1"
    Seeding source: os-specific
    engines: rdrand dynamic

---------- assignee: christian.heimes components: SSL messages: 414072 nosy: adam, christian.heimes priority: normal severity: normal status: open title: Python 3.10 OpenSSL Configuration Issues type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 03:49:05 2022 From: report at bugs.python.org (Inada Naoki) Date: Sat, 26 Feb 2022 08:49:05 +0000 Subject: [New-bugs-announce] [issue46864] Deprecate ob_shash in BytesObject Message-ID:
<1645865345.09.0.509753942233.issue46864@roundup.psfhosted.org> New submission from Inada Naoki : Code objects now have more and more bytes attributes. To reduce the RAM used by code objects, I want to remove ob_shash (the cached hash value) from the bytes object. Sets and dicts have their own hash caches. Unless the same bytes object is checked against dicts/sets many times, this doesn't cause a big performance loss. ---------- components: Interpreter Core messages: 414083 nosy: methane priority: normal severity: normal status: open title: Deprecate ob_shash in BytesObject versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 04:04:32 2022 From: report at bugs.python.org (Robert Spralja) Date: Sat, 26 Feb 2022 09:04:32 +0000 Subject: [New-bugs-announce] [issue46865] *() Invalid Syntax: iterable unpacking of empty tuple Message-ID: <1645866272.11.0.779642676261.issue46865@roundup.psfhosted.org> New submission from Robert Spralja :

    >>> def foo(num=1):
    ...     return num
    ...
    >>> foo(*(bool,) is bool else *())
      File "<stdin>", line 1
        foo(*(bool,) is bool else *())
                                  ^
    SyntaxError: invalid syntax
    >>> foo(*(bool,) if bool else *())
      File "<stdin>", line 1
        foo(*(bool,) if bool else *())
                                  ^
    SyntaxError: invalid syntax
    >>> def foo(num=1):
    ...     return num
    ...
    >>> stri = ''
    >>> foo(*(stri,) if stri else *())
      File "<stdin>", line 1
        foo(*(stri,) if stri else *())
                                  ^
    SyntaxError: invalid syntax
    >>> foo(*((stri,) if stri else ()))
    1
    >>>

Iterable unpacking of an empty tuple seems to not work in one example but does in another.
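The last call in that transcript is the working spelling: the conditional expression must be parenthesized as a whole so that a single * unpacks its result, because a bare * inside a branch of a conditional is rejected by the grammar. A sketch:

```python
# Parenthesize the whole conditional so one * unpacks whichever tuple it
# yields; `f(*x if c else *y)` is a compile-time SyntaxError.
def foo(num=1):
    return num

stri = ""
result = foo(*((stri,) if stri else ()))   # unpacks () -> default is used
print(result)  # 1

try:
    compile("foo(*(stri,) if stri else *())", "<sketch>", "eval")
    rejected = False
except SyntaxError:
    rejected = True
print(rejected)  # True
```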
---------- messages: 414085 nosy: spralja priority: normal severity: normal status: open title: *() Invalid Syntax: iterable unpacking of empty tuple versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 06:37:49 2022 From: report at bugs.python.org (Sec) Date: Sat, 26 Feb 2022 11:37:49 +0000 Subject: [New-bugs-announce] [issue46866] bytes class extension with slices Message-ID: <1645875469.77.0.975365502417.issue46866@roundup.psfhosted.org> New submission from Sec : When trying to extend the builtin bytes class, slices fall back to the builtin class.

```
class my_bytes(bytes):
    def dummy(self):
        print("dummy called")

x = my_bytes.fromhex("c0de c0de")
print(x.__class__)
print(x[1:].__class__)
```

x.__class__ returns <class '__main__.my_bytes'> as expected. But x[1:].__class__ returns <class 'bytes'>. ---------- components: Interpreter Core files: bytes_test.py messages: 414092 nosy: Sec42 priority: normal severity: normal status: open title: bytes class extension with slices type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50646/bytes_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 18:17:50 2022 From: report at bugs.python.org (grifonice99) Date: Sat, 26 Feb 2022 23:17:50 +0000 Subject: [New-bugs-announce] [issue46867] difference of work Message-ID: <1645917470.03.0.0455277149527.issue46867@roundup.psfhosted.org> New submission from grifonice99 : I was developing a ThreadPool with priority support on Windows. Once all the tests passed on Windows I moved to Linux, where it no longer worked, because in the thread_start function the self argument doesn't "update" the way it does on Windows. ---------- files: ThreadPool.py messages: 414118 nosy: grifonice99 priority: normal severity: normal status: open title: difference of work type: behavior versions: Python 3.10 Added file:
https://bugs.python.org/file50648/ThreadPool.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 18:35:53 2022 From: report at bugs.python.org (benrg) Date: Sat, 26 Feb 2022 23:35:53 +0000 Subject: [New-bugs-announce] [issue46868] Improve performance of math.prod with bignums (and functools.reduce?) Message-ID: <1645918553.19.0.346410511769.issue46868@roundup.psfhosted.org> New submission from benrg : math.prod is slow at multiplying arbitrary-precision numbers. E.g., compare the run time of factorial(50000) to prod(range(2, 50001)). factorial has some special-case optimizations, but the bulk of the difference is due to prod evaluating an expression tree of depth n. If you re-parenthesize the product so that the tree has depth log n, as factorial does, it's much faster. The evaluation order of prod isn't documented, so I think the change would be safe. factorial uses recursion to build the tree, but it can be done iteratively with no advance knowledge of the total number of nodes. This trick is widely useful for turning a way of combining two things into a way of combining many things, so I wouldn't mind seeing a generic version of it in the standard library, e.g. reduce(..., order='mid'). For many specific cases there are more efficient alternatives (''.join, itertools.chain, set.union, heapq.merge), but it's nice to have a recipe that saves you the trouble of writing special-case algorithms at the cost of a log factor that's often ignorable. ---------- components: Library (Lib) messages: 414126 nosy: benrg priority: normal severity: normal status: open title: Improve performance of math.prod with bignums (and functools.reduce?)
type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 19:09:37 2022 From: report at bugs.python.org (Evernow) Date: Sun, 27 Feb 2022 00:09:37 +0000 Subject: [New-bugs-announce] [issue46869] platform.release() and sys returns wrong version on Windows 11 Message-ID: <1645920577.85.0.691364066718.issue46869@roundup.psfhosted.org> New submission from Evernow : Hello. On Windows 11 the platform module returns Windows 10 instead of Windows 11, same for the sys module.

    Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import platform
    >>> platform.release()
    '10'
    >>> import sys
    >>> sys.getwindowsversion().platform_version
    (10, 0, 22000)

---------- components: Windows messages: 414129 nosy: Evernow, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: platform.release() and sys returns wrong version on Windows 11 type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 20:12:03 2022 From: report at bugs.python.org (Pocas) Date: Sun, 27 Feb 2022 01:12:03 +0000 Subject: [New-bugs-announce] [issue46870] Improper Input Validation in urlparse Message-ID: <1645924323.46.0.342217272005.issue46870@roundup.psfhosted.org> New submission from Pocas : If the http:@localhost URL is entered as an argument to the urlparse() function, the parser cannot parse it properly. Since http:@localhost is a valid URL, the characters after the @ character must be parsed as a hostname.

    Python 3.9.10 (main, Jan 15 2022, 11:48:04) [Clang 13.0.0 (clang-1300.0.29.3)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
>>> from urllib.parse import urlparse
>>> print(urlparse('http:@localhost'))
ParseResult(scheme='http', netloc='', path='@localhost', params='', query='', fragment='')
>>>

---------- messages: 414132 nosy: P0cas priority: normal severity: normal status: open title: Improper Input Validation in urlparse type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 26 21:58:03 2022 From: report at bugs.python.org (Kyle Smith) Date: Sun, 27 Feb 2022 02:58:03 +0000 Subject: [New-bugs-announce] [issue46871] BaseManager.register no longer supports lambda callable 3.8.12+ Message-ID: <1645930683.9.0.146416539661.issue46871@roundup.psfhosted.org> New submission from Kyle Smith : The below code works on versions 3.5.2 to 3.8.10. Higher versions tested, such as 3.9.12 and 3.10.2, result in the error: "AttributeError: Can't pickle local object".

from multiprocessing import Lock
from multiprocessing.managers import AcquirerProxy, BaseManager, DictProxy

def get_shared_state(host, port, key):
    shared_dict = {}
    shared_lock = Lock()
    manager = BaseManager((host, port), key)
    manager.register("get_dict", lambda: shared_dict, DictProxy)
    manager.register("get_lock", lambda: shared_lock, AcquirerProxy)
    try:
        manager.get_server()
        manager.start()
    except OSError:  # Address already in use
        manager.connect()
    return manager.get_dict(), manager.get_lock()

HOST = "127.0.0.1"
PORT = 35791
KEY = b"secret"
shared_dict, shared_lock = get_shared_state(HOST, PORT, KEY)
shared_dict["number"] = 0
shared_dict["text"] = "Hello World"

This code was pulled from this article: https://stackoverflow.com/questions/57734298/how-can-i-provide-shared-state-to-my-flask-app-with-multiple-workers-without-dep/57810915#57810915 I looked around and couldn't find any open or closed bugs for this, so I'm sorry in advance if this is new expected behavior.
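A workaround often suggested for this kind of failure is to register plain module-level functions instead of lambdas. The sketch below rests on the assumption that the error comes from the manager's server process being started via the "spawn" mechanism, which pickles the registered callables; a module-level function pickles by its qualified name, while a lambda is a local object and cannot. This is an illustrative sketch, not a confirmed fix for the report:

```python
from multiprocessing.managers import BaseManager

shared_dict = {}

def get_shared_dict():
    # Defined at module level, so pickle can serialize it by reference
    # (its qualified name); a lambda would be an unpicklable local object.
    return shared_dict

class DictManager(BaseManager):
    pass

# Same registration pattern as the report, minus the lambdas.
DictManager.register("get_dict", callable=get_shared_dict)
```

The proxy types and server start/connect logic from the report would be layered on top of this unchanged.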
---------- components: Interpreter Core messages: 414137 nosy: kyle.smith priority: normal severity: normal status: open title: BaseManager.register no longer supports lambda callable 3.8.12+ type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 27 06:26:00 2022 From: report at bugs.python.org (Dan Snider) Date: Sun, 27 Feb 2022 11:26:00 +0000 Subject: [New-bugs-announce] [issue46872] Odd handling of signal raised if an illegal syscall is attempted on Android Message-ID: <1645961160.27.0.346757629998.issue46872@roundup.psfhosted.org> New submission from Dan Snider : On Android, the following calls generate a SIGSYS signal that is neither blocked by pthread_sigmask(SIG_BLOCK, {SIGSYS}) nor ignored after its handler is set to SIG_IGN:

(os.chroot(path))
os.setgid(rgid)
os.setuid(ruid)
(os.setegid(gid))
os.setregid(rgid, egid)
os.setreuid(ruid, euid)
os.setresgid(rgid, egid, sgid)
time.clock_settime(clock, time)
time.clock_settime_ns(clock, time)
(socket.sethostname(name))

On the other hand, signal(SIGSYS, lambda s, p: None) will catch the signal, but based on the frame it receives (None), I suspect this is a coincidence. Also, the functions with parenthesized names in that list raise the equivalent of OSError(0, "Error", "%s"%args[0]).
---------- components: C API messages: 414148 nosy: bup priority: normal severity: normal status: open title: Odd handling of signal raised if an illegal syscall is attempted on Android type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 27 08:15:29 2022 From: report at bugs.python.org (Adam Hopkins) Date: Sun, 27 Feb 2022 13:15:29 +0000 Subject: [New-bugs-announce] [issue46873] inspect.getsource with some lambdas in decorators does not get the full source Message-ID: <1645967729.03.0.78553115173.issue46873@roundup.psfhosted.org> New submission from Adam Hopkins : I believe the following produces an unexpected behavior:

from inspect import getsource

def bar(*funcs):
    def decorator(func):
        return func
    return decorator

@bar(lambda x: bool(True), lambda x: False)
async def foo():
    ...

print(getsource(foo))

The output shows only the decorator declaration and none of the function:

@bar(lambda x: bool(True), lambda x: False)

From my investigation, it seems like this requires the following conditions to be true:
- lambdas are passed in decorator arguments
- there is more than one lambda
- at least one of the lambdas has a function call

Passing the lambdas as default function arguments seems okay:

async def foo(bar=[lambda x: bool(True), lambda x: False]):
    ...

A single lambda seems okay:

@bar(lambda x: bool(True))
async def foo():
    ...

Lambdas with no function calls also seem okay:

@bar(lambda x: not x, lambda: True)
async def foo():
    ...
Tested this on:
- Python 3.10.2
- Python 3.9.9
- Python 3.8.11
- Python 3.7.12

---------- messages: 414149 nosy: ahopkins2 priority: normal severity: normal status: open title: inspect.getsource with some lambdas in decorators does not get the full source versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 27 08:40:10 2022 From: report at bugs.python.org (Erlend E. Aasland) Date: Sun, 27 Feb 2022 13:40:10 +0000 Subject: [New-bugs-announce] [issue46874] [sqlite3] optimise user-defined functions Message-ID: <1645969210.54.0.240452719118.issue46874@roundup.psfhosted.org> New submission from Erlend E. Aasland : Currently, the `step` method of user-defined functions is looked up using `PyObject_GetAttrString`. Using an interned string and `PyObject_GetAttr`, we can speed this up a little bit. ---------- components: Library (Lib) messages: 414151 nosy: erlendaasland priority: normal severity: normal status: open title: [sqlite3] optimise user-defined functions versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 27 13:30:02 2022 From: report at bugs.python.org (Joongi Kim) Date: Sun, 27 Feb 2022 18:30:02 +0000 Subject: [New-bugs-announce] [issue46875] Missing name in TaskGroup.__repr__() Message-ID: <1645986602.91.0.613273361615.issue46875@roundup.psfhosted.org> New submission from Joongi Kim : The __repr__() method in asyncio.TaskGroup does not include self._name. I think this is a simple oversight, because asyncio.Task includes the task name in __repr__(). :wink: https://github.com/python/cpython/blob/345572a1a02/Lib/asyncio/taskgroups.py#L28-L42 I'll make a simple PR to fix it.
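The shape of such a fix is the usual one for named objects. A hypothetical stand-in class (`NamedGroup` is invented here for illustration, not the real TaskGroup) shows the pattern that asyncio.Task already follows:

```python
class NamedGroup:
    """Hypothetical stand-in showing a name-aware __repr__."""

    def __init__(self, *, name=None):
        self._name = name or ""
        self._tasks = []

    def __repr__(self):
        # Include the name only when one was given, like asyncio.Task does.
        info = [self.__class__.__name__]
        if self._name:
            info.append(f"name={self._name!r}")
        info.append(f"tasks={len(self._tasks)}")
        return f"<{' '.join(info)}>"
```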
---------- components: asyncio messages: 414162 nosy: achimnol, asvetlov, gvanrossum, yselivanov priority: normal severity: normal status: open title: Missing name in TaskGroup.__repr__() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 27 19:01:24 2022 From: report at bugs.python.org (Mohammad Mahdi Zojaji Monfared) Date: Mon, 28 Feb 2022 00:01:24 +0000 Subject: [New-bugs-announce] [issue46876] Walrus operator not in help Message-ID: <1646006484.73.0.147239709954.issue46876@roundup.psfhosted.org> New submission from Mohammad Mahdi Zojaji Monfared : The walrus operator := is not in help("symbols"), and help(":=") does not work. ---------- components: Interpreter Core files: walrus.png messages: 414168 nosy: mmahdizojajim priority: normal severity: normal status: open title: Walrus operator not in help type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file50651/walrus.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 01:41:43 2022 From: report at bugs.python.org (Martin Fischer) Date: Mon, 28 Feb 2022 06:41:43 +0000 Subject: [New-bugs-announce] [issue46877] unittest.doModuleCleanups() does not exist Message-ID: <1646030503.89.0.975314565573.issue46877@roundup.psfhosted.org> New submission from Martin Fischer : The unittest documentation[1] describes unittest.doModuleCleanups(). That function, however, doesn't exist, since it's only in the unittest.case module and not re-exported in the unittest module (unlike addModuleCleanup). So I think either the documentation should be corrected or doModuleCleanups should be re-exported in unittest/__init__.py to match the documentation.
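The discrepancy is easy to check from the interpreter. On the affected versions the second lookup is False; later versions may differ if the re-export is added:

```python
import unittest
import unittest.case

# The implementation has always lived in unittest.case.
assert hasattr(unittest.case, "doModuleCleanups")

# Whether the top-level package re-exports it depends on the version;
# on the versions this report covers, this prints False.
print(hasattr(unittest, "doModuleCleanups"))
```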
[1]: https://docs.python.org/3.8/library/unittest.html ---------- assignee: docs at python components: Documentation messages: 414177 nosy: docs at python, lisroach, michael.foord, push-f priority: normal severity: normal status: open title: unittest.doModuleCleanups() does not exist type: enhancement versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 05:50:28 2022 From: report at bugs.python.org (Erlend E. Aasland) Date: Mon, 28 Feb 2022 10:50:28 +0000 Subject: [New-bugs-announce] [issue46878] [sqlite3] remove "non-standard" from docstrings Message-ID: <1646045428.22.0.0724822171916.issue46878@roundup.psfhosted.org> New submission from Erlend E. Aasland : Several sqlite3 methods are "marked" as non-standard in their docstrings. This is an historic artefact which (I assume) implies that a method is not a part of the DB-API defined by PEP 249. Questions regarding the "non-standard" strings arise from time to time, as the meaning is not immediately obvious. The question surfaced in a code review in October 2021[^1], and again in a more recent PR[^2]. Suggesting to purge "non-standard" from all docstrings once and for all to avoid more confusion. 
[^1]: https://github.com/python/cpython/pull/28463#discussion_r724371832 [^2]: https://github.com/python/cpython/pull/26728#discussion_r815523101 ---------- messages: 414186 nosy: Jelle Zijlstra, erlendaasland priority: normal severity: normal status: open title: [sqlite3] remove "non-standard" from docstrings versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 07:47:50 2022 From: report at bugs.python.org (Martin Fischer) Date: Mon, 28 Feb 2022 12:47:50 +0000 Subject: [New-bugs-announce] [issue46879] [doc] incorrect sphinx object names Message-ID: <1646052470.1.0.108518625091.issue46879@roundup.psfhosted.org> New submission from Martin Fischer : API members documented in sphinx have an object name, which allows the documentation to be linked from other projects. Sphinx calculates the object name by prefixing the current module name to the directive argument, e.g.:

.. module:: foo
.. function:: bar.baz

becomes foo.bar.baz. Since these anchors aren't displayed in the documentation, some mistakes have crept in; namely, the Python stdlib documentation currently contains the objects:

* asyncio.asyncio.subprocess.DEVNULL
* asyncio.asyncio.subprocess.PIPE
* asyncio.asyncio.subprocess.STDOUT
* asyncio.asyncio.subprocess.Process
* multiprocessing.sharedctypes.multiprocessing.Manager
* xml.etree.ElementTree.xml.etree.ElementInclude

As can be observed in the URL fragments:

https://docs.python.org/3/library/asyncio-subprocess.html#asyncio.asyncio.subprocess.Process
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.sharedctypes.multiprocessing.Manager
https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.xml.etree.ElementInclude.default_loader

I have a patch prepared; I'll send a PR straight away.
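The prefixing rule means a fully qualified directive argument under an active module silently doubles the prefix. A schematic reST sketch (module and object names taken from the report; the directive bodies are invented for illustration):

```rst
.. module:: asyncio.subprocess

.. class:: Process

   Object name: ``asyncio.subprocess.Process`` (correct).

.. module:: asyncio

.. class:: asyncio.subprocess.Process

   Object name: ``asyncio.asyncio.subprocess.Process`` — the doubled
   prefix the report describes.
```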
---------- assignee: docs at python components: Documentation messages: 414192 nosy: docs at python, push-f priority: normal severity: normal status: open title: [doc] incorrect sphinx object names versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 07:48:44 2022 From: report at bugs.python.org (Nimrod Fiat) Date: Mon, 28 Feb 2022 12:48:44 +0000 Subject: [New-bugs-announce] [issue46880] zipfile library doesn't extract windows zip files properly on linux Message-ID: <1646052524.98.0.628413785417.issue46880@roundup.psfhosted.org> New submission from Nimrod Fiat : Created a zip file using Powershell's Compress-Archive method. Moved the file to Debian. Used zipfile's extractall method to extract. The result was a flat directory with long file names such as: "migrated-image952821\\m4a\\runiis.ps". I would expect instead for a "migrated-image952821" directory to be created, containing an "m4a" directory which contains "runiis.ps" ---------- components: Library (Lib) messages: 414193 nosy: nimrodf priority: normal severity: normal status: open title: zipfile library doesn't extract windows zip files properly on linux type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 08:02:28 2022 From: report at bugs.python.org (Kumar Aditya) Date: Mon, 28 Feb 2022 13:02:28 +0000 Subject: [New-bugs-announce] [issue46881] Statically allocate and initialize the latin1 characters. Message-ID: <1646053348.55.0.937753092888.issue46881@roundup.psfhosted.org> New submission from Kumar Aditya : Statically allocate and initialize the latin1 characters. 
This *should* make iterating over ASCII strings faster, as it avoids an atomic read in PyInterpreterState_GET() to get unicode state in get_latin1_char, makes get_latin1_char branchless, and can be used in deepfreeze for identifiers. ---------- components: Interpreter Core messages: 414195 nosy: Mark.Shannon, eric.snow, gvanrossum, kumaraditya303 priority: normal severity: normal status: open title: Statically allocate and initialize the latin1 characters. versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 09:28:31 2022 From: report at bugs.python.org (Rotzbua) Date: Mon, 28 Feb 2022 14:28:31 +0000 Subject: [New-bugs-announce] [issue46882] Clarify argument type of platform.platform(aliased, terse) to boolean Message-ID: <1646058511.8.0.677652153168.issue46882@roundup.psfhosted.org> New submission from Rotzbua : Problem: Both arguments `aliased` and `terse` should be boolean instead of integer. Description: The function signature is `platform.platform(aliased=0, terse=0)`, so both arguments `aliased` and `terse` seem to be numbers. The documentation says: "If aliased is true,[..]" which gives a hint that the type should be boolean instead of an integer. Looking into the implementation, both arguments are used as booleans. Solution: Update documentation and set default argument values to `False` instead of `0`.
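Today the two spellings are interchangeable, since the arguments are only ever used in truth-value tests; a quick check:

```python
import platform

# Booleans and the historical 0/1 values behave identically, because the
# implementation only tests the arguments for truthiness.
full = platform.platform()
terse = platform.platform(aliased=True, terse=True)

assert isinstance(full, str) and full
assert isinstance(terse, str) and terse
assert platform.platform(aliased=1, terse=1) == terse
```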
Reference: Current documentation: https://docs.python.org/3.11/library/platform.html#platform.platform ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 414198 nosy: Rotzbua, docs at python priority: normal severity: normal status: open title: Clarify argument type of platform.platform(aliased, terse) to boolean type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 10:18:18 2022 From: report at bugs.python.org (Steven D'Aprano) Date: Mon, 28 Feb 2022 15:18:18 +0000 Subject: [New-bugs-announce] [issue46883] Add glossary entries to clarify the true/True and false/False distinction Message-ID: <1646061498.87.0.521811715231.issue46883@roundup.psfhosted.org> New submission from Steven D'Aprano : There is a long-standing tradition, going back to Python 1.x days before we had dedicated True and False values, to use the lowercase "true" and "false" to mean *any value that duck-types as True* and *any value that duck-types as False* in a boolean context. Other terms for this same concept include "truthy/falsey" and using true/false as adjectives rather than nouns, e.g. "a true value". But I am not sure whether this is actually written down anywhere in the documentation. It would be useful for those who are not aware of the convention (e.g. beginners and people coming from other languages) if the Glossary had entries for lowercase "true" and "false" that explained the usage and referred back to PEP 285. See for example #46882 where this came up. I suggest something like the following:

boolean context
    Code such as ``if condition:`` and ``while condition:`` which causes the expression ``condition`` to be evaluated as if it were a :class:`bool`.

false
    Any object which evaluates to the :class:`bool` singleton ``False`` in a :term:`boolean context`.
    Informally known as "falsey". See :term:`true` and :pep:`285`. Among the builtins, false values include ``None``, empty containers and strings, and zero numbers.

true
    Any object which evaluates to the :class:`bool` singleton ``True`` in a :term:`boolean context`. Informally known as "truthy". See :term:`false` and :pep:`285`. Among the builtins, true values include non-empty containers and strings, non-zero numbers (including NANs), and all other objects by default.

---------- assignee: docs at python components: Documentation messages: 414204 nosy: docs at python, steven.daprano priority: normal severity: normal status: open title: Add glossary entries to clarify the true/True and false/False distinction type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 12:33:33 2022 From: report at bugs.python.org (Martin Fischer) Date: Mon, 28 Feb 2022 17:33:33 +0000 Subject: [New-bugs-announce] [issue46884] [doc] msilib.rst uses data directive to document modules Message-ID: <1646069613.09.0.14886937024.issue46884@roundup.psfhosted.org> New submission from Martin Fischer : As per [1] the py:data directive describes data in a module. It should not be used for submodules, that's what the module directive is for. A side effect of this is that msilib.schema, msilib.sequence and msilib.text do not show up in the Python Module Index[2] as they should.
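The fix the report implies is mechanical; schematically, in reST (directive bodies elided, names from the report):

```rst
.. Before: a submodule documented as module-level data, so it never
   reaches the module index.
.. data:: schema

.. After: the module directive registers msilib.schema in the index.
.. module:: msilib.schema
```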
[1]: https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html [2]: file:///home/martin/repos-contrib/cpython/Doc/build/html/py-modindex.html#cap-m ---------- assignee: docs at python components: Documentation messages: 414209 nosy: docs at python, push-f priority: normal severity: normal status: open title: [doc] msilib.rst uses data directive to document modules type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 13:53:32 2022 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 28 Feb 2022 18:53:32 +0000 Subject: [New-bugs-announce] [issue46885] Ensure PEP 663 changes are reverted from 3.11 Message-ID: <1646074412.45.0.770188341113.issue46885@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : As PEP 663 https://github.com/python/steering-council/issues/76 was rejected, we need to ensure that the changes made in 3.11 (see https://github.com/python/steering-council/issues/76#issuecomment-970668967) are reverted. I am marking this as a release blocker so we don't forget.
---------- assignee: ethan.furman messages: 414214 nosy: ethan.furman, pablogsal priority: release blocker severity: normal status: open title: Ensure PEP 663 changes are reverted from 3.11 versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 28 17:53:08 2022 From: report at bugs.python.org (Eric Snow) Date: Mon, 28 Feb 2022 22:53:08 +0000 Subject: [New-bugs-announce] [issue46886] pyexpat occasionally fails to build on the ARM64 Windows Non-Debug 3.x buildbot Message-ID: <1646088788.98.0.693772071082.issue46886@roundup.psfhosted.org> New submission from Eric Snow : example: https://buildbot.python.org/all/#/builders/730/builds/4081 ---------- components: Build messages: 414223 nosy: eric.snow, vstinner priority: normal severity: normal stage: needs patch status: open title: pyexpat occasionally fails to build on the ARM64 Windows Non-Debug 3.x buildbot versions: Python 3.11 _______________________________________ Python tracker _______________________________________