From report at bugs.python.org Sun Aug 1 03:08:58 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 01 Aug 2021 07:08:58 +0000 Subject: [New-bugs-announce] [issue44801] Type expression is coerced to a list of parameter arguments in substitution of ParamSpec Message-ID: <1627801738.44.0.278142498011.issue44801@roundup.psfhosted.org> New submission from Serhiy Storchaka : A type expression is coerced to a list of parameter arguments in substitution of ParamSpec. For example: >>> from typing import * >>> T = TypeVar('T') >>> P = ParamSpec('P') >>> C = Callable[P, T] >>> C[int, str] typing.Callable[[int], str] int becomes [int]. There is even a dedicated test for this. But it does not follow from PEP 612. Furthermore, it contradicts one of the examples in the PEP: >>> class X(Generic[T, P]): ... f: Callable[P, int] ... x: T ... >>> X[int, int] # Should be rejected __main__.X[int, int] It makes the implementation (at least the code in issue44796) more complex and makes the user code more error-prone. ---------- messages: 398687 nosy: gvanrossum, kj, serhiy.storchaka priority: normal severity: normal status: open title: Type expression is coerced to a list of parameter arguments in substitution of ParamSpec _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 1 07:37:51 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 01 Aug 2021 11:37:51 +0000 Subject: [New-bugs-announce] [issue44802] Substitution does not work after ParamSpec substitution of the user generic with a list of TypeVars Message-ID: <1627817871.37.0.426591576704.issue44802@roundup.psfhosted.org> New submission from Serhiy Storchaka : If a user generic with a ParamSpec parameter is substituted with a parametrised list containing a TypeVar, that TypeVar cannot be substituted afterwards. >>> from typing import * >>> T = TypeVar("T") >>> P = ParamSpec("P") >>> class X(Generic[P]): ... f: Callable[P, int] ... >>> Y = X[[int, T]] >>> Y __main__.X[(<class 'int'>, ~T)] >>> Y[str] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/serhiy/py/cpython/Lib/typing.py", line 309, in inner return func(*args, **kwds) ^^^^^^^^^^^^^^^^^^^ File "/home/serhiy/py/cpython/Lib/typing.py", line 1028, in __getitem__ _check_generic(self, params, len(self.__parameters__)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/serhiy/py/cpython/Lib/typing.py", line 228, in _check_generic raise TypeError(f"{cls} is not a generic class") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: __main__.X[(<class 'int'>, ~T)] is not a generic class The expected result is equal to X[[int, str]]. ---------- components: Library (Lib) messages: 398694 nosy: gvanrossum, kj, serhiy.storchaka priority: normal severity: normal status: open title: Substitution does not work after ParamSpec substitution of the user generic with a list of TypeVars type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 1 10:31:36 2021 From: report at bugs.python.org (=?utf-8?q?Anton_Gr=C3=BCbel?=) Date: Sun, 01 Aug 2021 14:31:36 +0000 Subject: [New-bugs-announce] [issue44803] change tracemalloc.BaseFilter to an abstract class Message-ID: <1627828296.31.0.469663144509.issue44803@roundup.psfhosted.org> New submission from Anton Grübel : During some work on typeshed I found the BaseFilter class in tracemalloc; it looks like, and is used as, a typical abstract class.
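For reference, here is a minimal sketch of the kind of change being proposed (my own illustration, not the reporter's patch), assuming BaseFilter keeps its current (inclusive) constructor and its _match() hook:

```python
from abc import ABC, abstractmethod


class BaseFilter(ABC):
    """Hypothetical abstract version of tracemalloc.BaseFilter."""

    def __init__(self, inclusive):
        self.inclusive = inclusive

    @abstractmethod
    def _match(self, trace):
        """Concrete filters (Filter, DomainFilter) implement the matching logic."""


# With ABC in place, instantiating BaseFilter() directly raises TypeError,
# while subclasses that implement _match() keep working unchanged.
```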
I will also directly create the PR :) if you think I'm missing something, I'm happy to hear some other thoughts. ---------- components: Library (Lib) messages: 398699 nosy: anton.gruebel priority: normal severity: normal status: open title: change tracemalloc.BaseFilter to an abstract class type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 01:18:58 2021 From: report at bugs.python.org (Arun) Date: Mon, 02 Aug 2021 05:18:58 +0000 Subject: [New-bugs-announce] [issue44804] Port fix of "issue44422" to Python3.6.x Message-ID: <1627881538.72.0.243584861486.issue44804@roundup.psfhosted.org> New submission from Arun : We have seen multiple occurrences of the issue reported and fixed in https://bugs.python.org/issue44422, on RHEL8.3 with Python3.6.x. I understand RHEL8.4 is also shipping with Python3.6.x as the default version and it's going to be the same with RHEL8.5 as well. This bug is to port that fix to Python3.6.x version as well. This is impacting lot of our customers running large scale enterprise application. ---------- components: Library (Lib) messages: 398725 nosy: arunshan priority: normal severity: normal status: open title: Port fix of "issue44422" to Python3.6.x versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 03:45:23 2021 From: report at bugs.python.org (Nathan Collins) Date: Mon, 02 Aug 2021 07:45:23 +0000 Subject: [New-bugs-announce] [issue44805] asyncio.StreamReader.read hangs for reused socket file descriptors when asyncio.StreamWriter.close() is not called Message-ID: <1627890323.27.0.0668623257849.issue44805@roundup.psfhosted.org> New submission from Nathan Collins : Problem ======= When using asyncio streams via (r,w) = asyncio.open_connection(sock=socket) with a already connected socket `socket`, if you call `socket.close()` but not `w.close()` when you're done, then when the OS later reuses the file descriptor of `socket` for a new socket, and that new socket is used with (r,w) = asyncio.open_connection(sock=socket) again, the `r.read(...)` for the new `r` can hang indefinitely, even when data is available on the underlying socket. When the hang happens, closing the socket on the writer side doesn't help, and the socket gets stuck forever in the CLOSE_WAIT state on the reader side. Using `strace` shows that the reader side is stuck in `epoll_wait(...)`. Client and server programs that reproduce the bug ================================================= Run the server in one shell and then run the client in the other shell. They each take one argument, that controls how they close their sockets/streams when they're done. Usage: python3 client.py CLOSE_MODE Usage: python3 server.py CLOSE_MODE Where CLOSE_MODE can be * "": don't close the socket in any way * "S": close `open_connection` socket using `socket.socket.close()` * "A": close `open_connection` socket using `asyncio.StreamWriter.close()` * "SA": close `open_connection` socket both ways These are also attached, but here's the source. 
The `client.py`: ``` python import asyncio, socket, sys async def client(src_ip, close_mode): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) src_ip_port = (src_ip, 0) s.bind(src_ip_port) dst_ip_port = ('127.0.0.2', 12345) s.connect(dst_ip_port) print(f'Connected from {src_ip}') print(s) try: (r,w) = await asyncio.open_connection(sock=s) print('<- ', end='', flush=True) in_line = await r.read(100) print(in_line) out_line = b'client' print('-> ', end='', flush=True) w.write(out_line) await w.drain() print(out_line) finally: if 'S' in close_mode: s.close() if 'A' in close_mode: w.close() await w.wait_closed() print('Closed socket') print() async def main(close_mode): await client('127.0.0.3', close_mode) await client('127.0.0.4', close_mode) close_mode = sys.argv[1] asyncio.run(main(close_mode)) ``` The `server.py`: ``` python import asyncio, socket, sys async def server(close_mode): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) ip = '127.0.0.2' port = 12345 print(f'Listening on {ip}:{port}') print(s) try: s.bind((ip, port)) s.listen() while True: (a, (a_ip, a_port)) = s.accept() print(f'Client connected from {a_ip}:{a_port}') print(a) try: (r,w) = await asyncio.open_connection(sock=a) print('-> ', end='', flush=True) out_line = b'server' w.write(out_line) await w.drain() print(out_line) print('<- ', end='', flush=True) in_line = await r.read(100) print(in_line) finally: if 'S' in close_mode: a.close() if 'A' in close_mode: w.close() await w.wait_closed() print('Closed client socket') print() finally: s.close() print('Closed server socket') close_mode = sys.argv[1] asyncio.run(server(close_mode)) ``` Example session: `server.py S` and `client.py A` ================================================ Note that file descriptor 7 is reused on the server side, before the server hangs on `r.read`. Run the server in one shell: $ python3 server.py S Listening on 127.0.0.2:12345 Client connected from 127.0.0.3:34135 -> b'server' <- b'client' Closed client socket Client connected from 127.0.0.4:46639 -> b'server' <- The server is hanging on `r.read` above. Run the client in another shell, after starting the server: $ python3 client.py A Connected from 127.0.0.3 <- b'server' -> b'client' Closed socket Connected from 127.0.0.4 <- b'server' -> b'client' Closed socket $ lsof -ni @127.0.0.2 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME python3 26692 conathan 6u IPv4 9992763 0t0 TCP 127.0.0.2:12345 (LISTEN) python3 26692 conathan 7u IPv4 9991149 0t0 TCP 127.0.0.2:12345->127.0.0.4:46639 (CLOSE_WAIT) Example session: `server.py ''` and `client.py A` ================================================ Note that file descriptors are not reused on the server side now, and nothing hangs. Server: $ python3 server.py '' Listening on 127.0.0.2:12345 Client connected from 127.0.0.3:37833 -> b'server' <- b'client' Closed client socket Client connected from 127.0.0.4:39463 -> b'server' <- b'client' Closed client socket Client: $ python3 client.py A Connected from 127.0.0.3 <- b'server' -> b'client' Closed socket Connected from 127.0.0.4 <- b'server' -> b'client' Closed socket Behavior for different combinations of closure modes ==================================================== Perhaps this is overkill, but here's what happens for all 15 possible combinations of how the client and server close their connections. 
For example, "Client=S, Server=S" below means we run `$ python3 client.py S` and `$ python3 server.py S`. Sometimes multiple combinations have the same behavior, so they're grouped together below. Client=S, Server=''; Client=S, Server=S; Client=S, Server=A; Client=S, Server=SA ------------------- Client hangs on `r.read` on second connection, and killing the server on the other end has no effect, with the socket stuck in CLOSE_WAIT on the client side forever. Client='', Server=S; Client=A, Server=S ------------------ Server hangs on `r.read` on second connection, and client exits normally, with the socket stuck in CLOSE_WAIT on the server side forever. Client=SA, Server=S ------------------- Everything works the first time, but if you run the client in a loop, e.g. with $ while true; do python3 client.py SA; done then the server will eventually hang on `r.read` after ~3 client sessions. Client='', Server=''; Client='', Server=A; Client=A, Server=''; Client=SA, Server='' ------------------- Everything works! But here we see that the client and/or server (whichever side is using '' for mode) is not reusing the socket file descriptors right away (have to wait for GC). This is evidence that the problem is due to stale state in asyncio tied to the reused file descriptors. Client=A, Server=A; Client=A, Server=SA; Client=SA, Server=A; Client=SA, Server=SA -------------------- Everything works, but this is not surprising because both sides closed the `StreamWriter` with `w.close()`. Possibly related bugs ===================== https://bugs.python.org/issue43253: Windows only, calling socket.socket.close() on a socket used with asyncio.open_connection(sock=socket). https://bugs.python.org/issue41317: reused file descriptors across different connections. https://bugs.python.org/issue34795, https://bugs.python.org/issue30064: closing a socket used with asyncio. https://bugs.python.org/issue35065: reading from a closed stream can hang. Could be related if the problem is due to aliased streams from reused file descriptors. https://bugs.python.org/issue43183: sockets used with asyncio getting stuck in WAIT_CLOSED. System info =========== $ python3 --version Python 3.9.6 $ lsb_release -a LSB Version: core-9.20170808ubuntu1-noarch:printing-9.20170808ubuntu1-noarch:security-9.20170808ubuntu1-noarch Distributor ID: Ubuntu Description: Ubuntu 18.04.5 LTS Release: 18.04 Codename: bionic ---------- components: asyncio files: server.py messages: 398731 nosy: NathanCollins, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio.StreamReader.read hangs for reused socket file descriptors when asyncio.StreamWriter.close() is not called type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50198/server.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 05:20:47 2021 From: report at bugs.python.org (Yurii Karabas) Date: Mon, 02 Aug 2021 09:20:47 +0000 Subject: [New-bugs-announce] [issue44806] Subclassing Protocol get different __init__ Message-ID: <1627896047.68.0.683763116752.issue44806@roundup.psfhosted.org> New submission from Yurii Karabas <1998uriyyo at gmail.com>: When we subclassing Protocol, we get a __init__ differing from default one but the protocol in question didn't define any __init__. 
More information can be found here - https://github.com/python/typing/issues/644 ---------- components: Library (Lib) messages: 398736 nosy: kj, uriyyo priority: normal severity: normal status: open title: Subclassing Protocol get different __init__ type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 05:53:46 2021 From: report at bugs.python.org (Yurii Karabas) Date: Mon, 02 Aug 2021 09:53:46 +0000 Subject: [New-bugs-announce] [issue44807] typing.Protocol silently overrides __init__ method of delivered class Message-ID: <1627898026.7.0.991897199639.issue44807@roundup.psfhosted.org> New submission from Yurii Karabas <1998uriyyo at gmail.com>: typing.Protocol silently overrides __init__ method of delivered class. I think it should be forbidden to define __init__ method at Protocol delivered class in case if cls is not Protocol. ---------- components: Library (Lib) messages: 398742 nosy: kj, uriyyo priority: normal severity: normal status: open title: typing.Protocol silently overrides __init__ method of delivered class type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 05:55:50 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 02 Aug 2021 09:55:50 +0000 Subject: [New-bugs-announce] [issue44808] test_inspect fails in refleak mode Message-ID: <1627898150.7.0.730504757269.issue44808@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : >From https://bugs.python.org/issue44206 : File "/home/mark/repos/cpython/Lib/inspect.py", line 1154, in walktree classes.sort(key=attrgetter('__module__', '__name__')) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: '<' not supported between instances of 'str' and 'module' I can reproduce that failure with a debug build and either of ./python -m test -R 3:3 test_inspect ./python -m test test_inspect -F ---------- messages: 398744 nosy: Mark.Shannon, pablogsal priority: normal severity: normal status: open title: test_inspect fails in refleak mode versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 08:22:17 2021 From: report at bugs.python.org (Sebastian Rittau) Date: Mon, 02 Aug 2021 12:22:17 +0000 Subject: [New-bugs-announce] [issue44809] Changelog missing removal of StrEnum etc. Message-ID: <1627906937.39.0.327433164161.issue44809@roundup.psfhosted.org> New submission from Sebastian Rittau : It seems that at some point StrEnum and a few other members were added to Python 3.10. I think they were present in 3.10 beta 2, but it seems they were removed by beta 4. While the Changelog at https://docs.python.org/3.10/whatsnew/changelog.html mentions that they were added, there is no note about their removal again. ---------- assignee: docs at python components: Documentation messages: 398760 nosy: docs at python, srittau priority: normal severity: normal status: open title: Changelog missing removal of StrEnum etc. 
versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 08:24:05 2021 From: report at bugs.python.org (Nick) Date: Mon, 02 Aug 2021 12:24:05 +0000 Subject: [New-bugs-announce] [issue44810] nturl2path: drive definition Message-ID: <1627907045.83.0.78613807665.issue44810@roundup.psfhosted.org> New submission from Nick : Due some problem in a third-party package the `url2path` function from `nturl2path` got `a/"https://b"` (without `, `a`,`b` are just masks ) as the first and only argument. In the function there is the following code ( https://github.com/python/cpython/blob/414dcb13aaa4fd42f264fdee47782dede5c83d6c/Lib/nturl2path.py#L30 ; current state of the `main` branch): ``` comp = url.split('|') if len(comp) != 2 or comp[0][-1] not in string.ascii_letters: error = 'Bad URL: ' + url raise OSError(error) drive = comp[0][-1].upper() ``` As a result, the function decided that the file was located on the `S:` drive and returned the `S:\b` path without a warning. To my mind, it is not right to take just the last letter as a drive letter because the returned path must be only for the specified URL and the unsupported ones must be marked as "bad" without any silent transformations. ---------- components: Windows messages: 398761 nosy: NickVeld, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: nturl2path: drive definition type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 08:25:20 2021 From: report at bugs.python.org (Anis Gandoura) Date: Mon, 02 Aug 2021 12:25:20 +0000 Subject: [New-bugs-announce] [issue44811] Change default signature algorithms for context in the ssl library Message-ID: <1627907120.03.0.784092959739.issue44811@roundup.psfhosted.org> New submission from Anis Gandoura : Expose the OpenSSL function SSL_CTX_set1_sigalgs_list to allow the user to modify the supported signature algorithms for a given SSL Context. OpenSSL documentation: https://www.openssl.org/docs/man1.1.0/man3/SSL_CTX_set1_sigalgs_list.html ---------- messages: 398762 nosy: anis.gandoura priority: normal severity: normal status: open title: Change default signature algorithms for context in the ssl library _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 12:06:04 2021 From: report at bugs.python.org (Ken Jin) Date: Mon, 02 Aug 2021 16:06:04 +0000 Subject: [New-bugs-announce] [issue44812] [docs] Document PyMember_{Get/Set}One in C API reference Message-ID: <1627920364.08.0.650340648911.issue44812@roundup.psfhosted.org> New submission from Ken Jin : I can't seem to find PyMember_GetOne or PyMember_SetOne in C API docs, yet they are in stable_abi.txt. Sending a PR shortly, please tell me if these were accidentally exposed and not supposed to be documented (I will close the PR if so). 
---------- assignee: docs at python components: Documentation messages: 398777 nosy: docs at python, kj, petr.viktorin priority: normal severity: normal status: open title: [docs] Document PyMember_{Get/Set}One in C API reference versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 12:17:05 2021 From: report at bugs.python.org (Irit Katriel) Date: Mon, 02 Aug 2021 16:17:05 +0000 Subject: [New-bugs-announce] [issue44813] generate specialization stat names list into opcode.h Message-ID: <1627921025.25.0.316615064433.issue44813@roundup.psfhosted.org> New submission from Irit Katriel : The stat names are repeated in several places in the code, refactor to have this list appear only once in opcode.py. ---------- messages: 398779 nosy: iritkatriel priority: normal severity: normal status: open title: generate specialization stat names list into opcode.h type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 14:22:18 2021 From: report at bugs.python.org (nyizel) Date: Mon, 02 Aug 2021 18:22:18 +0000 Subject: [New-bugs-announce] [issue44814] python 3.9.6 installation installs 0 modules Message-ID: <1627928538.73.0.0238112884083.issue44814@roundup.psfhosted.org> New submission from nyizel : https://nizel.is-inside.me/NEi7A4aM.png ---------- messages: 398793 nosy: trambell priority: normal severity: normal status: open title: python 3.9.6 installation installs 0 modules type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 14:29:57 2021 From: report at bugs.python.org (Sam Bull) Date: Mon, 02 Aug 2021 18:29:57 +0000 Subject: [New-bugs-announce] [issue44815] asyncio.gather no DeprecationWarning if task are passed Message-ID: <1627928997.24.0.837145443801.issue44815@roundup.psfhosted.org> New submission from Sam Bull : When calling asyncio.gather() a DeprecationWarning is only emitted if no tasks are passed (which is probably the exceptional case, rather than the standard one). This has resulted in us missing this deprecated argument in aiohttp until we received a bug report from a user trying it out against the 3.10 beta. For some reason the warning only appears under a `if not coros_or_futures:` block. I think it should be run regardless: https://github.com/python/cpython/blob/3.9/Lib/asyncio/tasks.py#L757 ---------- components: asyncio messages: 398794 nosy: asvetlov, dreamsorcerer, yselivanov priority: normal severity: normal status: open title: asyncio.gather no DeprecationWarning if task are passed versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 2 19:28:31 2021 From: report at bugs.python.org (Brandt Bucher) Date: Mon, 02 Aug 2021 23:28:31 +0000 Subject: [New-bugs-announce] [issue44816] Folded constants do not trace correctly. Message-ID: <1627946911.28.0.590925278018.issue44816@roundup.psfhosted.org> New submission from Brandt Bucher : PEP 626 says that "all expressions and parts of expressions are considered to be executable code" for the purposes of tracing. However, folding constants at compile-time can lose or change tracing events. 
For example, these expressions (which can't be folded) generate events for two lines each: [ # <-- None # <-- ] ( # <-- foo, # <-- ) ( 1 # <-- / 0 # <-- ) ( 1 # <-- / bar # <-- ) While these (which are folded) only generate events for one line each: ( # <-- None, ) ( # <-- 1 / 42 ) Note that for the binary operation, a completely different line is traced in the optimized version. We should correctly generate events for lines which are "folded away". This *might* mean refusing to fold nodes in the AST optimizer if they span multiple lines, or including some sort of additional line-coverage metadata on the new Constant nodes to fill NOPs as appropriate (which I personally prefer). ---------- components: Interpreter Core messages: 398810 nosy: Mark.Shannon, brandtbucher priority: normal severity: normal status: open title: Folded constants do not trace correctly. type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 3 01:24:11 2021 From: report at bugs.python.org (=?utf-8?q?Michael_F=C3=B6rderer?=) Date: Tue, 03 Aug 2021 05:24:11 +0000 Subject: [New-bugs-announce] [issue44817] os.path.realpath fails with WinError 161 Message-ID: <1627968251.97.0.10381933547.issue44817@roundup.psfhosted.org> New submission from Michael Förderer : Using os.path.realpath(...) in the MVFS of Clearcase SCM (virtual file system) on Windows 10, an exception occurs: X:\my_view\tools\python\3_8>python.exe Python 3.8.5 (tags/v3.8.5:580fbb0, Jul 20 2020, 15:57:54) [MSC v.1924 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.path.realpath('.') Traceback (most recent call last): File "X:\my_view\tools\python\3_8\lib\ntpath.py", line 647, in realpath path = _getfinalpathname(path) OSError: [WinError 87] Falscher Parameter: 'X:\\my_view\\tools\\python\\3_8\\.' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "X:\my_view\tools\python\3_8\lib\ntpath.py", line 651, in realpath path = _getfinalpathname_nonstrict(path) File "X:\my_view\tools\python\3_8\lib\ntpath.py", line 601, in _getfinalpathname_nonstrict path = _getfinalpathname(path) FileNotFoundError: [WinError 161] Der angegebene Pfadname ist ungültig: 'X:\\' >>> The error 161 (ERROR_BAD_PATHNAME) should also be ignored in _getfinalpathname_nonstrict. ---------- components: Library (Lib) messages: 398814 nosy: Spacetown priority: normal severity: normal status: open title: os.path.realpath fails with WinError 161 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 3 04:25:05 2021 From: report at bugs.python.org (Pooia) Date: Tue, 03 Aug 2021 08:25:05 +0000 Subject: [New-bugs-announce] [issue44818] '\t' (tab) support Message-ID: <1627979105.66.0.152787864247.issue44818@roundup.psfhosted.org> New submission from Pooia : Python can't use the '\t' character.
if I want fix it I will work a long time but a shorter fashion is replacing '\t' character with 4 space (by first clause of pep: 8) ---------- components: Parser messages: 398815 nosy: Pooia, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: '\t' (tab) support type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 3 16:24:40 2021 From: report at bugs.python.org (Brian) Date: Tue, 03 Aug 2021 20:24:40 +0000 Subject: [New-bugs-announce] [issue44819] assertSequenceEqual does not use _getAssertEqualityFunc Message-ID: <1628022280.67.0.522628726203.issue44819@roundup.psfhosted.org> New submission from Brian : Like the title says, TestCase.assertSequenceEqual does not behave like TestCase.assertEqual where it uses TestCase._getAssertEqualityFunc. Instead, TestCase.assertSequenceEqual uses `item1 != item2`. That way I can do something like this: ``` def test_stuff(self): self.addTypeEqualityFunc( MyObject, comparison_method_which_compares_how_i_want, ) self.assertListEqual( get_list_of_objects(), [MyObject(...), MyObject(...)], ) ``` ---------- components: Tests messages: 398851 nosy: Rarity priority: normal severity: normal status: open title: assertSequenceEqual does not use _getAssertEqualityFunc type: behavior versions: Python 3.6, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 01:49:38 2021 From: report at bugs.python.org (jb) Date: Wed, 04 Aug 2021 05:49:38 +0000 Subject: [New-bugs-announce] [issue44820] subprocess hungs when processing value from mariadb Message-ID: <1628056178.39.0.585124478432.issue44820@roundup.psfhosted.org> New submission from jb : I am doing an insert in mariadb databases. For example, INSERT INTO t (a, b, c) VALUES (1, 3, None) RETURNING a, b. Upon execution, I get the value as (, ). When accessing the zero element, my subroutine hangs. ---------- components: Interpreter Core messages: 398862 nosy: zh.bolatbek priority: normal severity: normal status: open title: subprocess hungs when processing value from mariadb type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 03:57:16 2021 From: report at bugs.python.org (Mark Shannon) Date: Wed, 04 Aug 2021 07:57:16 +0000 Subject: [New-bugs-announce] [issue44821] Instance dictionaries should be created eagerly Message-ID: <1628063836.31.0.548816977554.issue44821@roundup.psfhosted.org> New submission from Mark Shannon : Currently, instance dictionaries (__dict__ attribute) are created lazily when the first attribute is set. This is bad for performance for a number of reasons: 1. It causes additional checks on every attribute access. 2. It causes allocation of the object and its dict to be temporarily separated, most likely leading to increased physical separation and worse cache behavior. 3. It has a large impact on specialization, as the first SET_ATTR for an object has to behave differently. Creating a __dict__ lazily does not save a significant amount of memory. If an object has a __dict__ slot, then it will end up with a dictionary before it dies in almost all cases. Many objects, e.g. ints, floats, don't have a dictionary. They are unaffected. 
Plain python objects that have no instance attributes are extremely rare, unless they have __slots__, in which case they don't have a dictionary anyway. The remaining case is subclasses of builtin types that do not add extra attributes, but these are rare, and the overhead of an empty dictionary is only 64 bytes (on a 64 bit machine). ---------- assignee: Mark.Shannon components: Interpreter Core messages: 398864 nosy: Mark.Shannon priority: normal severity: normal status: open title: Instance dictionaries should be created eagerly type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 04:29:38 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Wed, 04 Aug 2021 08:29:38 +0000 Subject: [New-bugs-announce] [issue44822] [sqlite3] Micro-optimisation: pass string size to sqlite3_result_text() Message-ID: <1628065778.22.0.635299320358.issue44822@roundup.psfhosted.org> New submission from Erlend E. Aasland : The third argument to sqlite3_result_text() is the length of the string passed as the second argument. Currently, we pass -1, so SQLite has to invoke strlen() to compute the length of the passed string. Suggesting to use PyUnicode_AsUTF8AndSize() iso. PyUnicode_AsUTF8() and pass the string size to avoid the superfluous strlen(). See also: - https://sqlite.org/c3ref/result_blob.html ---------- assignee: erlendaasland components: Extension Modules messages: 398865 nosy: erlendaasland priority: normal severity: normal status: open title: [sqlite3] Micro-optimisation: pass string size to sqlite3_result_text() type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 04:39:24 2021 From: report at bugs.python.org (Karolina Surma) Date: Wed, 04 Aug 2021 08:39:24 +0000 Subject: [New-bugs-announce] [issue44823] Docs fail to build - looking for the wrong interpreter (Python 3.10-rc1) Message-ID: <1628066364.71.0.124570746372.issue44823@roundup.psfhosted.org> New submission from Karolina Surma : Documentation for Python3.10-rc1 fails to build. sphinx-build looks for non-existing interpreter, which causes a crash. Looking at the tarball with Python, there is Doc/venv/ present (which probably shouldn't be included) containing executable sphinx-build with shebang " #!/home/pablogsal/github/python/3.10/3.10.0rc1/Python-3.10.0rc1/Doc/venv/bin/python3". Traceback from our Fedora RPM build: + make -C Doc html PYTHON=/usr/bin/python3 make: Entering directory '/builddir/build/BUILD/Python-3.10.0rc1/Doc' venv already exists mkdir -p build Using existing Misc/NEWS file PATH=./venv/bin:$PATH sphinx-build -b html -d build/doctrees -W . 
build/html /bin/sh: ./venv/bin/sphinx-build: /home/pablogsal/github/python/3.10/3.10.0rc1/Python-3.10.0rc1/Doc/venv/bin/python3: bad interpreter: No such file or directory make: Leaving directory '/builddir/build/BUILD/Python-3.10.0rc1/Doc' make: *** [Makefile:51: build] Error 126 ---------- assignee: docs at python components: Documentation messages: 398866 nosy: docs at python, ksurma priority: normal severity: normal status: open title: Docs fail to build - looking for the wrong interpreter (Python 3.10-rc1) versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 04:52:38 2021 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Wed, 04 Aug 2021 08:52:38 +0000 Subject: [New-bugs-announce] [issue44824] The 3.10.0rv1 source tarballs contain the Docs/venv directory populated with pablogsal's venv Message-ID: <1628067158.48.0.0776789887836.issue44824@roundup.psfhosted.org> New submission from Miro Hrončok : When we download the signed Python-3.10.0rc1.tgz or Python-3.10.0rc1.tar.xz source tarball, we see that the Docs/venv directory contains the actual virtual environment with #!/home/pablogsal/github/python/3.10/3.10.0rc1/Python-3.10.0rc1/Doc/venv/bin/python3 shebangs. That means an attempt to build the documentation (e.g. with make html) will fail with: PATH=./venv/bin:$PATH sphinx-build -b html -d build/doctrees -W . build/html /bin/sh: ./venv/bin/sphinx-build: /home/pablogsal/github/python/3.10/3.10.0rc1/Python-3.10.0rc1/Doc/venv/bin/python3: bad interpreter: No such file or directory make: Leaving directory '/builddir/build/BUILD/Python-3.10.0rc1/Doc' make: *** [Makefile:51: build] Error 126 I believe the venv directory should not be part of the release tarball, especially since it is unusable from paths other than pablogsal's github/python/3.10/3.10.0rc1/Python-3.10.0rc1 directory. Also, technically the entire effective license of the tarball is now hard to determine, since it contains many different packages. ---------- assignee: docs at python components: Documentation messages: 398867 nosy: docs at python, hroncok, pablogsal priority: normal severity: normal status: open title: The 3.10.0rv1 source tarballs contain the Docs/venv directory populated with pablogsal's venv versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 05:41:50 2021 From: report at bugs.python.org (Samuel Marks) Date: Wed, 04 Aug 2021 09:41:50 +0000 Subject: [New-bugs-announce] [issue44825] node.annotation is not a str in `ast`'s `class _Unparser(NodeVisitor)` Message-ID: <1628070110.26.0.654462088107.issue44825@roundup.psfhosted.org> New submission from Samuel Marks : I tried making `node.annotation` an `ast.Name("str", ast.Load())`, which worked, but when the AST was unparsed to a string it shows as `# type: `. https://github.com/offscale/cdd-python/runs/3213864077 Replicate with: ``` unparse(Assign(annotation=None, simple=1, targets=[Name("foo", Store())], value=Constant(value=5, kind=None), expr=None, expr_target=None, expr_annotation=None, type_comment=Name('str', Load()), lineno=None)) ``` Checking what it expects, it does expect a str.
E.g.,: ``` $ python3.9 -c 'import ast; tc=ast.parse("foo = 5 # type: int", type_comments=True).body[0].type_comment; print("type_comment is a", type(tc).__name__, "with value", tc)' type_comment is a str with value int ``` But when I do make it a str and unparse it, I get: ``` File "/opt/python3.10/lib/python3.10/ast.py", line 1674, in unparse return unparser.visit(ast_obj) File "/opt/python3.10/lib/python3.10/ast.py", line 808, in visit self.traverse(node) File "/opt/python3.10/lib/python3.10/ast.py", line 799, in traverse super().visit(node) File "/opt/python3.10/lib/python3.10/ast.py", line 410, in visit return visitor(node) File "/opt/python3.10/lib/python3.10/ast.py", line 1005, in visit_FunctionDef self._function_helper(node, "def") File "/opt/python3.10/lib/python3.10/ast.py", line 1023, in _function_helper self._write_docstring_and_traverse_body(node) File "/opt/python3.10/lib/python3.10/ast.py", line 816, in _write_docstring_and_traverse_body self.traverse(node.body) File "/opt/python3.10/lib/python3.10/ast.py", line 797, in traverse self.traverse(item) File "/opt/python3.10/lib/python3.10/ast.py", line 799, in traverse super().visit(node) File "/opt/python3.10/lib/python3.10/ast.py", line 410, in visit return visitor(node) File "/opt/python3.10/lib/python3.10/ast.py", line 879, in visit_AnnAssign self.traverse(node.annotation) File "/opt/python3.10/lib/python3.10/ast.py", line 799, in traverse super().visit(node) File "/opt/python3.10/lib/python3.10/ast.py", line 410, in visit return visitor(node) File "/opt/python3.10/lib/python3.10/ast.py", line 414, in generic_visit for field, value in iter_fields(node): File "/opt/python3.10/lib/python3.10/ast.py", line 252, in iter_fields for field in node._fields: AttributeError: 'str' object has no attribute '_fields' ``` ---------- messages: 398878 nosy: samuelmarks priority: normal severity: normal status: open title: node.annotation is not a str in `ast`'s `class _Unparser(NodeVisitor)` versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 06:34:42 2021 From: report at bugs.python.org (Mark Shannon) Date: Wed, 04 Aug 2021 10:34:42 +0000 Subject: [New-bugs-announce] [issue44826] Specialize STORE_ATTR using PEP 659 machinery. Message-ID: <1628073282.4.0.475634168415.issue44826@roundup.psfhosted.org> New submission from Mark Shannon : Add specializations of STORE_ATTR following the pattern of LOAD_ATTR and LOAD_GLOBAL. For this to work well we need https://bugs.python.org/issue44821, otherwise the first assigned to an attribute of any object cannot be specialized. ---------- messages: 398887 nosy: Mark.Shannon priority: normal severity: normal status: open title: Specialize STORE_ATTR using PEP 659 machinery. _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 07:08:19 2021 From: report at bugs.python.org (PEW's Corner) Date: Wed, 04 Aug 2021 11:08:19 +0000 Subject: [New-bugs-announce] [issue44827] Incomplete 3.10.0rc1 release info Message-ID: <1628075299.02.0.423683995444.issue44827@roundup.psfhosted.org> New submission from PEW's Corner : The "Files" section is empty on this page: https://www.python.org/downloads/release/python-3100rc1/ Also, the Python Insider blog post contains the outdated b4 text under "And now for something completely different". 
---------- messages: 398888 nosy: pewscorner priority: normal severity: normal status: open title: Incomplete 3.10.0rc1 release info versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 07:21:33 2021 From: report at bugs.python.org (Nythepegasus) Date: Wed, 04 Aug 2021 11:21:33 +0000 Subject: [New-bugs-announce] [issue44828] Using tkinter.filedialog crashes on macOS Python 3.9.6 Message-ID: <1628076093.82.0.287740145135.issue44828@roundup.psfhosted.org> New submission from Nythepegasus : Using tkinter.filedialog crashes on macOS 12.0 Beta (21A5294g) on M1 when the open file dialog window is created. Full crash below: 2021-08-04 07:19:04.239 Python[40251:323363] *** Assertion failure in -[NSOpenPanel beginServicePanel:asyncExHandler:], NSVBOpenAndSavePanels.m:1910 2021-08-04 07:19:04.241 Python[40251:323363] -[NSSavePanel beginWithCompletionHandler:]_block_invoke caught non-fatal NSInternalInconsistencyException ' is attempting to advance this Open/Save panel to run phase while another self.advanceToRunPhaseCompletionHandler is in waiting for a previous attempt. An Open/Save panel cannot start to advance more than once.' with user dictionary { NSAssertFile = "NSVBOpenAndSavePanels.m"; NSAssertLine = 1910; } and backtrace ( 0 CoreFoundation 0x00000001a9d47150 __exceptionPreprocess + 240 1 libobjc.A.dylib 0x00000001a9a986e8 objc_exception_throw + 60 2 Foundation 0x00000001aac3b4a4 -[NSCalendarDate initWithCoder:] + 0 3 AppKit 0x00000001ad1f02b0 -[NSSavePanel beginServicePanel:asyncExHandler:] + 512 4 AppKit 0x00000001ad1f1708 -[NSSavePanel runModal] + 332 5 libtk8.6.dylib 0x00000001013d8c18 showOpenSavePanel + 360 6 libtk8.6.dylib 0x00000001013d99e4 Tk_ChooseDirectoryObjCmd + 992 7 libtcl8.6.dylib 0x00000001011cbafc TclNRRunCallbacks + 80 8 _tkinter.cpython-39-darwin.so 0x0000000100c111a4 Tkapp_Call + 400 9 Python 0x0000000100d66a40 cfunction_call + 96 10 Python 0x0000000100d184e0 _PyObject_Call + 128 11 Python 0x0000000100e10150 _PyEval_EvalFrameDefault + 40288 12 Python 0x0000000100e053f0 _PyEval_EvalCode + 444 13 Python 0x0000000100d1877c _PyFunction_Vectorcall + 364 14 Python 0x0000000100e12590 call_function + 128 15 Python 0x0000000100e0ff08 _PyEval_EvalFrameDefault + 39704 16 Python 0x0000000100e053f0 _PyEval_EvalCode + 444 17 Python 0x0000000100d1877c _PyFunction_Vectorcall + 364 18 Python 0x0000000100e12590 call_function + 128 19 Python 0x0000000100e0ff84 _PyEval_EvalFrameDefault + 39828 20 Python 0x0000000100e053f0 _PyEval_EvalCode + 444 21 Python 0x0000000100e5cce4 run_eval_code_obj + 136 22 Python 0x0000000100e5cbf8 run_mod + 112 23 Python 0x0000000100e5a434 pyrun_file + 168 24 Python 0x0000000100e59d58 pyrun_simple_file + 276 25 Python 0x0000000100e59c04 PyRun_SimpleFileExFlags + 80 26 Python 0x0000000100e79d2c pymain_run_file + 320 27 Python 0x0000000100e7947c Py_RunMain + 916 28 Python 0x0000000100e7a6c4 pymain_main + 36 29 Python 0x0000000100e7a93c Py_BytesMain + 40 30 dyld 0x00000001007990fc start + 520 ) . 
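The attached tkinter_crash.py is not included in this digest; judging from Tk_ChooseDirectoryObjCmd in the backtrace, a minimal script along these lines should exercise the same code path (an assumption on my part, not the reporter's exact file):

```python
import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
root.withdraw()  # hide the empty main window

# On the affected macOS 12 beta / Python 3.9.6 setup, opening the native
# directory chooser is reportedly where the crash occurs.
path = filedialog.askdirectory()
print("selected:", path)
root.destroy()
```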
---------- components: Tkinter files: tkinter_crash.py messages: 398889 nosy: Nythepegasus priority: normal severity: normal status: open title: Using tkinter.filedialog crashes on macOS Python 3.9.6 type: crash versions: Python 3.9 Added file: https://bugs.python.org/file50200/tkinter_crash.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 10:36:58 2021 From: report at bugs.python.org (apple502j) Date: Wed, 04 Aug 2021 14:36:58 +0000 Subject: [New-bugs-announce] [issue44829] zoneinfo.ZoneInfo does not check for Windows device names Message-ID: <1628087818.07.0.11590329108.issue44829@roundup.psfhosted.org> New submission from apple502j : Note: this issue was submitted to security@ due to its potential as a DoS vector on 2021-05-08, but I have not received a response (excluding the automated email). It is over 88 days since the report, so I am now reporting this publicly. Issue: zoneinfo.ZoneInfo does not check for Windows device names on Windows. For example, a timezone "NUL" do not raise ZoneInfoNotFoundError; instead, it raises ValueError ("Invalid TZif file: magic not found"). If the timezone passed is "CON", then the program would read the content from stdin, and parse it as tzdata file. This can be abused for a DoS attack for programs that call ZoneInfo with untrusted timezone; for example, since reading CON is a blocking operation in the asyncio world, a web server that calls ZoneInfo with untrusted timezone input would stop its job and no future connections will succeed. Note that this bug only occurs on Windows for obvious reasons. Repro case: >>> from zoneinfo import ZoneInfo >>> ZoneInfo("CON") This is related to bpo-41530 where timezone __init__.py does not raise ZoneInfoNotFoundError. And finally, this happens with other file-based operations (and they are probably intentional); however, zoneinfo is designed to be secure by default, for example by disallowing path traversals. The interactions with Windows device names are not documented at all in the references. It's a common practice to let the users choose their preferred timezone in web applications, and such programs are expected to call ZoneInfo constructor with externally provided string. Timezone calculation should never cause a web server to stop to read stdin. ---------- components: Library (Lib) messages: 398900 nosy: apple502j priority: normal severity: normal status: open title: zoneinfo.ZoneInfo does not check for Windows device names type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 12:59:27 2021 From: report at bugs.python.org (Mark Dickinson) Date: Wed, 04 Aug 2021 16:59:27 +0000 Subject: [New-bugs-announce] [issue44830] Broken Mozilla devguide link in "Dealing with Bugs" doc section Message-ID: <1628096367.53.0.828359601844.issue44830@roundup.psfhosted.org> New submission from Mark Dickinson : The "Bug Report Writing Guidelines" link in the "Dealing with Bugs" doc section (https://docs.python.org/3/bugs.html) looks broken. The linked URL is https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines, but that gives me a "Page not found" error. I tried to find equivalent content elsewhere on developer.mozilla.org, but either it's not there or my search-fu is failing me. 
---------- assignee: docs at python components: Documentation messages: 398913 nosy: docs at python, mark.dickinson priority: normal severity: normal status: open title: Broken Mozilla devguide link in "Dealing with Bugs" doc section type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 13:49:51 2021 From: report at bugs.python.org (Miksus) Date: Wed, 04 Aug 2021 17:49:51 +0000 Subject: [New-bugs-announce] [issue44831] Inconsistency between datetime.now() and datetime.fromtimestamp(time.time(), None) Message-ID: <1628099391.12.0.0928824788179.issue44831@roundup.psfhosted.org> New submission from Miksus : I am trying to measure time twice and the second measurement gives a time that is 1 microsecond before the first measurement about half of the time. My experiment in short: --------------------------------------------------- import time, datetime start = time.time() end = datetime.datetime.now() start = datetime.datetime.fromtimestamp(start, None) assert end >= start # fails about half the time. --------------------------------------------------- The problem is somewhat interesting. This does not fail: --------------------------------------------------- import time, datetime start = time.time() end = time.time() start = datetime.datetime.fromtimestamp(start, None) end = datetime.datetime.fromtimestamp(end, None) assert end >= start --------------------------------------------------- And neither does this: --------------------------------------------------- import datetime start = datetime.datetime.now() end = datetime.datetime.now() assert end >= start --------------------------------------------------- And it seems datetime.datetime.now() works the same way as to how I handled the "start" time in my first experiment: https://github.com/python/cpython/blob/3.6/Lib/datetime.py#L1514 and therefore the issue seems to be under the hood. I have tested this on two Windows 10 machines (Python 3.6 & 3.8) in which cases this occurred. This did not happen on Raspberry Pi OS using Python 3.7. In short: - The time module imported in datetime.datetime.now() seems to measure time slightly differently than the time module imported by a Python user. - This seems to be Windows specific. My actual application has some code in between the measurements suffering from the same problem thus this is not an issue affecting only toy examples. ---------- components: Library (Lib), Windows messages: 398919 nosy: Miksus, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Inconsistency between datetime.now() and datetime.fromtimestamp(time.time(), None) versions: Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 13:52:17 2021 From: report at bugs.python.org (Ben Boeckel) Date: Wed, 04 Aug 2021 17:52:17 +0000 Subject: [New-bugs-announce] [issue44832] Compiler detection is not strict enough Message-ID: <1628099537.98.0.0593378255107.issue44832@roundup.psfhosted.org> New submission from Ben Boeckel : Generally, the `configure.ac` script tries to detect compilers based on the path to the compiler. This is mostly fine, but trips up when using `mpicc` as the compiler. Even if the underlying compiler is `gcc`, this gets detected as `icc` in various situations. 
The best solution is to do some compiler introspection like CMake does to determine what the compiler actually is, but I'm not familiar with the patterns used or available tools in autotools for such things. ---------- components: Build messages: 398920 nosy: mathstuf priority: normal severity: normal status: open title: Compiler detection is not strict enough type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 13:56:03 2021 From: report at bugs.python.org (ENG19EC0098_Swathi.M) Date: Wed, 04 Aug 2021 17:56:03 +0000 Subject: [New-bugs-announce] [issue44833] VideoCapture is not installing Message-ID: <1628099763.63.0.902079879024.issue44833@roundup.psfhosted.org> New submission from ENG19EC0098_Swathi.M : The python version is failing to install the VideoCapture of opencv despite many trials. Would request you to kindly go through this at the earliest ---------- components: Installation files: Screenshot 2021-08-04 232419.jpg messages: 398923 nosy: eng19ec0098.swathim priority: normal severity: normal status: open title: VideoCapture is not installing type: resource usage versions: Python 3.9 Added file: https://bugs.python.org/file50202/Screenshot 2021-08-04 232419.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 4 14:03:14 2021 From: report at bugs.python.org (Adrian Garcia Badaracco) Date: Wed, 04 Aug 2021 18:03:14 +0000 Subject: [New-bugs-announce] [issue44834] contextvars.Context.run w/ coroutines gives inconsistent behavior Message-ID: <1628100194.12.0.330262028027.issue44834@roundup.psfhosted.org> New submission from Adrian Garcia Badaracco : I recently tried to use `contextvars.Context.run` w/ coroutines, expecting the same behavior as with regular functions, but it seems that `contextvars.Context.run` does not work w/ coroutines. I'm sorry if this is something obvious to do with how coroutines work under the hood, if so I'd appreciate some help in understanding why this is the expected behavior. ```python import asyncio import contextvars ctxvar = contextvars.ContextVar("ctxvar", default="spam") def func(): assert ctxvar.get() == "spam" async def coro(): func() async def main(): ctx = contextvars.copy_context() ctxvar.set("ham") ctx.run(func) # works await ctx.run(coro) # breaks asyncio.run(main()) ``` Thanks! ---------- components: Library (Lib) messages: 398924 nosy: adriangb priority: normal severity: normal status: open title: contextvars.Context.run w/ coroutines gives inconsistent behavior type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 00:26:53 2021 From: report at bugs.python.org (chen-y0y0) Date: Thu, 05 Aug 2021 04:26:53 +0000 Subject: [New-bugs-announce] [issue44835] What does "Python for Windows will still be Python for DOS" mean? Message-ID: <1628137613.72.0.466224932387.issue44835@roundup.psfhosted.org> Change by chen-y0y0 : ---------- components: Installation, Windows nosy: paul.moore, prasechen, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: What does "Python for Windows will still be Python for DOS" mean? 
type: performance versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 04:48:32 2021 From: report at bugs.python.org (Yogendra kumar soni) Date: Thu, 05 Aug 2021 08:48:32 +0000 Subject: [New-bugs-announce] [issue44836] shutil _unpack_zipfile filename encoding issue Message-ID: <1628153312.77.0.338052443243.issue44836@roundup.psfhosted.org> New submission from Yogendra kumar soni : shutil._unpack_zipfile takes the filename using name = info.filename. If files are created on a machine that uses a different encoding, say UTF-8, with u'\u201c' in a filename, and the machine where we are extracting uses a different encoding, say Latin-1, then creating the target path using _ensure_directory(targetpath) is not able to correctly check the target path, and creating targetpath also fails: UnicodeEncodeError: 'latin-1' codec can't encode character u'\u201c'. ---------- components: Library (Lib) messages: 398975 nosy: yogendraksoni priority: normal severity: normal status: open title: shutil _unpack_zipfile filename encoding issue type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 05:08:14 2021 From: report at bugs.python.org (krey) Date: Thu, 05 Aug 2021 09:08:14 +0000 Subject: [New-bugs-announce] [issue44837] os.symlink arg names are bad Message-ID: <1628154494.92.0.350164707154.issue44837@roundup.psfhosted.org> New submission from krey : From: https://docs.python.org/3/library/os.html os.symlink(src, dst, target_is_directory=False, *, dir_fd=None) Create a symbolic link pointing to `src` named `dst`. It's a bit like saying find(needle, haystack) Finds `haystack` in `needle` If you look at the manpage for ln, it says ln [OPTION]...
[-T] TARGET LINK_NAME So os.symlink isn't consistent with ln ---------- assignee: docs at python components: Documentation messages: 398977 nosy: docs at python, krey priority: normal severity: normal status: open title: os.symlink arg names are bad type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 06:45:07 2021 From: report at bugs.python.org (Andre Roberge) Date: Thu, 05 Aug 2021 10:45:07 +0000 Subject: [New-bugs-announce] [issue44838] SyntaxError: New message "expected 'else' after 'if' expression" wrongly shown Message-ID: <1628160307.55.0.448348784951.issue44838@roundup.psfhosted.org> New submission from Andre Roberge : Given the following code containing no if expression (only if statements): if True: print('hello' if 2: print(123)) The following traceback is generated in Python 3.10.0RC1 File "...\example.py", line 2 print('hello' ^^^^^^^ SyntaxError: expected 'else' after 'if' expression ---------- components: Parser messages: 398989 nosy: aroberge, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: SyntaxError: New message "expected 'else' after 'if' expression" wrongly shown versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 07:54:52 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 05 Aug 2021 11:54:52 +0000 Subject: [New-bugs-announce] [issue44839] Convert Python exceptions to appropriate SQLite error codes Message-ID: <1628164492.41.0.705893137688.issue44839@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently, any exception raised in user-defined function set the general SQLITE_ERROR error which then produce sqlite3.OperationalError. For example, if the user function returns a string or bytes object larger than INT_MAX you get OperationalError, but if it is less than INT_MAX and larger than the SQLite limit (configurable, 1000000000 by default) you get DataError. If a memory error occurred in Python code you get OperationalError, but if it is occurred in the SQLite code you get MemoryError. The proposed PR sets corresponding SQLite error codes for MemoryError and OverflowError in user-defined functions. They will produce MemoryError and DataError in Python. ---------- components: Extension Modules messages: 398997 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: Convert Python exceptions to appropriate SQLite error codes type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 09:11:40 2021 From: report at bugs.python.org (Ned Batchelder) Date: Thu, 05 Aug 2021 13:11:40 +0000 Subject: [New-bugs-announce] [issue44840] Nested if/else gets phantom else trace again (3.10) Message-ID: <1628169100.37.0.353483963834.issue44840@roundup.psfhosted.org> New submission from Ned Batchelder : Note: this is very similar to https://bugs.python.org/issue42810 This was originally reported against coverage.py: https://github.com/nedbat/coveragepy/issues/1205 ---8<------------- import linecache, sys def trace(frame, event, arg): # The weird globals here is to avoid a NameError on shutdown... 
if frame.f_code.co_filename == globals().get("__file__"): lineno = frame.f_lineno print("{} {}: {}".format(event[:4], lineno, linecache.getline(__file__, lineno).rstrip())) return trace print(sys.version) sys.settrace(trace) def func(): if A: if B: if C: if D: return False else: return False elif E and F: return True A = B = True C = False func() ------------------------- This produces this trace output: 3.10.0rc1 (default, Aug 3 2021, 15:03:55) [Clang 12.0.0 (clang-1200.0.32.29)] call 13: def func(): line 14: if A: line 15: if B: line 16: if C: line 21: elif E and F: retu 21: elif E and F: The elif on line 21 is not executed, and should not be traced. Also, if I change line 21 to `elif E:`, then the trace changes to: 3.10.0rc1 (default, Aug 3 2021, 15:03:55) [Clang 12.0.0 (clang-1200.0.32.29)] call 13: def func(): line 14: if A: line 15: if B: line 16: if C: line 22: return True retu 22: return True ---------- components: Interpreter Core keywords: 3.10regression messages: 399003 nosy: Mark.Shannon, nedbat priority: normal severity: normal status: open title: Nested if/else gets phantom else trace again (3.10) type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 09:59:45 2021 From: report at bugs.python.org (Gabor Rakosy) Date: Thu, 05 Aug 2021 13:59:45 +0000 Subject: [New-bugs-announce] [issue44841] ZipInfo crashes on filemode Message-ID: <1628171985.04.0.233917766436.issue44841@roundup.psfhosted.org> New submission from Gabor Rakosy : """ ZipInfo crashes on filemode In file /usr/lib/python3.7/zipfile.py | class ZipInfo.__slots__ Does not contain keyword 'filemode'. """ import zipfile file_zip = zipfile.ZipFile("test-one-dir.zip", mode='r') res = [] info = file_zip.infolist() print("info[0]", type(info[0]), info[0]) print("\n# ## Good") for inf in info: print("\ninf", type(inf), inf) res.append(( inf.filename, ## inf.filemode, inf.compress_type, inf.compress_size, inf.file_size)) for fileinfo in res: print("\n", fileinfo) print("\n# ## Bad") for inf in info: print("\ninf", type(inf), inf) res.append(( inf.filename, inf.filemode, inf.compress_type, inf.compress_size, inf.file_size)) for fileinfo in res: print("\n", fileinfo) ---------- components: Library (Lib) messages: 399006 nosy: G.Rakosy priority: normal severity: normal status: open title: ZipInfo crashes on filemode type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 12:15:08 2021 From: report at bugs.python.org (Manish Satwani) Date: Thu, 05 Aug 2021 16:15:08 +0000 Subject: [New-bugs-announce] [issue44842] String conversion of Path removes '/' from original url Message-ID: <1628180108.46.0.518546871425.issue44842@roundup.psfhosted.org> New submission from Manish Satwani : import pathlib p = pathlib.Path('adl://myblob.azuredatalakestore.net/local/abc/xyz') s = str(p) print(s) what you expect s to be?? 
There is a bug in path.Path.str(conversion to string) and it remove a slash s is 'adl:/myblob.azuredatalakestore.net/local/abc/xyz' <-- this is getting print....plz fix it ---------- components: Library (Lib) messages: 399008 nosy: manish.satwani priority: normal severity: normal status: open title: String conversion of Path removes '/' from original url type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 14:28:53 2021 From: report at bugs.python.org (=?utf-8?q?Filipe_La=C3=ADns?=) Date: Thu, 05 Aug 2021 18:28:53 +0000 Subject: [New-bugs-announce] [issue44843] Add CLI flag to disable hash randomization Message-ID: <1628188133.12.0.736934579893.issue44843@roundup.psfhosted.org> New submission from Filipe La?ns : There are select use-cases where hash randomization is undesirable, having a CLI option to switch it off would be very helpful. One example would be packaging, where hash randomization will make the bytecode unreproducible. Currently, we have to set PYTHONHASHSEED to a constant value. Having a CLI option (lets say -Z) would allow use to do python -Zm install artifact.whl instead of PYTHONHASHSEED=0 python -m install artifact.whl Which is something that I have to do lots of places. ---------- messages: 399026 nosy: FFY00 priority: normal severity: normal status: open title: Add CLI flag to disable hash randomization versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 14:58:31 2021 From: report at bugs.python.org (Ray Luo) Date: Thu, 05 Aug 2021 18:58:31 +0000 Subject: [New-bugs-announce] [issue44844] The command line of launching Edge on Linux hangs Message-ID: <1628189911.42.0.278044129333.issue44844@roundup.psfhosted.org> New submission from Ray Luo : Launching Chrome on Linux from command line: $ export BROWSER=google-chrome; python -m webbrowser https://httpbin.org/delay/10 It can successfully launch Chrome with the specified web page opened in a new tab. And the console command line finishes BEFORE the web page being fully loaded in the browser. That is the desirable behavior. Launching Edge on Linux from command line: $ export BROWSER=microsoft-edge; python -m webbrowser https://httpbin.org/delay/10 The command line hangs until the Edge window is closed. That hanging symptom can be resolved by writing a deliberate script to webbrowser.register("...", None, webbrowser.BackgroundBrowser("microsoft-edge")) and then use that registered browser. But it was not obvious, and it took trial-and-error to reach that solution. Could it be possible to have the "BROWSER=microsoft-edge; python -m webbrowser https://httpbin.org/delay/10" work out of the box, without hanging? Is it because Edge is not currently predefined and handled inside webbrowser.py? 
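For reference, the workaround mentioned above can be written as a short, self-contained snippet; the registration name "msedge" below is arbitrary and chosen only for illustration:

```python
import webbrowser

# Sketch of the workaround described in the report: register the
# "microsoft-edge" command as a BackgroundBrowser so open() returns
# immediately instead of waiting for the browser window to close.
webbrowser.register("msedge", None, webbrowser.BackgroundBrowser("microsoft-edge"))
webbrowser.get("msedge").open("https://httpbin.org/delay/10")
```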
It seems not an easy decision to add new browser into webbrowser.py, though, based on the 2nd and 3rd comments in this old issue: https://bugs.python.org/issue42330 ---------- components: Library (Lib) messages: 399030 nosy: rayluo priority: normal severity: normal status: open title: The command line of launching Edge on Linux hangs type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 15:55:55 2021 From: report at bugs.python.org (Patrick Reader) Date: Thu, 05 Aug 2021 19:55:55 +0000 Subject: [New-bugs-announce] [issue44845] Allow keyword arguments in code.__new__ Message-ID: <1628193355.65.0.685014983693.issue44845@roundup.psfhosted.org> New submission from Patrick Reader : Per bpo-41263, code.__new__ now uses Argument Clinic. However, it still has a / marker which prevents the use of keyword arguments (https://github.com/python/cpython/pull/21426/files#diff-6f869eb8beb7cbe4bc6817584b99ad567f88962fa67f7beca25d009dc401234dR465). It seems entirely unnecessary to have this, so could it be removed to allow easier construction of code objects from user code, or is it there for some specific reason? I can do a PR - it's a 1 line change (+ clinic output changes) (+ tests?). I don't imagine backwards-compatibility is a concern here given it's implementation-specific and basically private. Note that prior to that fix, keyword arguments were allowed in the constructor but completely ignored. ---------- components: Interpreter Core messages: 399034 nosy: pxeger priority: normal severity: normal status: open title: Allow keyword arguments in code.__new__ type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 5 19:39:22 2021 From: report at bugs.python.org (Joel Puig Rubio) Date: Thu, 05 Aug 2021 23:39:22 +0000 Subject: [New-bugs-announce] [issue44846] zipfile: cannot create zip file from files with non-utf8 filenames Message-ID: <1628206762.04.0.365517684452.issue44846@roundup.psfhosted.org> New submission from Joel Puig Rubio : I'm attempting to make a script to create ZIP archives from files which filenames are encoded in Shift-JIS. However, Python seems to limit its filenames to ASCII or UTF-8, which means that attempting to archive said files will raise an exception. This is very inconvenient. joel at bliss:~/test$ python3 -m zipfile -c mojibake.zip . 
Traceback (most recent call last): File "/usr/lib/python3.8/zipfile.py", line 457, in _encodeFilenameFlags return self.filename.encode('ascii'), self.flag_bits UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-7: ordinal not in range(128) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/lib/python3.8/zipfile.py", line 2441, in main() File "/usr/lib/python3.8/zipfile.py", line 2437, in main addToZip(zf, path, zippath) File "/usr/lib/python3.8/zipfile.py", line 2426, in addToZip addToZip(zf, File "/usr/lib/python3.8/zipfile.py", line 2421, in addToZip zf.write(path, zippath, ZIP_DEFLATED) File "/usr/lib/python3.8/zipfile.py", line 1775, in write with open(filename, "rb") as src, self.open(zinfo, 'w') as dest: File "/usr/lib/python3.8/zipfile.py", line 1517, in open return self._open_to_write(zinfo, force_zip64=force_zip64) File "/usr/lib/python3.8/zipfile.py", line 1614, in _open_to_write self.fp.write(zinfo.FileHeader(zip64)) File "/usr/lib/python3.8/zipfile.py", line 447, in FileHeader filename, flag_bits = self._encodeFilenameFlags() File "/usr/lib/python3.8/zipfile.py", line 459, in _encodeFilenameFlags return self.filename.encode('utf-8'), self.flag_bits | 0x800 UnicodeEncodeError: 'utf-8' codec can't encode characters in position 0-7: surrogates not allowed The zip command from the Linux Info-ZIP package is able to create the same archive with no issues, which I've attached to this issue. Here you can see how the proper filenames are shown in WinRAR once the right encoding is selected: https://i.imgur.com/TVcI95A.png The same should be seen on any computer using Shift-JIS as their locale. ---------- components: Library (Lib) files: mojibake.zip messages: 399049 nosy: joelpuig priority: normal severity: normal status: open title: zipfile: cannot create zip file from files with non-utf8 filenames versions: Python 3.8 Added file: https://bugs.python.org/file50204/mojibake.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 02:56:59 2021 From: report at bugs.python.org (Graham Dumpleton) Date: Fri, 06 Aug 2021 06:56:59 +0000 Subject: [New-bugs-announce] [issue44847] ABCMeta.__subclasscheck__() doesn't support duck typing. Message-ID: <1628233019.39.0.0281534175475.issue44847@roundup.psfhosted.org> New submission from Graham Dumpleton : The Python standard library has two effective implementations of helpers for the ABCMeta class. A C implementation, and a pure Python version which is only used if the C implementation isn't available (perhaps for PyPy). * https://github.com/python/cpython/blob/3.9/Lib/abc.py#L89 * https://github.com/python/cpython/blob/3.9/Lib/_py_abc.py * https://github.com/python/cpython/blob/3.9/Modules/_abc.c These two implementations behave differently. Specifically, the ABCMeta.__subclasscheck__() implementation for the C version doesn't support duck typing for the subclass argument to issubclass() when this delegates to ABCMeta.__subclasscheck__(). The Python implementation for this has no problems though. In the pure Python version it uses isinstance(). * https://github.com/python/cpython/blob/3.9/Lib/_py_abc.py#L110 In the C implementation it uses PyType_Check() which doesn't give the same result. 
* https://github.com/python/cpython/blob/3.9/Modules/_abc.c#L610 The consequence of this is that transparent object proxies used as decorators on classes (eg., as wrapt uses) will break when the C implementation us used with an error of: # def __subclasscheck__(cls, subclass): # """Override for issubclass(subclass, cls).""" # > return _abc_subclasscheck(cls, subclass) # E TypeError: issubclass() arg 1 must be a class Example of tests from wrapt and how tests using C implementation must be disabled can be found at: * https://github.com/GrahamDumpleton/wrapt/blob/develop/tests/test_inheritance_py37.py If instead of using PyType_Check() the C implementation used PyObject_IsInstance() at that point it is possible that wrapt may then work if the remainder of the C implementation is true to how the pure Python version works (not been able to test if that is the case or not as yet). ---------- components: Library (Lib) messages: 399060 nosy: grahamd priority: normal severity: normal status: open title: ABCMeta.__subclasscheck__() doesn't support duck typing. type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 04:08:14 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Fri, 06 Aug 2021 08:08:14 +0000 Subject: [New-bugs-announce] [issue44848] Upgrade macOS and Windows installers to use SQLite 3.36.0 Message-ID: <1628237294.86.0.651703128522.issue44848@roundup.psfhosted.org> New submission from Erlend E. Aasland : Upgrade macOS and Windows installers to use SQLite 3.36.0. SQLite 3.36.0 was released June 18 2021. https://www.sqlite.org/releaselog/3_36_0.html ---------- components: Windows, macOS messages: 399061 nosy: erlendaasland, ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Upgrade macOS and Windows installers to use SQLite 3.36.0 type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 06:40:46 2021 From: report at bugs.python.org (STINNER Victor) Date: Fri, 06 Aug 2021 10:40:46 +0000 Subject: [New-bugs-announce] [issue44849] test_os: test_get_set_inheritable_o_path() failed on AMD64 FreeBSD Shared 3.x Message-ID: <1628246446.17.0.0445752107297.issue44849@roundup.psfhosted.org> New submission from STINNER Victor : Since build 655 (commit 6871fd0e8e5257f3ffebd1a1b2ca50e5f494e7f6), test_os failed on AMD64 FreeBSD Shared 3.x: https://buildbot.python.org/all/#/builders/483/builds/655 ====================================================================== ERROR: test_get_set_inheritable_o_path (test.test_os.FDInheritanceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_os.py", line 3898, in test_get_set_inheritable_o_path os.set_inheritable(fd, True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [Errno 9] Bad file descriptor ---------- components: Tests messages: 399065 nosy: erlendaasland, lukasz.langa, pablogsal, vstinner priority: normal severity: normal status: open title: test_os: test_get_set_inheritable_o_path() failed on AMD64 FreeBSD Shared 3.x versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 06:42:14 2021 From: report 
at bugs.python.org (Antony Lee) Date: Fri, 06 Aug 2021 10:42:14 +0000 Subject: [New-bugs-announce] [issue44850] Could operator.methodcaller be optimized using LOAD_METHOD? Message-ID: <1628246534.03.0.387871214532.issue44850@roundup.psfhosted.org> New submission from Antony Lee : Currently, methodcaller is not faster than a plain lambda: ``` In [1]: class T: ...: a = 1 ...: def f(self): pass ...: In [2]: from operator import * In [3]: %%timeit t = T(); mc = methodcaller("f") ...: mc(t) ...: ...: 83.1 ns ± 0.862 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [4]: %%timeit t = T(); mc = lambda x: x.f() ...: mc(t) ...: ...: 81.4 ns ± 0.0508 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) ``` (on some machines, I find that it is even slower). Compare with attrgetter, which *is* faster: ``` In [5]: %%timeit t = T(); ag = attrgetter("a") ...: ag(t) ...: ...: 33.7 ns ± 0.0407 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [6]: %%timeit t = T(); ag = lambda x: x.a ...: ag(t) ...: ...: 50.1 ns ± 0.057 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) ``` Given that the operator module explicitly advertises itself as being "efficient"/"fast", it seems reasonable to try to optimize methodcaller. Looking at its C implementation, methodcaller currently uses PyObject_GetAttr followed by PyObject_Call; I wonder whether this can be optimized using a LOAD_METHOD-style approach to avoid the construction of the bound method (when applicable)? ---------- components: Library (Lib) messages: 399066 nosy: Antony.Lee priority: normal severity: normal status: open title: Could operator.methodcaller be optimized using LOAD_METHOD? _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 07:13:31 2021 From: report at bugs.python.org (Tzu-ping Chung) Date: Fri, 06 Aug 2021 11:13:31 +0000 Subject: [New-bugs-announce] [issue44851] Update bundled pip to 21.2.3 and setuptools to 57.4.0 Message-ID: <1628248411.95.0.443315719327.issue44851@roundup.psfhosted.org> New submission from Tzu-ping Chung : PR coming soon ---------- components: Distutils messages: 399072 nosy: dstufft, eric.araujo, uranusjr priority: normal severity: normal status: open title: Update bundled pip to 21.2.3 and setuptools to 57.4.0 versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 12:34:02 2021 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Fri, 06 Aug 2021 16:34:02 +0000 Subject: [New-bugs-announce] [issue44852] Add ability to wholesale silence DeprecationWarnings while running the test suite Message-ID: <1628267642.59.0.0108096773806.issue44852@roundup.psfhosted.org> New submission from Łukasz Langa : Sometimes we have the following problem: - there are way too many deprecations raised from a given library to silence them one by one; - the deprecations are different between maintenance branches and we don't want to make tests between the branches hopelessly conflicting. In particular, there is such a case right now with asyncio in 3.9 vs. later branches. 3.8 deprecated the loop= argument in a bunch of functions but due to poor warning placement, most of them were silent. This is being fixed in BPO-44815 which would go out to users in 3.9.7 but that created 220 new warnings when running test_asyncio in regression tests.
Fixing them one by one would be both tedious, and would make the 3.9 branch forever conflicting with newer branches in many asyncio test files. In 3.11 there's a new round of deprecations raised in test_asyncio, making the branches different. Moreover, those warnings are typically silenced by `assertWarns` context managers which should only be used when actually testing the warnings, *not* to silence irrelevant warnings. So, what the PR does is it introduces: - `support.ignore_deprecations_from("path.to.module", like=".*msg regex.*")`, and - `support.clear_ignored_deprecations()` The former adds a new filter to warnings, the message regex is mandatory. The latter removes only the filters that were added by the former, leaving all other filters alone. Example usage is in `test_support`, and later, should this be merged, will be in asyncio tests on the 3.9 branch. ---------- assignee: lukasz.langa components: Tests messages: 399102 nosy: lukasz.langa priority: normal severity: normal stage: patch review status: open title: Add ability to wholesale silence DeprecationWarnings while running the test suite type: enhancement versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 12:54:10 2021 From: report at bugs.python.org (Joshua Root) Date: Fri, 06 Aug 2021 16:54:10 +0000 Subject: [New-bugs-announce] [issue44853] 3.10.0rc1 published md5 and size do not match for source archives Message-ID: <1628268850.54.0.892251234836.issue44853@roundup.psfhosted.org> New submission from Joshua Root : The download page lists the following: Filename md5 size Python-3.10.0rc1.tgz c051bf7a52a45cb0ec2cefbe915417e1 40764776 Python-3.10.0rc1.tar.xz 2861cdd4cf71c6425fde1fedc14bb283 28197832 The downloaded files instead have these properties: Python-3.10.0rc1.tgz d23c2a8228705b17e8414f1660e4bb73 24955561 Python-3.10.0rc1.tar.xz edd2eb2f7f4a932ed59196cbe373e5fb 18680452 The gpg signatures do verify ok however. The md5 and size listed for the macOS installer seem to be correct. I didn't check the Windows installers. ---------- components: Installation messages: 399103 nosy: jmr, pablogsal priority: normal severity: normal status: open title: 3.10.0rc1 published md5 and size do not match for source archives type: security versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 14:37:22 2021 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Fri, 06 Aug 2021 18:37:22 +0000 Subject: [New-bugs-announce] [issue44854] Add .editorconfig to the root directory Message-ID: <1628275042.37.0.932347169049.issue44854@roundup.psfhosted.org> New submission from Łukasz Langa : EditorConfig is a cross-editor configuration file that is pretty widely adopted: https://editorconfig.org/ Adding this to the root directory will allow editors that use the file to automatically format a few details which we already enforce with `make patchcheck`: - always put an empty line at the end of the file; - remove trailing whitespace; - disallow tabs in Python (PEP 8) and C (PEP 7) files.
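A minimal sketch of what such a file could contain, mapping the three points above onto standard EditorConfig keys (the glob pattern is illustrative, not taken from an actual patch):

```
# Hypothetical .editorconfig sketch; file patterns chosen for illustration only.
root = true

[*.{py,c,h,rst}]
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
```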
---------- messages: 399123 nosy: lukasz.langa priority: normal severity: normal status: open title: Add .editorconfig to the root directory type: enhancement versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 16:12:24 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Fri, 06 Aug 2021 20:12:24 +0000 Subject: [New-bugs-announce] [issue44855] [DOC] some sqlite3 exceptions are not documented Message-ID: <1628280744.2.0.733600605983.issue44855@roundup.psfhosted.org> New submission from Erlend E. Aasland : sqlite3.InterfaceError, sqlite3.DataError, and sqlite3.InternalError are not documented. ---------- assignee: docs at python components: Documentation messages: 399139 nosy: docs at python, erlendaasland priority: normal severity: normal status: open title: [DOC] some sqlite3 exceptions are not documented _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 20:25:24 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 07 Aug 2021 00:25:24 +0000 Subject: [New-bugs-announce] [issue44856] Possible reference leak in error paths of update_bases() Message-ID: <1628295924.58.0.701091485763.issue44856@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Here: https://github.com/python/cpython/blob/17c23167942498296f0bdfffe52e72d53d66d693/Python/bltinmodule.c#L60-L88 Seems that new_base is not properly cleaned on the error paths ---------- messages: 399161 nosy: pablogsal priority: normal severity: normal status: open title: Possible reference leak in error paths of update_bases() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 6 22:20:59 2021 From: report at bugs.python.org (Cliff Cordeiro) Date: Sat, 07 Aug 2021 02:20:59 +0000 Subject: [New-bugs-announce] [issue44857] class member varibles assigned member functions create a circular reference Message-ID: <1628302859.95.0.618033141977.issue44857@roundup.psfhosted.org> New submission from Cliff Cordeiro : This class is not collected by the gc without a custom __del__ method to del or assign None to self.fn: import gc class Leak: def __init__(self): self.fn = self.x def x(self): pass gc.set_debug(gc.DEBUG_SAVEALL) l = Leak() del l gc.collect() for item in gc.garbage: print(item) ---------- components: Interpreter Core messages: 399165 nosy: cliff.cordeiro priority: normal severity: normal status: open title: class member varibles assigned member functions create a circular reference type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 7 10:02:24 2021 From: report at bugs.python.org (Tzu-ping Chung) Date: Sat, 07 Aug 2021 14:02:24 +0000 Subject: [New-bugs-announce] [issue44858] sysconfig's posix_user scheme has different platlib value to distutils' Message-ID: <1628344944.12.0.584247923318.issue44858@roundup.psfhosted.org> Change by Tzu-ping Chung : ---------- nosy: uranusjr priority: normal severity: normal status: open title: sysconfig's posix_user scheme has different platlib value to distutils' _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 7 10:14:16 2021 From: report at bugs.python.org 
(Serhiy Storchaka) Date: Sat, 07 Aug 2021 14:14:16 +0000 Subject: [New-bugs-announce] [issue44859] Improve some sqlite3 errors Message-ID: <1628345656.7.0.124562565005.issue44859@roundup.psfhosted.org> New submission from Serhiy Storchaka : * MemoryError is now raised instead of sqlite3.Warning when memory is not enough for encoding the statement to UTF-8 in Connection.__call__() and Cursor.execute(). * UnicodeEncodeError is now raised instead of sqlite3.Warning when the statement contains surrogate characters in Connection.__call__() and Cursor.execute(). * TypeError is now raised instead of ValueError for a non-string script in Cursor.execute(). * ValueError is now raised for a script containing NUL instead of truncating it in Cursor.execute(). * Exceptions raised while converting the result of the progress handler to bool are now handled correctly. Also added many tests which cover different exceptional cases. ---------- components: Extension Modules messages: 399183 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: Improve some sqlite3 errors versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 7 10:29:18 2021 From: report at bugs.python.org (Tzu-ping Chung) Date: Sat, 07 Aug 2021 14:29:18 +0000 Subject: [New-bugs-announce] [issue44860] sysconfig's posix_user scheme has different platlib value to distutils's unix_user Message-ID: <1628346558.35.0.0800108572532.issue44860@roundup.psfhosted.org> New submission from Tzu-ping Chung : On POSIX, the user scheme has a different 'platlib' location between distutils and sysconfig, despite the comment claiming they should be the same. This can be reproduced on Fedora 34's stock Python 3.9: $ docker run -it --rm -h=p fedora:34 bash ... [root at p /]# yum install python3 -y ... [root at p /]# type python3 python3 is hashed (/usr/bin/python3) [root at p /]# python3 -V Python 3.9.6 [root at p /]# python3.9 -q >>> from distutils.command.install import install >>> from distutils.dist import Distribution >>> c = install(Distribution()) >>> c.user = True >>> c.finalize_options() >>> c.install_platlib '/root/.local/lib/python3.9/site-packages' >>> import sysconfig >>> sysconfig.get_path('platlib', 'posix_user') '/root/.local/lib64/python3.9/site-packages' This issue was introduced by the sys.platlibdir value, and its usage in distutils and sysconfig. sysconfig sets posix_user's lib paths like this: 'purelib': '{userbase}/lib/python{py_version_short}/site-packages', 'platlib': '{userbase}/{platlibdir}/python{py_version_short}/site-packages', https://github.com/python/cpython/blob/a40675c659cd8c0699f85ee9ac31660f93f8c2f5/Lib/sysconfig.py#L100-L108 But distutils naively sets both to the same value that does not account for platlibdir:
---------- components: Distutils messages: 399186 nosy: dstufft, eric.araujo, frenzy, uranusjr, vstinner priority: normal severity: normal status: open title: sysconfig's posix_user scheme has different platlib value to distutils's unix_user versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 7 10:57:05 2021 From: report at bugs.python.org (Sebastian Bank) Date: Sat, 07 Aug 2021 14:57:05 +0000 Subject: [New-bugs-announce] [issue44861] csv.writer stopped to quote values with escapechar with csv.QUOTE_MINIMAL in Python 3.10 Message-ID: <1628348225.56.0.708449688444.issue44861@roundup.psfhosted.org> New submission from Sebastian Bank : AFAICT there was an undocumented change in behaviour related to the fix of https://bugs.python.org/issue12178 (also reported in https://bugs.python.org/issue12178#msg397440): Python 3.9 quotes values with escapechar: ``` import csv import io kwargs = {'escapechar': '\\'} value = 'spam\\eggs' print(value) with io.StringIO() as buf: writer = csv.writer(buf, **kwargs) writer.writerow([value]) line = buf.getvalue() print(line.strip()) with io.StringIO(line) as buf: reader = csv.reader(buf, **kwargs) (new_value,), = reader print(new_value) spam\eggs "spam\eggs" spameggs ``` - quotes escapechar - fails to double the escapechar (https://bugs.python.org/issue12178) >From https://docs.python.org/3/library/csv.html#csv.QUOTE_MINIMAL > only quote those fields which contain special characters > such as delimiter, quotechar or any of the characters in > lineterminator. The previous behaviour seems incorrect because escapechar is not explicitly mentioned, but at the same time the docs says 'such as'. The new might better matching the name 'minimal', but at the same time one might regard 'quote when in doubt' as a safer behaviour for the default quoting rule. Python 3.10: https://github.com/python/cpython/blob/5c0eed7375fdd791cc5e19ceabfab4170ad44062/Lib/test/test_csv.py#L207-L208 See also https://github.com/xflr6/csv23/actions/runs/1027687524 ---------- components: Library (Lib) messages: 399188 nosy: ebreck, taleinat, xflr6 priority: normal severity: normal status: open title: csv.writer stopped to quote values with escapechar with csv.QUOTE_MINIMAL in Python 3.10 type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 7 14:34:20 2021 From: report at bugs.python.org (=?utf-8?q?Vin=C3=ADcius_Gubiani_Ferreira?=) Date: Sat, 07 Aug 2021 18:34:20 +0000 Subject: [New-bugs-announce] [issue44862] [docs] make "Deprecated since version {deprecated}, will be removed in version {removed}" translation available Message-ID: <1628361260.67.0.944506647918.issue44862@roundup.psfhosted.org> New submission from Vin?cius Gubiani Ferreira : If we access https://docs.python.org/3.8/library/asyncio-queue.html#asyncio.Queue we can see the text Make Deprecated since version 3.8, will be removed in version 3.10 is perfectly visible in english. However if we change the language to pt-br by using the link https://docs.python.org/pt-br/3.8/library/asyncio-queue.html#asyncio.Queue We can see that the whole page is in brazilian portuguese, except that text, and we are not able to translate it in anyway that is visible in the page. What should we do in CPython so we can translate this string? 
---------- assignee: docs at python components: Documentation messages: 399197 nosy: docs at python, vini.g.fer priority: normal severity: normal status: open title: [docs] make "Deprecated since version {deprecated}, will be removed in version {removed}" translation available type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 7 15:04:47 2021 From: report at bugs.python.org (Samodya Abey) Date: Sat, 07 Aug 2021 19:04:47 +0000 Subject: [New-bugs-announce] [issue44863] Allow TypedDict to inherit from Generics Message-ID: <1628363087.29.0.710236107674.issue44863@roundup.psfhosted.org> New submission from Samodya Abey : TypedDict PEP-589 says: A TypedDict cannot inherit from both a TypedDict type and a non-TypedDict base class. So the current implementation has: `if type(base) is not _TypedDictMeta: raise TypeError(...)` This restricts the user from defining generic TypedDicts in the natural class based syntax: `class Pager(TypedDict, Generic[T]): ...` Although PEP 589 doesn't explicitly state generic support, I believe it is complete in covering the specification even if generics were involved (at least for the class based syntax). I have tried putting together a PEP from guidance of typing-sig . There is not much new contributions by that draft, except for specifying the alternative syntax and being more explicit about Generics. So I'm wondering if it would be possible to relax the constraint: TypedDict inheritance to include Generic. In my point of view `Generic` is more of a mixin, so it doesn't go against the PEP 589. Or is this change big enough to warrant a PEP? ---------- components: Library (Lib) messages: 399201 nosy: sransara priority: normal severity: normal status: open title: Allow TypedDict to inherit from Generics type: enhancement versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 8 04:42:38 2021 From: report at bugs.python.org (=?utf-8?q?J=C3=A9r=C3=A9mie_Detrey?=) Date: Sun, 08 Aug 2021 08:42:38 +0000 Subject: [New-bugs-announce] [issue44864] [argparse] Do not translate user-provided strings in `ArgumentParser.add_subparsers()` Message-ID: <1628412158.14.0.0114857952802.issue44864@roundup.psfhosted.org> New submission from J?r?mie Detrey : Dear all, In the `argparse` module, the `ArgumentParser.add_subparsers()` method may call the `_()` translation function on user-provided strings. See e.g. 3.9/Lib/argparse.py:1776 and 3.9/Lib/argparse.py:L1777: def add_subparsers(self, **kwargs): # [...] if 'title' in kwargs or 'description' in kwargs: title = _(kwargs.pop('title', 'subcommands')) description = _(kwargs.pop('description', None)) When elements `'title'` and/or `'description'` are set in `kwargs`, they will be popped from the dictionary and then fed to `_()`. However, these are user-provided strings, and it seems to me that translating them should be the user's responsibility. This seems to be the expected behavior for all other user-provided strings in the `argparse` module: see e.g. the `ArgumentParser`'s `description` parameter (in 3.9/Lib/argparse.py:1704 then 3.9/Lib/argparse.py:1312), which never gets translated by the `argparse` module. However, the default title string `'subcommands'` should still be localized. 
Therefore, I'd suggest restricting the call to `_()` to this string only, as in the following: title = kwargs.pop('title', _('subcommands')) description = kwargs.pop('description', None) I'll submit a pull request with this change. Kind regards, Jérémie. ---------- components: Library (Lib) messages: 399212 nosy: jdetrey priority: normal severity: normal status: open title: [argparse] Do not translate user-provided strings in `ArgumentParser.add_subparsers()` type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 8 10:12:29 2021 From: report at bugs.python.org (=?utf-8?q?J=C3=A9r=C3=A9mie_Detrey?=) Date: Sun, 08 Aug 2021 14:12:29 +0000 Subject: [New-bugs-announce] [issue44865] [argparse] Missing translations Message-ID: <1628431949.27.0.450799323412.issue44865@roundup.psfhosted.org> New submission from Jérémie Detrey : Dear all, There are a few strings in the `argparse` module which are not translatable through the `gettext` API. Some have already been reported: - the "--version" help text at 3.9/Lib/argparse.py:1105 (reported in issue 16786, fixed by PR 12711); - the "default" help text at 3.9/Lib/argparse.py:697 (reported in 33775, fixed by PR 12711). However, some others remain: - the "default" help text for `BooleanOptionalAction` at 3.9/Lib/argparse.py:878 (which, incidentally, will be duplicated when used with `ArgumentDefaultsHelpFormatter`); - the "argument %(argument_name)s: %(message)s" error message at 3.9/Lib/argparse.py:751; - the formatted section heading at 3.9/Lib/argparse.py:225: if the heading itself is translatable, the string "%(heading)s:" is not. More precisely, the colon right after the heading might also require localization, as some languages (e.g., French) typeset colons with a preceding non-breaking space (i.e., "%(heading)s :"). (Okay, I'll admit that this is nitpicking!) I'll submit a pull request with proposed fixes for these strings. Kind regards, Jérémie. ---------- components: Library (Lib) messages: 399216 nosy: jdetrey priority: normal severity: normal status: open title: [argparse] Missing translations type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 8 13:49:51 2021 From: report at bugs.python.org (John Joseph Morelli) Date: Sun, 08 Aug 2021 17:49:51 +0000 Subject: [New-bugs-announce] [issue44866] Inconsistent Behavior of int() Message-ID: <1628444991.92.0.991529189283.issue44866@roundup.psfhosted.org> New submission from John Joseph Morelli : I first noticed this and reported it on the W3 Schools Tutorial, the section entitled "Add Two Numbers with User Input". There were many behaviors that I did not understand, but for this bug report, I will state that the input statements present seem to return a string and under most situations will return an error if the user inputs a real number like 2.8. However, under a very specific situation, it will truncate 2.8 to 2 without error. After further investigation, I believe the following session in the IDLE's output window and editor illustrates this inconsistent behavior. Note that I have added comments after copying the session here...
>>> print(x) #want to ensure new session has x as undefined Traceback (most recent call last): File "", line 1, in print(x) NameError: name 'x' is not defined # confirmed x is undefined >>> x="2" # define x as the string "2" >>> print(x) 2 >>> print(type(x)) # confirm that x is a string value of "2" >>> y=int(x) # convert string value of "2" to integer of 2 - # according to documentation this should work - see "If x is not a # number or if base is given, then x must be a string, bytes, or # bytearray instance representing an integer literal in radix base." # at link --> https://docs.python.org/3.9/library/functions.html#int >>> print(type(y)) # ensure y is type int >>> print(y) 2 >>> z=x+".8" # create z to be the concatenation of two strings "2" and ".8" = "2.8", a string representation of the real number 2.8 >>> print(z) 2.8 >>> print(type(z)) # ensure z is a string >>> aa=int(z) # convert z to an integer (as descried in the link # above, this should NOT work Traceback (most recent call last): File "", line 1, in aa=int(z) ValueError: invalid literal for int() with base 10: '2.8' >>> w="2.8" # Define w directly as the string value of 2.8 = "2.8" >>> bb=int(w) # This should also produce an error Traceback (most recent call last): File "", line 1, in bb=int(w) ValueError: invalid literal for int() with base 10: '2.8' >>> a='2.8' >>> b=int(a) Traceback (most recent call last): File "", line 1, in b=int(a) ValueError: invalid literal for int() with base 10: '2.8' >>> print(type(a)) # Ensure a is a string >>> w="2" >>> bb=int(w) >>> print(bb) 2 >>> print(type(bb)) >>> test=int(input("What is test value? ")) #lets try inputting a # real number but as an argument to int and assigning it to test What is test value? 2.8 # this should not work either Traceback (most recent call last): File "", line 1, in test=int(input("What is test value? ")) ValueError: invalid literal for int() with base 10: '2.8' >>> # So everything here is working as expected, but... Here is code from the IDLE editor... a file called testinput1.py x = int(1) y = input("Type a number: ") print(type(y)) int_y = int(2.8) #conver y to an integer 2 and assign to int_y z = int("3") print(x) print(y) print(int_y) print(z) # I can find no documentation to suggest this should work, but it does. Here is the output in IDLE's shell Type a number: 2.8 1 2.8 2 3 Now, if I immediately go into the shell while the variables remain untouched and defined... >>> a=int(y) # Correctly, this produces the expected error Traceback (most recent call last): File "", line 1, in a=int(y) ValueError: invalid literal for int() with base 10: '2.8' After extensive testing, I conclude that after input, you may immediately apply the int() function to the resulting string, but you quickly lose that ability, resulting in the expected error. I can find no documentation to explain this behavior. If I am not overlooking something, I think this should either be in the documentation of the function int(), if it is intended to behaviour this way, or as a bug, should be corrected. NOTE, I just started learning Pytyon this weekend, so I may be just ignorant of the behavior, but I have searched a good bit and found nothing suggesting this is how int() should behalf. I have also not studied the other constructor functions. 
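The behaviour described above matches the documented int() constructor: int(2.8) in the editor example receives a float literal (which is truncated), while int("2.8") receives a string that is not an integer literal (which is rejected). A short session, not part of the original report, separating the cases:

```python
int(2.8)            # 2 -- float argument: truncated toward zero
int("2")            # 2 -- string containing an integer literal
int("2.8")          # ValueError: invalid literal for int() with base 10: '2.8'
int(float("2.8"))   # 2 -- convert the string to a float first, then truncate
```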
---------- assignee: docs at python components: Build, Documentation, IDLE, Library (Lib), Windows files: function_int_08Aug21.txt messages: 399224 nosy: TheDoctor165, docs at python, paul.moore, steve.dower, terry.reedy, tim.golden, zach.ware priority: normal severity: normal status: open title: Inconsistent Behavior of int() type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50205/function_int_08Aug21.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 8 15:16:15 2021 From: report at bugs.python.org (Guo Ci Teo) Date: Sun, 08 Aug 2021 19:16:15 +0000 Subject: [New-bugs-announce] [issue44867] types.MappingProxyType and collections.defaultdict Message-ID: <1628450175.86.0.732506438572.issue44867@roundup.psfhosted.org> New submission from Guo Ci Teo : `types.MappingProxyType` is documented as 'Read-only proxy of a mapping'. But if used with a `collections.defaultdict` mapping, it can modify the underlying mapping. ``` import collections, types dd = collections.defaultdict(set) mpt = types.MappingProxyType(dd) mpt['__getitem__'] # key inserted mpt.get('get') # key not inserted print(dd.items()) # dict_items([('__getitem__', set())]) ``` ---------- components: Library (Lib) messages: 399234 nosy: guoci priority: normal severity: normal status: open title: types.MappingProxyType and collections.defaultdict type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 8 16:18:13 2021 From: report at bugs.python.org (Andrei Kulakov) Date: Sun, 08 Aug 2021 20:18:13 +0000 Subject: [New-bugs-announce] [issue44868] misleading error about fd / follow_symlinks from os.stat() Message-ID: <1628453893.12.0.662297515485.issue44868@roundup.psfhosted.org> New submission from Andrei Kulakov : (note the actual relevant code is in posixmodule.c) os.stat() error can be confusing and misleading when given an fd and with follow_symlinks=False: ValueError: stat: cannot use fd and follow_symlinks together It's less bad when os.stat() is used directly because the user would look at the signature and would have provided the follow_symlinks=False directly, but it's confusing when used indirectly by other function. I've ran into this when reviewing https://github.com/python/cpython/pull/27524 list(os.fwalk(1)) /usr/local/Cellar/python at 3.9/3.9.1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/os.py in fwalk(top, topdown, onerror, follow_symlinks, dir_fd) 467 # lstat()/open()/fstat() trick. 468 if not follow_symlinks: --> 469 orig_st = stat(top, follow_symlinks=False, dir_fd=dir_fd) 470 topfd = open(top, O_RDONLY, dir_fd=dir_fd) 471 try: ValueError: stat: cannot use fd and follow_symlinks together ---- A few things are confusing here: I did not use follow_symlinks argument; I can see from traceback that the arg is used but set to False, which is the usual meaning of "do not use follow_symlinks". In addition, fwalk() can probably check that `top` arg is a string and raise an error stating that it should be a string if it's not. If that's done, this issue will no longer happen for current code anywhere in os module, but stat(follow_symlinks=False) is also used in shutil and pathlib (I didn't check if fd may be passed in those cases), but also in 3rd party libraries. 
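The error can also be reproduced without fwalk(); a direct call, assuming file descriptor 1 is open as it normally is, raises the same message quoted in the traceback above:

```python
import os

# Passing a file descriptor together with follow_symlinks=False triggers
# the check directly (same ValueError as in the fwalk() traceback above).
os.stat(1, follow_symlinks=False)
```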
So I think it would be clearer to rephrase the error to say stat: cannot use fd and follow_symlinks set to False or NULL together (adding NULL as it may be used from C code). ---------- components: C API, Library (Lib) messages: 399237 nosy: andrei.avk priority: normal severity: normal status: open title: misleading error about fd / follow_symlinks from os.stat() type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 8 23:30:23 2021 From: report at bugs.python.org (Eduardo Morales) Date: Mon, 09 Aug 2021 03:30:23 +0000 Subject: [New-bugs-announce] [issue44869] macOS Monterey malloc issue Message-ID: <1628479823.5.0.245309193527.issue44869@roundup.psfhosted.org> New submission from Eduardo Morales : Running on macOS Monterey throws the following error: ```malloc: *** error for object 0x7ffb5ea1a120: pointer being freed was not allocatedPython(4899,0x1061a8600)``` This started happening right after upgrading to the new macOS Beta. ---------- messages: 399247 nosy: edumorlom priority: normal severity: normal status: open title: macOS Monterey malloc issue versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 07:49:52 2021 From: report at bugs.python.org (Christian Degenkolb) Date: Mon, 09 Aug 2021 11:49:52 +0000 Subject: [New-bugs-announce] [issue44870] email.message_from_bytes not working on BytesIO() object Message-ID: <1628509792.47.0.257700542933.issue44870@roundup.psfhosted.org> New submission from Christian Degenkolb : Hi, the following minimal working example of the problem from io import BytesIO from os import read import email fp = BytesIO() with open('mail.eml', 'rb') as f: filecontent = f.read() print("type(filecontent)= ", type(filecontent)) fp.write(filecontent) mailobj = email.message_from_bytes(fp) produces the following exception $ python testparser.py type(filecontent)=
with regards Christian ---------- components: email messages: 399259 nosy: barry, cd311, r.david.murray priority: normal severity: normal status: open title: email.message_from_bytes not working on BytesIO() object type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 10:14:54 2021 From: report at bugs.python.org (Andrei Zene) Date: Mon, 09 Aug 2021 14:14:54 +0000 Subject: [New-bugs-announce] [issue44871] Threading memory leak Message-ID: <1628518494.18.0.597101898312.issue44871@roundup.psfhosted.org> New submission from Andrei Zene : In an application where we were handling incoming requests in new threads, we noticed that the memory usage grew over time. After trying to understand what's going on, i was able to reproduce this with a smaller python script that i've attached. What we do: - start a thread - the thread allocates some memory - at some point later we join the thread Notice that this seems to be more like a race-condition because it doesn't reproduce without adding some delays between the creation of different threads. I've added a comment in the file that basically commenting one time.sleep makes the leak to not reproduce anymore. On the other side, I was able to reproduce this consistently with every version of python on mulitple systems but only on Linux. On windows it doesn't reproduce. ---------- files: threading_leak.py messages: 399267 nosy: andzn priority: normal severity: normal status: open title: Threading memory leak type: performance versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50206/threading_leak.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 12:12:13 2021 From: report at bugs.python.org (Irit Katriel) Date: Mon, 09 Aug 2021 16:12:13 +0000 Subject: [New-bugs-announce] [issue44872] FrameObject uses the old trashcan macros Message-ID: <1628525533.23.0.954402533507.issue44872@roundup.psfhosted.org> New submission from Irit Katriel : Py_TRASHCAN_SAFE_BEGIN/END are the old macros, and they are currently broken (see Issue40608). We should use Py_TRASHCAN_BEGIN/END instead. ---------- components: Interpreter Core messages: 399274 nosy: iritkatriel priority: normal severity: normal status: open title: FrameObject uses the old trashcan macros versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 12:42:51 2021 From: report at bugs.python.org (Andrei Kulakov) Date: Mon, 09 Aug 2021 16:42:51 +0000 Subject: [New-bugs-announce] [issue44873] base64 RFC4648 test cases Message-ID: <1628527371.14.0.0350168122984.issue44873@roundup.psfhosted.org> New submission from Andrei Kulakov : RFC 4648 [1] has added a list of encoding test cases -- see section 10 of the RFC. It might be nice to add a test function that is a direct copy of these test cases. This will make conformance to this RFC clearer (actually right now we don't state conformance with 4648 in the docs except for b32 hex encode/decode, but I'm planning to update the docs in a separate issue). This will also let us easily keep test cases in sync with future updates in the RFC. 
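For the base64 alphabet, the section 10 vectors are short; a rough sketch of what such a test could look like (an illustration only, not the wording of an actual patch):

```python
import base64
import unittest

class RFC4648Base64Vectors(unittest.TestCase):
    # Test vectors from RFC 4648, section 10 (base64 only).
    vectors = [
        (b"", b""),
        (b"f", b"Zg=="),
        (b"fo", b"Zm8="),
        (b"foo", b"Zm9v"),
        (b"foob", b"Zm9vYg=="),
        (b"fooba", b"Zm9vYmE="),
        (b"foobar", b"Zm9vYmFy"),
    ]

    def test_b64(self):
        for plain, encoded in self.vectors:
            self.assertEqual(base64.b64encode(plain), encoded)
            self.assertEqual(base64.b64decode(encoded), plain)

if __name__ == "__main__":
    unittest.main()
```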
The tests are similar to what we already have except our test cases go from '' => 'abc', and their test cases from '' => 'foobar' (by adding a character at a time in succession). [also some of our test cases go from '' to 'abcde', so a bit inconsistent.] I've confirmed that all of their test cases pass with Python dev branch code. I can put up a PR if this sounds good. [1] https://datatracker.ietf.org/doc/html/rfc4648.html ---------- components: Library (Lib) messages: 399276 nosy: andrei.avk priority: normal severity: normal status: open title: base64 RFC4648 test cases type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 14:58:05 2021 From: report at bugs.python.org (Irit Katriel) Date: Mon, 09 Aug 2021 18:58:05 +0000 Subject: [New-bugs-announce] [issue44874] Deprecate Py_TRASHCAN_SAFE_BEGIN/END Message-ID: <1628535485.5.0.0461415222233.issue44874@roundup.psfhosted.org> New submission from Irit Katriel : The old trashcan macros - Py_TRASHCAN_SAFE_BEGIN/END are unsafe (see Issue40608). They were removed from the limited C API in 3.9: https://github.com/python/cpython/blob/main/Doc/whatsnew/3.9.rst#removed-1 They should be removed altogether, in favour of Py_TRASHCAN_BEGIN/END. Since they are not documented, I think this would be done by changing the comment before their definition in Include/cpython/object.h. ---------- components: Interpreter Core messages: 399285 nosy: iritkatriel priority: normal severity: normal status: open title: Deprecate Py_TRASHCAN_SAFE_BEGIN/END versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 18:33:48 2021 From: report at bugs.python.org (George King) Date: Mon, 09 Aug 2021 22:33:48 +0000 Subject: [New-bugs-announce] [issue44875] Update dis.findlinestarts documentation to reflect new usage of `co_lines` (PEP 626) Message-ID: <1628548428.41.0.569263847778.issue44875@roundup.psfhosted.org> New submission from George King : `dis.findlinestarts()` has been changed to use the new `co_lines()` function. (Blame indicates commit 877df851c3e by Mark Shannon.) However the docs currently state that it uses the older `co_firstlineno` and `co_lnotab`: https://docs.python.org/3.10/library/dis.html#dis.findlinestarts. My cursory understanding of `dis.py` internals is that `get_instructions` relies on `findlinestarts`, implying that both of these APIs are going to return different line numbers than they did previously. I am perfectly fine with this, and hopeful that the PEP 626 changes will improve tool accuracy. At minimum the `dis` docs should be updated. I also suggest that some kind of note about this be added to the PEP 626 text, because the way it reads now suggests that it avoids breakage by creating the new `co_lines` API. However it seems that users of the higher level dis APIs are going to see subtly different behavior. FWIW I am fine with the change, and I hope this doesn't instigate a reversion to the old behavior. `co_lnotab` semantics were very cryptic and seemed rather broken when I attempted to use them years ago. I am revisiting an experimental code coverage tool in part because of the PEP.
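On 3.10 and later the relationship between the two APIs can be inspected directly; a small sketch (not from the report) comparing them on a toy function:

```python
import dis

def f(x):
    if x:
        return 1
    return 2

# co_lines() (PEP 626) yields (start_offset, end_offset, lineno) tuples;
# dis.findlinestarts() now derives its (offset, lineno) pairs from it.
print(list(f.__code__.co_lines()))
print(list(dis.findlinestarts(f.__code__)))
```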
---------- assignee: docs at python components: Documentation messages: 399290 nosy: Mark.Shannon, docs at python, gwk priority: normal severity: normal status: open title: Update dis.findlinestarts documentaiton to reflect new usage of `co_lines` (PEP 626) versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 18:58:31 2021 From: report at bugs.python.org (Joel Gibson) Date: Mon, 09 Aug 2021 22:58:31 +0000 Subject: [New-bugs-announce] [issue44876] .replace functions in datetime do not call __new__ Message-ID: <1628549911.12.0.643069299871.issue44876@roundup.psfhosted.org> New submission from Joel Gibson : I've come here while investigating a segfault when datetime.replace(...) was called with a Pandas Timestamp object: https://github.com/pandas-dev/pandas/issues/42305 The Python implementation of datetime.replace https://github.com/python/cpython/blob/03e5647ab07c7d2a05094fc3b5ed6eba6fc01349/Lib/datetime.py#L1823 is polymorphic, in the sense that if a custom object like pd.Timestamp is passed in, a pd.Timestamp will come out rather than a datetime. This works just fine (copying and using Python code makes this segfault disappear). The C implementation is also polymorphic https://github.com/python/cpython/blob/03e5647ab07c7d2a05094fc3b5ed6eba6fc01349/Modules/_datetimemodule.c#L5845 but I think that something in the C implementation is wrong: eventually tp_alloc gets called for the passed type, but never tp_new, and then afterwards the custom type is treated just like a regular datetime. In the pd.Timestamp case, there are some extra fields that only get set in the __new__ method, leading to a segfault later when they're accessed. I'm not familiar enough with the C/CPython interface (especially object setup and initialisation) to tell where a fix for this should go, I would assume that the line https://github.com/python/cpython/blob/03e5647ab07c7d2a05094fc3b5ed6eba6fc01349/Modules/_datetimemodule.c#L5872 should be replaced by a call to PyObject_new(Py_TYPE(self), ...), or something similar. This also affects date.replace and time.replace. ---------- components: Library (Lib) messages: 399295 nosy: joelgibson priority: normal severity: normal status: open title: .replace functions in datetime do not call __new__ type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 9 19:37:13 2021 From: report at bugs.python.org (Quellyn Snead) Date: Mon, 09 Aug 2021 23:37:13 +0000 Subject: [New-bugs-announce] [issue44877] Python > 3.7 build fails with IBM XL compiler Message-ID: <1628552233.98.0.792897981293.issue44877@roundup.psfhosted.org> New submission from Quellyn Snead : Python 3.8 and above fails to build with the IBM XL compiler on Power9 platforms. ## System and Environment Information ``` $ arch ppc64le ``` ``` $ echo $CC xlc -F/projects/opt/ppc64le/ibm/xlc-16.1.1.7/xlC/16.1.1/etc/xlc.cfg.rhel.7.8.gcc.8.3.0.cuda.10.1 ``` ``` $ xlc --version IBM XL C/C++ for Linux, V16.1.1 (5725-C73, 5765-J13) Version: 16.01.0001.0007 ``` ## Test Procedure ``` $ git clone git at github.com:quellyn/cpython.git $ cd cpython $ checkout 3.8 $ ./configure --with-pydebug $ make -j2 | tee -a make.out ``` I tested for 3.7, 3.8, 3.9, 3.10, and master. In all cases the `make` failed for versions 3.8+. I've attached both the `config.log` and the make output logs for all. 
- Power9 architecture (ppc64le) - IBM XL C/C++ 16.1.1.7 Python 3.7 builds without issue. ---------- components: Installation files: cpython.tar.gz messages: 399296 nosy: quellyn priority: normal severity: normal status: open title: Python > 3.7 build fails with IBM XL compiler type: compile error versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50207/cpython.tar.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 10 09:42:38 2021 From: report at bugs.python.org (Mark Shannon) Date: Tue, 10 Aug 2021 13:42:38 +0000 Subject: [New-bugs-announce] [issue44878] Clumsy dispatching on interpreter entry. Message-ID: <1628602958.73.0.911240136542.issue44878@roundup.psfhosted.org> New submission from Mark Shannon : On entering the interpreter (_PyEval_EvalFrameDefault) we need to check for tracing in order to record the call. However, we don't do this cleanly, resulting in slow dispatch to the non-quickened instruction on every call/next. ---------- assignee: Mark.Shannon messages: 399324 nosy: Mark.Shannon priority: normal severity: normal status: open title: Clumsy dispatching on interpreter entry. type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 10 10:28:16 2021 From: report at bugs.python.org (chen-y0y0) Date: Tue, 10 Aug 2021 14:28:16 +0000 Subject: [New-bugs-announce] [issue44879] How to insert newline characters as normal characters while input()? Message-ID: <1628605696.03.0.149874041325.issue44879@roundup.psfhosted.org> New submission from chen-y0y0 : # I know, if I press the enter key while input(), the method will be completed and return a str value. # I am trying to insert newline characters as normal characters during the input() method. # The "normal characters" means: # 1. It can be deleted by backspace('\x7b') or EOF or the delete key. # 2. It can be interpreted as a normal byte. # I tried with the readline module, it did work. But it may crash, like: # Traceback (most recent call last): # File "", line 1, in # SyntaxError: multiple statements found while compiling a single statement ---------- components: Argument Clinic, FreeBSD, IO, Interpreter Core, Windows messages: 399327 nosy: koobs, larry, paul.moore, prasechen, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: How to insert newline characters as normal characters while input()? type: crash versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 10 12:24:09 2021 From: report at bugs.python.org (Irit Katriel) Date: Tue, 10 Aug 2021 16:24:09 +0000 Subject: [New-bugs-announce] [issue44880] Document code.replace() Message-ID: <1628612649.5.0.768299740297.issue44880@roundup.psfhosted.org> New submission from Irit Katriel : code.replace() was added in issue37032. Needs to be documented.
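As a hedged illustration of the API to be documented (not the eventual docs text), CodeType.replace() returns a copy of a code object with the selected co_* fields swapped out, leaving the original untouched:

```python
def f(x):
    return x + 1

code = f.__code__
# replace() accepts co_* keyword arguments and returns a new code object.
new_code = code.replace(co_name='g')
print(code.co_name, new_code.co_name)  # prints: f g
```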
---------- messages: 399336 nosy: iritkatriel, vstinner priority: normal severity: normal status: open title: Document code.replace() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 10 17:07:59 2021 From: report at bugs.python.org (Neil Schemenauer) Date: Tue, 10 Aug 2021 21:07:59 +0000 Subject: [New-bugs-announce] [issue44881] Consider integration of GC_UNTRACK with TRASHCAN Message-ID: <1628629679.48.0.759166908477.issue44881@roundup.psfhosted.org> New submission from Neil Schemenauer : The fix for bpo-33930 includes a somewhat mysterious comment: // The Py_TRASHCAN mechanism requires that we be able to // call PyObject_GC_UnTrack twice on an object. I wonder if we can just integrate the untrack into the body of the trashcan code. Then, the explicit call to untrack in the dealloc function body can be removed. That removes the risk of incorrectly using the macro version. I suspect the reason this was not done originally is because the original trashcan mechanism did not use the GC header pointers to store objects. Now, any object that uses the trashcan *must* be a GC object. ---------- messages: 399356 nosy: nascheme priority: normal severity: normal stage: needs patch status: open title: Consider integration of GC_UNTRACK with TRASHCAN type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 10 18:22:25 2021 From: report at bugs.python.org (Samwyse) Date: Tue, 10 Aug 2021 22:22:25 +0000 Subject: [New-bugs-announce] [issue44882] add .rollback() for in-place filters Message-ID: <1628634145.27.0.557852545267.issue44882@roundup.psfhosted.org> New submission from Samwyse : Sometimes bad things happen when processing an in-place filter, leaving an empty or incomplete input file and a backup file that needs to recovered. The FileInput class has all the information needed to do this, but it is in private instance variables. A .rollback() method could close the current file and rename the backup file to its original name. For example: for line in fileinput.input(inplace=True): try: ... except SomeError: fileinput.rollback(close=False) # continue with next file A simplistic implementation could be: def rollback(self, close=True): if self._backupfilename: os.rename(self._backupfilename, self.filename) self._backupfilename = None if close: self.close() else: self.nextfile() ---------- components: Library (Lib) messages: 399361 nosy: samwyse priority: normal severity: normal status: open title: add .rollback() for in-place filters type: enhancement versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 10 23:07:44 2021 From: report at bugs.python.org (Elvis Pranskevichus) Date: Wed, 11 Aug 2021 03:07:44 +0000 Subject: [New-bugs-announce] [issue44883] configure --with-openssl-rpath=DIR too eager about existence of DIR Message-ID: <1628651264.32.0.578906783546.issue44883@roundup.psfhosted.org> New submission from Elvis Pranskevichus : https://bugs.python.org/issue43466 added a way to set OpenSSL rpath explicitly via --with-openssl-rpath=DIR, which is cool! 
However, the current configuration code checks for the presence of the specified directory eagerly, which breaks setups where both OpenSSL and Python are being built at the same time, but not necessarily installed to the runtime location (think omnibus debs). Unless there's a good reason why an eager check is needed, I think it should be dropped to ease packaging. ---------- assignee: christian.heimes components: Installation, SSL messages: 399368 nosy: Elvis.Pranskevichus, christian.heimes priority: normal severity: normal status: open title: configure --with-openssl-rpath=DIR too eager about existence of DIR versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 01:43:18 2021 From: report at bugs.python.org (francois-xavier callewaert) Date: Wed, 11 Aug 2021 05:43:18 +0000 Subject: [New-bugs-announce] [issue44884] logging Formatter behavior when using msecs and braces : '{' Message-ID: <1628660598.09.0.631024760388.issue44884@roundup.psfhosted.org> New submission from francois-xavier callewaert : ``` >>> import logging >>> logging.getLogger().handlers[0].setFormatter(logging.Formatter(fmt='{asctime} {message}', style='{')) >>> logging.error("hello") 2021-08-11 01:04:54,972 hello ``` Wait. I come from a place where we use '.' as a decimal separator ... ``` >>> logging.getLogger().handlers[0].setFormatter(logging.Formatter(fmt='{asctime}.{msecs:03.0f} {message}', style='{', datefmt="%Y-%m-%d %H:%M:%S")) >>> logging.error("hello") 2021-08-11 01:06:27.471 hello ``` All very reasonable. I know my date time formatting and my brace formatting so I'm good or am I ... ``` >>> import time, math >>> for i in range(2500): a= (lambda : (time.sleep(0.0004), (logging.error("Whaaat!") )if math.modf(time.time())[0]>0.9995 else 0))() ... 2021-08-11 01:26:40.1000 Whaaat! ``` You'll hopefully agree that formatting a msecs as 1000 is plain wrong. Can I get around this ? the best / simplest, I've found is ``` >>> logging.Formatter.default_msec_format = "%s.%03d" >>> logging.getLogger().handlers[0].setFormatter(logging.Formatter(fmt='{asctime} {message}', style='{')) >>> for i in range(2500): a= (lambda : (time.sleep(0.0004), (logging.error("Now that's ok") )if math.modf(time.time())[0]>0.9995 else 0))() ... 2021-08-11 01:33:46.999 Now that's ok ``` Having to rely / teach /learn about "Old string formatting" in 2021 is not ideal. Can you suggest something better ? or would it be palatable to make a "careful" modification in logging/__init__.py (see below) ? 
replace ``` self.msecs = (ct - int(ct)) * 1000 ``` by ``` self.msecs = math.floor((ct - int(ct)) * 1000) #requires importing math ``` or ``` self.msecs = int((ct - int(ct)) * 1000) + 0.0 ``` ---------- components: Library (Lib) messages: 399371 nosy: fxcallewaert priority: normal severity: normal status: open title: logging Formatter behavior when using msecs and braces : '{' type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 03:59:02 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Wed, 11 Aug 2021 07:59:02 +0000 Subject: [New-bugs-announce] [issue44885] Incorrect exception highlighting for fstring format Message-ID: <1628668742.09.0.246164919692.issue44885@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Given this code: print(f"Here is that pesky {xxx/2:.3f} again") The traceback prints: Traceback (most recent call last): File "/home/pablogsal/github/python/main/lel.py", line 1, in print(f"Here is that pesky {xxx/2:.3f} again") ^^^ NameError: name 'xxx' is not defined Removing the formatting part ":.3f" makes it work as expected ---------- components: Interpreter Core messages: 399372 nosy: BTaskaya, ammar2, pablogsal priority: normal severity: normal status: open title: Incorrect exception highlighting for fstring format type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 04:38:49 2021 From: report at bugs.python.org (Thomas Trummer) Date: Wed, 11 Aug 2021 08:38:49 +0000 Subject: [New-bugs-announce] [issue44886] asyncio: create_datagram_endpoint() does not return a DatagramTransport Message-ID: <1628671129.22.0.924495128156.issue44886@roundup.psfhosted.org> New submission from Thomas Trummer : According to the documentation[1] loop.create_datagram_endpoint() returns an asyncio.DatagramTransport. However on Windows this is not the case when the ProactorEventLoop is used (which seems to be the default since Python 3.8). This is a problem because a DatagramProtocol subclass needs a downcast in order to satisfy the type system (or mypy for that matter). 
[1] https://docs.python.org/3/library/asyncio-protocol.html#asyncio.DatagramTransport --- # Will print: False import asyncio class EchoServerProtocol(asyncio.DatagramProtocol): def connection_made(self, transport): print(type(transport), isinstance(transport, asyncio.DatagramTransport)) async def main(): transport, protocol = await asyncio.get_running_loop().create_datagram_endpoint( lambda: EchoServerProtocol(), local_addr=('127.0.0.1', 9999)) try: await asyncio.sleep(5) finally: transport.close() asyncio.run(main()) ---------- components: asyncio messages: 399376 nosy: Thomas Trummer, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio: create_datagram_endpoint() does not return a DatagramTransport versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 06:04:26 2021 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Wed, 11 Aug 2021 10:04:26 +0000 Subject: [New-bugs-announce] [issue44887] test_input_tty hangs when run multiple times in the same process on macOS 10.15 Message-ID: <1628676266.46.0.173635281643.issue44887@roundup.psfhosted.org> New submission from Łukasz Langa : (I'm still investigating at the moment whether something changed in my environment.) Running the following right now hangs on test_input_tty for me: ./python.exe -m test test_builtin test_builtin -v This fails on all branches up to and including 3.7, so I assume this is environment-specific unless it's a regression due to a change that was backported all the way back to 3.7, which is out of the question as the last functional commit on 3.7 was back in June. Things I tried so far: - rebooting; - using another terminal app (I use iTerm2 by default, tried Terminal.app too); - another shell (I use fish by default, tried bash 5.0 as well); - a non-pydebug build (I use pydebug builds by default to run -R:) The test in question is using readline if available and `sysconfig.get_config_vars()['HAVE_LIBREADLINE']` returns 1. I'll be trying to check if that works for me next. ---------- messages: 399380 nosy: lukasz.langa priority: low severity: normal stage: test needed status: open title: test_input_tty hangs when run multiple times in the same process on macOS 10.15 versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 07:35:40 2021 From: report at bugs.python.org (Tee KOBAYASHI) Date: Wed, 11 Aug 2021 11:35:40 +0000 Subject: [New-bugs-announce] [issue44888] ssl.OP_LEGACY_SERVER_CONNECT missing Message-ID: <1628681740.09.0.0670758507991.issue44888@roundup.psfhosted.org> New submission from Tee KOBAYASHI : Please implement an ssl.OP_LEGACY_SERVER_CONNECT constant that corresponds to SSL_OP_LEGACY_SERVER_CONNECT in C. This is required to make OpenSSL 3.0.0 behave like 1.1.1.
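A sketch of how the requested constant would be used once added. Since ssl.OP_LEGACY_SERVER_CONNECT does not exist yet, a raw OpenSSL option bit (0x4, an assumption here) is OR-ed into the context options purely for illustration:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Assumed numeric value of SSL_OP_LEGACY_SERVER_CONNECT; the point of the
# report is that a named ssl.OP_LEGACY_SERVER_CONNECT constant should exist.
OP_LEGACY_SERVER_CONNECT = 0x00000004
ctx.options |= OP_LEGACY_SERVER_CONNECT
print(bool(ctx.options & OP_LEGACY_SERVER_CONNECT))
```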
---------- assignee: christian.heimes components: SSL messages: 399386 nosy: christian.heimes, xtkoba priority: normal severity: normal status: open title: ssl.OP_LEGACY_SERVER_CONNECT missing _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 08:43:20 2021 From: report at bugs.python.org (Ken Jin) Date: Wed, 11 Aug 2021 12:43:20 +0000 Subject: [New-bugs-announce] [issue44889] Specialize LOAD_METHOD with PEP 659 adaptive interpreter Message-ID: <1628685800.71.0.329647990646.issue44889@roundup.psfhosted.org> New submission from Ken Jin : Possible specializations: - LOAD_METHOD_CACHED Cache the method. We only need to check that type(o) and o.__dict__ was not modified. - LOAD_METHOD_CLASS For classmethods. Less speedup expected. - LOAD_METHOD_MODULE For module methods. Uncommon (<10%). Please see https://github.com/faster-cpython/ideas/issues/81 for more details. ---------- components: Interpreter Core messages: 399388 nosy: kj priority: normal severity: normal status: open title: Specialize LOAD_METHOD with PEP 659 adaptive interpreter versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 11:32:36 2021 From: report at bugs.python.org (Irit Katriel) Date: Wed, 11 Aug 2021 15:32:36 +0000 Subject: [New-bugs-announce] [issue44890] Enable serialisation stats collection when Py_Debug Message-ID: <1628695956.18.0.391304040066.issue44890@roundup.psfhosted.org> New submission from Irit Katriel : Always collect stats under Py_Debug, which makes them available through the python api. Printing at interpreter exit is still disabled by default. ---------- components: Interpreter Core messages: 399401 nosy: iritkatriel priority: normal severity: normal status: open title: Enable serialisation stats collection when Py_Debug type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 16:01:19 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Wed, 11 Aug 2021 20:01:19 +0000 Subject: [New-bugs-announce] [issue44891] Tests for `id(a) == id(a * 1)` for `bytes` and `str` Message-ID: <1628712079.84.0.245411182001.issue44891@roundup.psfhosted.org> New submission from Nikita Sobolev : While working on `RustPython` (original issue: https://github.com/RustPython/RustPython/issues/2840), I've noticed that `tuple` in CPython has explicit tests that `id` does not change when multiplied by `1`, related: - https://github.com/python/cpython/blob/64a7812c170f5d46ef16a1517afddc7cd92c5240/Lib/test/seq_tests.py#L322 - https://github.com/python/cpython/blob/64a7812c170f5d46ef16a1517afddc7cd92c5240/Lib/test/seq_tests.py#L286-L287 But, I cannot find similar tests for `str` and `bytes` which also have the same behavior: - `str`: https://github.com/python/cpython/blob/64a7812c170f5d46ef16a1517afddc7cd92c5240/Objects/unicodeobject.c#L12709-L12710 - `bytes`: https://github.com/python/cpython/blob/64a7812c170f5d46ef16a1517afddc7cd92c5240/Objects/bytesobject.c#L1456-L1458 Code: ```python >>> b = b'abc' >>> id(b), id(b * 1), id(b) == id(b * 1) (4491073360, 4491073360, True) >>> s = 'abc' >>> id(s), id(s * 1), id(s) == id(s * 1) (4489513776, 4489513776, True) ``` If tests are indeed missing and should be added, I would love to contribute them. 
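A minimal sketch of what such a test could look like (the test and class names here are made up); it relies on the CPython behaviour linked in the report above, where repeating a str or bytes object once returns the same object:

```python
import unittest

class TimesOneIdentityTest(unittest.TestCase):
    def test_str_and_bytes_times_one(self):
        # CPython returns the original object for `x * 1` on str and bytes.
        for value in ('abc', b'abc'):
            with self.subTest(value=value):
                self.assertIs(value, value * 1)

if __name__ == '__main__':
    unittest.main()
```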
---------- components: Tests messages: 399414 nosy: sobolevn priority: normal severity: normal status: open title: Tests for `id(a) == id(a * 1)` for `bytes` and `str` type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 16:32:21 2021 From: report at bugs.python.org (Diego Ramirez) Date: Wed, 11 Aug 2021 20:32:21 +0000 Subject: [New-bugs-announce] [issue44892] Percentage character (%) inside a comment is badly recognized when using configparser Message-ID: <1628713941.43.0.342717269726.issue44892@roundup.psfhosted.org> New submission from Diego Ramirez : On the Pip GitHub issue tracker (https://github.com/pypa/pip/issues/10348), a user reported a strange behaviour when using a config file (setup.cfg) in their project. The config file had a percentage character ("%") inside a comment. But the module "configparser" failed with this traceback: configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: "%' in string formatting We believe the character was incorrectly treated as interpolation syntax, when it was just part of an inline comment. Is there any way to fix this bug? ---------- components: Library (Lib) messages: 399415 nosy: DiddiLeija priority: normal severity: normal status: open title: Percentage character (%) inside a comment is badly recognized when using configparser type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 17:33:03 2021 From: report at bugs.python.org (Ronny Pfannschmidt) Date: Wed, 11 Aug 2021 21:33:03 +0000 Subject: [New-bugs-announce] [issue44893] importlib.metadata Entrypoint has a broken _asdict Message-ID: <1628717583.56.0.19105113048.issue44893@roundup.psfhosted.org> New submission from Ronny Pfannschmidt : Due to ``` def __iter__(self): """ Supply iter so one may construct dicts of EntryPoints easily. """ return iter((self.name, self)) ``` the default namedtuple _asdict method is broken: instead of returning the fields, recursive objects are returned as ``` (Pdb) v EntryPoint(name='.git', value='setuptools_scm.git:parse', group='setuptools_scm.parse_scm') (Pdb) v._asdict() {'name': '.git', 'value': EntryPoint(name='.git', value='setuptools_scm.git:parse', group='setuptools_scm.parse_scm')} (Pdb) type(v) (Pdb) ``` ---------- components: Library (Lib) messages: 399419 nosy: Ronny.Pfannschmidt priority: normal severity: normal status: open title: importlib.metadata Entrypoint has a broken _asdict type: behavior versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 19:56:49 2021 From: report at bugs.python.org (Xiaoling Bao) Date: Wed, 11 Aug 2021 23:56:49 +0000 Subject: [New-bugs-announce] [issue44894] HTTP request handler should check sys.stderr for None before use for logging Message-ID: <1628726209.14.0.0889982500329.issue44894@roundup.psfhosted.org> New submission from Xiaoling Bao : This is about the HTTP server library (found on Windows with Python 3.9, not sure about other platforms). In file Lib\http\server.py, we define: class BaseHTTPRequestHandler(...): def log_message(self, format, *args): sys.stderr.write(...) In certain cases, sys.stderr could be None and thus this function call will throw an exception.
My use case: I created an XMLRPC server (SimpleXMLRPCRequestHandler derives from BaseHTTPRequestHandler) within a Windows service. I guess with that combination, sys.stderr will be None. When this issue happens, the client got empty response and not much error log for debugging. I can upload sample source code files if needed. ---------- components: Library (Lib) messages: 399423 nosy: xiaolingbao priority: normal severity: normal status: open title: HTTP request handler should check sys.stderr for None before use for logging versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 20:13:38 2021 From: report at bugs.python.org (Irit Katriel) Date: Thu, 12 Aug 2021 00:13:38 +0000 Subject: [New-bugs-announce] [issue44895] refleak test failure in test_exceptions Message-ID: <1628727218.93.0.270541838756.issue44895@roundup.psfhosted.org> New submission from Irit Katriel : For background see https://bugs.python.org/issue33930#msg399403 iritkatriel at Irits-MBP cpython % repeat 10 ./python.exe -m test -R 3:3 test_exceptions -m test_no_hang_on_context_chain_cycle2 -m test_recursion_normalizing_infinite_exception -m test_recursion_in_except_handler -m test_recursion_normalizing_with_no_memory 0:00:00 load avg: 2.32 Run tests sequentially 0:00:00 load avg: 2.32 [1/1] test_exceptions beginning 6 repetitions 123456 ...... == Tests result: SUCCESS == 1 test OK. Total duration: 5.9 sec Tests result: SUCCESS 0:00:00 load avg: 2.22 Run tests sequentially 0:00:00 load avg: 2.22 [1/1] test_exceptions beginning 6 repetitions 123456 ...... == Tests result: SUCCESS == 1 test OK. Total duration: 5.8 sec Tests result: SUCCESS 0:00:00 load avg: 2.20 Run tests sequentially 0:00:00 load avg: 2.20 [1/1] test_exceptions beginning 6 repetitions 123456 ...... == Tests result: SUCCESS == 1 test OK. Total duration: 5.8 sec Tests result: SUCCESS 0:00:00 load avg: 2.17 Run tests sequentially 0:00:00 load avg: 2.17 [1/1] test_exceptions beginning 6 repetitions 123456 ...... test_exceptions leaked [6, 6, 4] references, sum=16 test_exceptions leaked [6, 6, 4] memory blocks, sum=16 test_exceptions failed (reference leak) == Tests result: FAILURE == 1 test failed: test_exceptions 1 re-run test: test_exceptions Total duration: 5.9 sec Tests result: FAILURE 0:00:00 load avg: 2.08 Run tests sequentially 0:00:00 load avg: 2.08 [1/1] test_exceptions beginning 6 repetitions 123456 ...... test_exceptions leaked [6, 6, 6] references, sum=18 test_exceptions leaked [6, 6, 6] memory blocks, sum=18 test_exceptions failed (reference leak) == Tests result: FAILURE == 1 test failed: test_exceptions 1 re-run test: test_exceptions Total duration: 5.8 sec Tests result: FAILURE 0:00:00 load avg: 2.39 Run tests sequentially 0:00:00 load avg: 2.39 [1/1] test_exceptions beginning 6 repetitions 123456 ...... test_exceptions leaked [6, 6, 6] references, sum=18 test_exceptions leaked [6, 6, 6] memory blocks, sum=18 test_exceptions failed (reference leak) == Tests result: FAILURE == 1 test failed: test_exceptions 1 re-run test: test_exceptions Total duration: 6.0 sec Tests result: FAILURE 0:00:00 load avg: 2.36 Run tests sequentially 0:00:00 load avg: 2.36 [1/1] test_exceptions beginning 6 repetitions 123456 ...... 
test_exceptions leaked [6, 6, 6] references, sum=18 test_exceptions leaked [6, 6, 6] memory blocks, sum=18 test_exceptions failed (reference leak) == Tests result: FAILURE == 1 test failed: test_exceptions 1 re-run test: test_exceptions Total duration: 6.0 sec Tests result: FAILURE 0:00:00 load avg: 2.31 Run tests sequentially 0:00:00 load avg: 2.31 [1/1] test_exceptions beginning 6 repetitions 123456 ...... == Tests result: SUCCESS == 1 test OK. Total duration: 6.3 sec Tests result: SUCCESS 0:00:00 load avg: 2.20 Run tests sequentially 0:00:00 load avg: 2.20 [1/1] test_exceptions beginning 6 repetitions 123456 ...... test_exceptions leaked [6, 6, 6] references, sum=18 test_exceptions leaked [6, 6, 6] memory blocks, sum=18 test_exceptions failed (reference leak) == Tests result: FAILURE == 1 test failed: test_exceptions 1 re-run test: test_exceptions Total duration: 6.1 sec Tests result: FAILURE 0:00:00 load avg: 2.35 Run tests sequentially 0:00:00 load avg: 2.35 [1/1] test_exceptions ---------- components: Interpreter Core messages: 399424 nosy: iritkatriel priority: normal severity: normal status: open title: refleak test failure in test_exceptions versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 11 21:26:20 2021 From: report at bugs.python.org (Kai Xia) Date: Thu, 12 Aug 2021 01:26:20 +0000 Subject: [New-bugs-announce] [issue44896] Issue with unparse in ast module Message-ID: <1628731580.95.0.847749574283.issue44896@roundup.psfhosted.org> New submission from Kai Xia : I was trying to construct an ast object dynamically and I think I can identify some potential issue. With the following snippet: ``` #!/usr/bin/env python3 import ast import sys print(sys.version) good = ast.Assign( targets=[ast.Name(id="hello", ctx=ast.Store())], value=ast.Constant(value="world"), lineno=1 ) print(ast.unparse(good)) bad = ast.Assign( targets=[ast.Name(id="hello", ctx=ast.Store())], value=ast.Constant(value="world"), ) print(ast.unparse(bad)) ``` On my box the output looks like: ``` 3.9.6 (default, Jun 29 2021, 05:25:02) [Clang 12.0.5 (clang-1205.0.22.9)] hello = 'world' Traceback (most recent call last): File "/Users/xiaket/py.py", line 19, in print(ast.unparse(bad)) File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ast.py", line 1572, in unparse return unparser.visit(ast_obj) File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ast.py", line 801, in visit self.traverse(node) File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ast.py", line 795, in traverse super().visit(node) File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ast.py", line 407, in visit return visitor(node) File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ast.py", line 858, in visit_Assign if type_comment := self.get_type_comment(node): File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ast.py", line 786, in get_type_comment comment = self._type_ignores.get(node.lineno) or node.type_comment AttributeError: 'Assign' object has no attribute 'lineno' ``` As I can understand, when we need to construct the Assign object, we'll need to provide two keyword arguments, targets and value. 
We don't need to provide the `lineno` as it should be an attribute of the statement node. Also, if we don't run `unparse` against the object, apparently it works fine. I think in the `get_type_comment` method, we are making the assumption that the lineno is set automatically, this is true when we are parsing python source code as string. But when we are creating the object from scratch, we don't have that `lineno` attribute and it will fail. ---------- components: Library (Lib) messages: 399427 nosy: xiaket priority: normal severity: normal status: open title: Issue with unparse in ast module type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 02:12:33 2021 From: report at bugs.python.org (Neil Schemenauer) Date: Thu, 12 Aug 2021 06:12:33 +0000 Subject: [New-bugs-announce] [issue44897] Integrate trashcan mechanism into _Py_Dealloc Message-ID: <1628748753.21.0.569730676893.issue44897@roundup.psfhosted.org> New submission from Neil Schemenauer : This is a WIP/proof-of-concept of doing away with Py_TRASHCAN_BEGIN and Py_TRASHCAN_END and instead integrating the functionality into _Py_Dealloc. There are a few advantages: - all container objects have the risk of overflowing the C stack if a long reference chain of them is created and then deallocated. So, to be safe, the tp_dealloc methods for those objects should be protected from overflowing the stack. - the Py_TRASHCAN_BEGIN and Py_TRASHCAN_END macros are hard to understand and a bit hard to use correctly. Making the mechanism internal avoids confusion. The code can be slightly simplified as well. This proof-of-concept seems to pass tests but it will need some careful review. The exact rules related to calling GC Track/Untrack are subtle and this changes things a bit. I.e. tp_dealloc is called with GC objects already untracked. For 3rd party extensions, they are calling PyObject_GC_UnTrack() and so I believe they should still work. The fact that PyObject_CallFinalizerFromDealloc() wants GC objects to definitely be tracked is a bit of a mystery to me (there is an assert to check that). I changed the code to track objects if they are not already tracked but I'm not sure that's correct. There could be a performance hit, due to the _PyType_IS_GC() test that was added to the _Py_Dealloc() function. For non-GC objects, that's going to be a new branch and I'm worried it might hurt a bit. OTOH, maybe it's just in the noise. Profiling will need to be done. 
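For context, a small sketch of the deallocation pattern the trashcan exists for: a long chain of containers whose destructors would otherwise recurse on the C stack when the head of the chain is released (the chain length below is an arbitrary illustration):

```python
# Build a deeply nested chain of lists; dropping the last reference triggers
# a cascade of list deallocations that the trashcan flattens into a bounded loop.
head = None
for _ in range(200_000):
    head = [head]
del head  # without the trashcan mechanism this cascade could overflow the C stack
```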
---------- components: Interpreter Core messages: 399433 nosy: nascheme priority: normal severity: normal stage: patch review status: open title: Integrate trashcan mechanism into _Py_Dealloc type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 05:45:35 2021 From: report at bugs.python.org (russiavk) Date: Thu, 12 Aug 2021 09:45:35 +0000 Subject: [New-bugs-announce] [issue44898] Path.read_bytes() failed when path contains chinese character Message-ID: <1628761535.09.0.889249928246.issue44898@roundup.psfhosted.org> New submission from russiavk : Path.read_bytes() failed when this path contains chinese character ---------- components: IO messages: 399435 nosy: russiavk priority: normal severity: normal status: open title: Path.read_bytes() failed when path contains chinese character type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 07:43:12 2021 From: report at bugs.python.org (Marko Tuononen) Date: Thu, 12 Aug 2021 11:43:12 +0000 Subject: [New-bugs-announce] [issue44899] tarfile: add support for creating an archive of potentially changing files Message-ID: <1628768592.88.0.0306570478346.issue44899@roundup.psfhosted.org> New submission from Marko Tuononen : I have a use case where I need to create a tar archive from a collection of potentially changing files. I need to use system resources sparingly and because of that it is not possible to first make a copy of the files. Current state of the tarfile library: Creating a tar archive is interrupted with an OSError "unexpected end of data" (example below), if any of the files changes when it is collected. Using the tarfile library in streaming mode does not work either. You might find this bug report relevant: https://bugs.python.org/issue26877 File "/usr/lib64/python3.7/tarfile.py", line 1946, in add self.addfile(tarinfo, f) File "/usr/lib64/python3.7/tarfile.py", line 1974, in addfile copyfileobj(fileobj, self.fileobj, tarinfo.size, bufsize=bufsize) File "/usr/lib64/python3.7/tarfile.py", line 249, in copyfileobj raise exception("unexpected end of data") OSError: unexpected end of data Target state of the tarfile library: Creating a tar archive is not interrupted even if a file changes while collected. The tarfile library's add() method would just return an exit value indicating that some files were changed while being archived. See e.g. how GNU tar handles similar situation: https://man7.org/linux/man-pages/man1/tar.1.html#RETURN_VALUE ---------- components: Library (Lib) messages: 399443 nosy: marko-tuononen priority: normal severity: normal status: open title: tarfile: add support for creating an archive of potentially changing files type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 07:44:41 2021 From: report at bugs.python.org (Mark Shannon) Date: Thu, 12 Aug 2021 11:44:41 +0000 Subject: [New-bugs-announce] [issue44900] Implement superinstructions Message-ID: <1628768681.09.0.627092423081.issue44900@roundup.psfhosted.org> New submission from Mark Shannon : PEP 659 quickening provides a mechanism for replacing instructions. We should exploit this to implement superinstructions when quickening. 
See https://github.com/faster-cpython/ideas/issues/16 ---------- messages: 399444 nosy: Mark.Shannon priority: normal severity: normal status: open title: Implement superinstructions type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 09:33:02 2021 From: report at bugs.python.org (Christian Buhtz) Date: Thu, 12 Aug 2021 13:33:02 +0000 Subject: [New-bugs-announce] [issue44901] Info about used pickle protocol used by multiprocessing.Queue Message-ID: <1628775182.23.0.332446987158.issue44901@roundup.psfhosted.org> New submission from Christian Buhtz : I read some of the PEPs about pickeling. But I would not say that I understood everything. Of course I checked the docu about multiprocessing.Queue. Currently it is not clear for me which pickle protocol is used by multiprocessing.Queue. Maybe I missed something in the docu or the docu can be improved? - Is there a fixed default - maybe different between the Python versions? - Or is the pickle protocol version dynamicly selected depending on the kind/type/size of data put() into the Queue? Is there a way to find out at runtime which protocol version is used for a specific Queue instance with a specific piece of data? Background: I use Python 3.7 and 3.9 with Pandas 1.3.5. I parallelize work with hugh(?) pandas.DataFrame objects. I simply cut them into pieces (on row axis) which number is limited to the machines CPU cores (minus 1). The cutting happens several times in my sripts because for some things I need the data as one complete DataFrame. Just for example here is one of such pieces which is given to a worker by argument and send back via Queue - 7 workers! RangeIndex: 226687 entries, 0 to 226686 Data columns (total 38 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 HASH_ .... .... 37 NAME_ORG 226687 non-null object dtypes: datetime64[ns](6), float64(1), int64(1), object(30) memory usage: 65.7+ MB I am a bit "scared" that Python wasting my CPU time and does some compression on that data. ;) I just want to get a better idea what is done in the background. ---------- messages: 399447 nosy: buhtz priority: normal severity: normal status: open title: Info about used pickle protocol used by multiprocessing.Queue versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 09:41:00 2021 From: report at bugs.python.org (meowmeowcat) Date: Thu, 12 Aug 2021 13:41:00 +0000 Subject: [New-bugs-announce] [issue44902] [Doc] Changing 'Mac OS X'/'OS X' to 'macOS' Message-ID: <1628775660.76.0.290058998171.issue44902@roundup.psfhosted.org> New submission from meowmeowcat : Changing 'Mac OS X'/'OS X' to 'macOS' in docs. https://www.python.org has already changed to 'macOS'. ---------- assignee: docs at python components: Documentation messages: 399448 nosy: docs at python, meowmeowmeowcat priority: normal severity: normal status: open title: [Doc] Changing 'Mac OS X'/'OS X' to 'macOS' type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 11:22:38 2021 From: report at bugs.python.org (PySimpleGUI) Date: Thu, 12 Aug 2021 15:22:38 +0000 Subject: [New-bugs-announce] [issue44903] [Doc] How does one to about getting onto the "Other Graphical User Interface Packages" page? 
Message-ID: <1628781758.89.0.888842647561.issue44903@roundup.psfhosted.org> New submission from PySimpleGUI : Some time ago I noticed that the Python documentation has a list of GUI packages that are not part of the Python standard library. https://docs.python.org/3/library/othergui.html The title of the page I'm talking about says: Other Graphical User Interface Packages Major cross-platform (Windows, macOS, Unix-like) GUI toolkits are available for Python: What are the criteria for being listed? PySimpleGUI has more monthly installs than Kivy & WxPython. They're not due to being bundled with a Linux distribution. There is another page with a longer list: https://docs.python.org/3/faq/gui.html ---------- assignee: docs at python components: Documentation messages: 399465 nosy: PySimpleGUI, docs at python priority: normal severity: normal status: open title: [Doc] How does one to about getting onto the "Other Graphical User Interface Packages" page? versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 17:16:41 2021 From: report at bugs.python.org (Tomasz Rzepecki) Date: Thu, 12 Aug 2021 21:16:41 +0000 Subject: [New-bugs-announce] [issue44904] Erroneous behaviour for abstract class properties Message-ID: <1628803001.12.0.882298478545.issue44904@roundup.psfhosted.org> New submission from Tomasz Rzepecki : Subclassing an abc with an abstract class property yields to unexpected behaviour: the class property is called, and an abstract class may be erroneously considered concrete. See https://stackoverflow.com/a/68763572/4434666 for details. ---------- files: bug_report.py messages: 399482 nosy: rzepecki.t priority: normal severity: normal status: open title: Erroneous behaviour for abstract class properties type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50212/bug_report.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 19:48:29 2021 From: report at bugs.python.org (Tomasz Rzepecki) Date: Thu, 12 Aug 2021 23:48:29 +0000 Subject: [New-bugs-announce] [issue44905] Abstract instance and class attributes for abstract base classes Message-ID: <1628812109.02.0.042736265222.issue44905@roundup.psfhosted.org> New submission from Tomasz Rzepecki : There seems to be no way to transparently make an abstract base class enforce instance attributes for subclasses (without creating a custom metaclass, see e.g. https://newbedev.com/python-abstract-class-shall-force-derived-classes-to-initialize-variable-in-init). The analogous problem for enforcing *class* attributes in subclasses can be solved by creating an abstract class property (which can then be overridden by a class attribute), but this feels like a hack and possibly a bug (see https://bugs.python.org/issue44904 for a related bug). The corresponding "solution" for instance attributes does not work (see attached file), and probably rightly so. This seems like an oversight to me. 
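One commonly used (non-transparent) workaround for the instance-attribute case, sketched here with illustrative names: an abstract property keeps the base class uninstantiable until a subclass shadows it, for example with a plain attribute set in __init__:

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @property
    @abstractmethod
    def x(self):
        ...

class Concrete(Base):
    x = None  # shadow the abstract property so instances can assign to x

    def __init__(self, x):
        self.x = x

print(Concrete(1).x)   # 1
# Base() would raise TypeError: can't instantiate abstract class Base
```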
---------- files: example.py messages: 399486 nosy: rzepecki.t priority: normal severity: normal status: open title: Abstract instance and class attributes for abstract base classes type: enhancement versions: Python 3.9 Added file: https://bugs.python.org/file50213/example.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 12 20:17:17 2021 From: report at bugs.python.org (Alejandro Reimondo) Date: Fri, 13 Aug 2021 00:17:17 +0000 Subject: [New-bugs-announce] [issue44906] Crash on deep call stack under Windows Message-ID: <1628813837.02.0.46640078544.issue44906@roundup.psfhosted.org> New submission from Alejandro Reimondo : The py8.py file starts a S8 system, a Smalltalk system running on Python runtime, I am actually developing (in Beta). The system is running w/o problems on OSX systems, but crash (fast exit w/o any information) when running on Windows. The crash occurs while compiling a simple expression (simple but produce a deep recursion on parsing stage). The expression is shown in "fileMeIn.st". The issue happens on Windows and python version Python 3.9.2 The stack depth is aprox 1800 frames. Steps to reproduce the crash: 1.- decompress the zip file in a folder 2.- on command prompt "python -i py8.py" 3.- "Image loaded" must be shown in console 4.- evaluate "t()" to run tests that fileIn the code in "fileMeIn.st" 5.- after aprox. one minute working, the fast exit occurs and the Python VM exits without reporting anything on output ---------- components: Windows files: crashWin3.9-2021-08-12.zip messages: 399487 nosy: aleReimondo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Crash on deep call stack under Windows type: crash versions: Python 3.9 Added file: https://bugs.python.org/file50214/crashWin3.9-2021-08-12.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 13 02:32:57 2021 From: report at bugs.python.org (=?utf-8?b?5p2o6Z2S?=) Date: Fri, 13 Aug 2021 06:32:57 +0000 Subject: [New-bugs-announce] [issue44907] examples code output do not macth the current version 3.9 Message-ID: <1628836377.64.0.917738508338.issue44907@roundup.psfhosted.org> New submission from ?? : https://docs.python.org/3/tutorial/controlflow.html I got like this following: TypeError: function() got multiple values for argument 'a' not: TypeError: function() got multiple values for keyword argument 'a' >>> def function(a): ... pass ... 
>>> function(0, a=0) Traceback (most recent call last): File "", line 1, in TypeError: function() got multiple values for keyword argument 'a' ---------- assignee: docs at python components: Documentation messages: 399493 nosy: docs at python, yangqing priority: normal severity: normal status: open title: examples code output do not macth the current version 3.9 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 13 02:41:10 2021 From: report at bugs.python.org (Thomas Grainger) Date: Fri, 13 Aug 2021 06:41:10 +0000 Subject: [New-bugs-announce] [issue44908] recommend httpx as well as requests in http.client/urllib.request docs Message-ID: <1628836870.78.0.245525781075.issue44908@roundup.psfhosted.org> New submission from Thomas Grainger : HTTPX is a fully featured HTTP client for Python 3, which provides sync and async APIs, and support for both HTTP/1.1 and HTTP/2. It's also broadly compatible and inspired by the requests API: https://github.com/encode/httpx/blob/master/docs/compatibility.md#requests-compatibility-guide Currently the project is looking to get a link from the docs to https://www.python-httpx.org/ here's the upstream issue https://github.com/encode/httpx/issues/1772 ---------- assignee: docs at python components: Documentation messages: 399494 nosy: docs at python, graingert priority: normal severity: normal status: open title: recommend httpx as well as requests in http.client/urllib.request docs versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 13 03:31:29 2021 From: report at bugs.python.org (Paul Menzel) Date: Fri, 13 Aug 2021 07:31:29 +0000 Subject: [New-bugs-announce] [issue44909] configure should pick /usr/bin/g++ automatically if present Message-ID: <1628839889.47.0.177188937689.issue44909@roundup.psfhosted.org> New submission from Paul Menzel : [copied from closed (out of date) issue https://bugs.python.org/issue25946] Reproduced with Python 3.9.6. ./configure` both prints `checking for g++... no` and WARNING: By default, distutils will build C++ extension modules with "g++". If this is not intended, then set CXX on the configure command line. if `/usr/bin/g++` is present and executable which doesn't seem to be constructive because it's quite common that one wants to use `/usr/bin/g++` as CXX compiler if available. In case incompatibilities exists with other C++ compilers there should a check and more detailed error message. Furthermore the error message doesn't explain if a part of distutils won't be build because the message sounds like the C++ extension is built, but does it work without a C++ compiler? Specifying `CXX` environment variable or `--with-cxx-main=/usr/bin/g++` `configure` option works fine. 
---------- components: Build messages: 399497 nosy: iritkatriel, krichter, pmenzel priority: normal severity: normal status: open title: configure should pick /usr/bin/g++ automatically if present versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 13 14:26:29 2021 From: report at bugs.python.org (A wilson) Date: Fri, 13 Aug 2021 18:26:29 +0000 Subject: [New-bugs-announce] [issue44910] Floating point issue Message-ID: <1628879189.27.0.388910502701.issue44910@roundup.psfhosted.org> New submission from A wilson : 0.01 + 273.15 should equal 273.16 but in python 3.9.5 or earlier report as 273.15999999999997. ---------- messages: 399550 nosy: afw2alan priority: normal severity: normal status: open title: Floating point issue type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 13 20:36:43 2021 From: report at bugs.python.org (Bar Harel) Date: Sat, 14 Aug 2021 00:36:43 +0000 Subject: [New-bugs-announce] [issue44911] Leaked tasks cause IsolatedAsyncioTestCase to throw an exception Message-ID: <1628901403.12.0.195708767229.issue44911@roundup.psfhosted.org> New submission from Bar Harel : Writing a test that leaks a running asyncio task will cause IsolatedAsyncioTestCase to crash while attempting to cancel. Seems like the loop argument wasn't removed from the usage of asyncio.gather() in IsolatedAsyncioTestCase._tearDownAsyncioLoop Pushing a fix as we speak ---------- components: Library (Lib) messages: 399577 nosy: bar.harel priority: normal severity: normal status: open title: Leaked tasks cause IsolatedAsyncioTestCase to throw an exception type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 14 01:59:40 2021 From: report at bugs.python.org (Ma Lin) Date: Sat, 14 Aug 2021 05:59:40 +0000 Subject: [New-bugs-announce] [issue44912] doc: macOS supports os.fsync(fd) Message-ID: <1628920780.23.0.904216312785.issue44912@roundup.psfhosted.org> New submission from Ma Lin : The doc of os.fsync() said: Availability: Unix, Windows. https://docs.python.org/3.11/library/os.html#os.fsync But it seems that macOS supports fsync. (I'm not a macOS user) ---------- assignee: docs at python components: Documentation, macOS messages: 399583 nosy: docs at python, malin, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: doc: macOS supports os.fsync(fd) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 14 10:28:01 2021 From: report at bugs.python.org (Madhu) Date: Sat, 14 Aug 2021 14:28:01 +0000 Subject: [New-bugs-announce] [issue44913] segfault in call to embedded PyModule_New Message-ID: <1628951281.11.0.836111617554.issue44913@roundup.psfhosted.org> New submission from Madhu : Attached zip file has a test case which illustrate the problem: A python process (`dload.py') loads up a shared library (`libfoo1.so') and makes a call to a foreign function `foo'. `foo' Initializes Python and creates makes a call to PyModule_New at which point dload.py crashes. If the calling process is not python(`dload.c'), there is no crash This sort of situation occurs with python-pam. 
I'm not sure if this is a programmer error and would welcome correction [I'm supplying a zip file because I can't attach multiple files Steps to repeat 1. compile libfoo1.so according to comment 2. Run ./dload.py 3. Optionally compile and run dload.c ---------- components: Extension Modules files: test-case-embedded-1.zip messages: 399591 nosy: enometh priority: normal severity: normal status: open title: segfault in call to embedded PyModule_New versions: Python 3.9 Added file: https://bugs.python.org/file50217/test-case-embedded-1.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 14 11:17:09 2021 From: report at bugs.python.org (Ken Jin) Date: Sat, 14 Aug 2021 15:17:09 +0000 Subject: [New-bugs-announce] [issue44914] tp_version_tag is not unique when test runs with -R : Message-ID: <1628954229.73.0.911721360588.issue44914@roundup.psfhosted.org> New submission from Ken Jin : tp_version_tag is supposed to be unique for different class objects. Under normal circumstances, everything works properly: def good(): class C: def __init__(self): # Just to force `tp_version_tag` to update pass cls_id = hex(id(C)) tp_version_tag_before = C.v # v is tp_version_tag of C, exposed to Python x = C() # tp_new requires a _PyType_Lookup for `__init__`, updating `tp_version_tag` tp_version_tag_after = C.v print(f'C ID: {cls_id}, before: {tp_version_tag_before} after: {tp_version_tag_after}') for _ in range(100): good() Result: C ID: 0x2920c2d58d0, before: 0 after: 115 C ID: 0x2920c2d6170, before: 0 after: 116 C ID: 0x2920c2d65c0, before: 0 after: 117 C ID: 0x2920c8f2800, before: 0 after: 118 C ID: 0x2920c8f7150, before: 0 after: 119 C ID: 0x2920c8f6010, before: 0 after: 120 C ID: 0x2920c8f6460, before: 0 after: 121 C ID: 0x2920c8f3d90, before: 0 after: 122 C ID: 0x2920c8f0e20, before: 0 after: 123 C ID: 0x2920c8f41e0, before: 0 after: 124 C ID: 0x2920c8f4a80, before: 0 after: 125 C ID: 0x2920c8f1270, before: 0 after: 126 C ID: 0x2920c8f16c0, before: 0 after: 127 C ID: 0x2920c8f34f0, before: 0 after: 128 C ID: 0x2920c8f5770, before: 0 after: 129 C ID: 0x2920c8f30a0, before: 0 after: 130 ... However, wrapping in a unittest and run under -R : suddenly changes things: class BadTest(unittest.TestCase): def test_bad(self): class C: def __init__(self): pass cls_id = hex(id(C)) tp_version_tag_before = C.v x = C() tp_version_tag_after = C.v print(f'C ID: {cls_id}, before: {tp_version_tag_before} after: {tp_version_tag_after}') Result: "python_d.exe" -m test test_bad -R 10:10 C ID: 0x1c4c59354b0, before: 0 after: 78 .C ID: 0x1c4c59372e0, before: 0 after: 82 .C ID: 0x1c4c5934370, before: 0 after: 82 .C ID: 0x1c4c5934370, before: 0 after: 82 .C ID: 0x1c4c5933680, before: 0 after: 82 .C ID: 0x1c4c5938cc0, before: 0 after: 82 .C ID: 0x1c4c59354b0, before: 0 after: 82 .C ID: 0x1c4c5935900, before: 0 after: 82 .C ID: 0x1c4c5933680, before: 0 after: 82 .C ID: 0x1c4c59354b0, before: 0 after: 82 .C ID: 0x1c4c59354b0, before: 0 after: 82 .C ID: 0x1c4c59361a0, before: 0 after: 82 .C ID: 0x1c4c5933680, before: 0 after: 82 .C ID: 0x1c4c5931400, before: 0 after: 82 .C ID: 0x1c4c5938cc0, before: 0 after: 82 .C ID: 0x1c4c5938cc0, before: 0 after: 82 .C ID: 0x1c4c5933680, before: 0 after: 82 .C ID: 0x1c4c5936a40, before: 0 after: 82 .C ID: 0x1c4c5931400, before: 0 after: 82 .C ID: 0x1c4c5935900, before: 0 after: 82 Somehow the class is occasionally occupying the same address, and tp_version_tag didn't update properly. 
tp_version_tag being unique is an important invariant required for LOAD_ATTR and LOAD_METHOD specialization. I bumped into this problem after LOAD_METHOD specialization kept failing magically in test_descr. I think this is related to issue43636 and issue43452, but I ran out of time to bisect after spending a day chasing this down. I'll try to bisect soon. ---------- components: Interpreter Core messages: 399594 nosy: Mark.Shannon, kj, pablogsal, vstinner priority: normal severity: normal status: open title: tp_version_tag is not unique when test runs with -R : versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 14 14:28:18 2021 From: report at bugs.python.org (Doug Hoskisson) Date: Sat, 14 Aug 2021 18:28:18 +0000 Subject: [New-bugs-announce] [issue44915] Python keywords as string keys in TypedDict Message-ID: <1628965698.89.0.982806951929.issue44915@roundup.psfhosted.org> New submission from Doug Hoskisson : I'm running into an issue with the syntax of https://www.python.org/dev/peps/pep-0589/ ``` class C(TypedDict): to: int from: int SyntaxError: invalid syntax ``` I'm not sure any change needs to be made to the specification. But the interpreter needs to recognize that `from` is a string key to a `TypedDict`, not the keyword `from`. Or if you don't want to have to recognize `from` as a string instead of a keyword, we need a specification that allows us to put keywords as keys in `TypedDict`. I was thinking maybe something like: ``` class C(TypedDict): "to": int "from": int ``` as an optional way to write the same thing. ---------- messages: 399595 nosy: Doug Hoskisson priority: normal severity: normal status: open title: Python keywords as string keys in TypedDict type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 14 16:22:30 2021 From: report at bugs.python.org (George) Date: Sat, 14 Aug 2021 20:22:30 +0000 Subject: [New-bugs-announce] [issue44916] Undefined/random behaviour when importing two modules with the same name but different source files Message-ID: <1628972550.44.0.0656223373352.issue44916@roundup.psfhosted.org> New submission from George : Warning: There's a higher probability this is "expected" undefined behaviour or not even undefined and I'm just a moron. In addtion, I couldn't actually replicate it outside of the specific context it happened. But if it sounds plausible and it's something that shouldn't happen I can spend more time trying to replicate. 1. In two different python processes I'm "dynamically" creating a module named `M` using a file `m1.py` that contains a class `C`. Then I create an object of tpye `C` and pickle it. (let's call this object `c1`) 2. In a different thread I do the exact same thing, but the file is `m2.py` then I create an object of type `C` and pickle it. (call this one `c2`) 3. Then, in the same thread, I recreate the module named `M` from `m1.py` and unpickle `c1`, second I create a module named `M` from `m2.py` (this doesn't cause an error) and unpickle `c2`. 4. This (spurprisingly?) seems to basically work fine in most cases. Except for one (and I can't find why it's special) where for some reason `c2` starts calling the methods from a class that's not it's own. 
In other words `c1` usually maps ot `M.C --> m1.py` and `c2` to `M.C --> m2.py` | But randomly `c2` will start looking up methods in `M.C --> m1.py`, or at least that's what stack traces & debuggers seem to indicate. The way I create the module `M` in all cases: ``` with open(`m1.py`, 'wb') as fp: fp.write(code.encode('utf-8')) spec = importlib.util.spec_from_file_location('M', fp.name) temp_module = importlib.util.module_from_spec(spec) sys.modules['M] = temp_module spec.loader.exec_module(temp_module) # Note: Same for the other module but using `m2.py`, the code I use here contains a class `C` in both cases ``` This seems, unexpected. I wouldn't expect the recreation to cause a crash, but I'd expect it to either override the previous `M` for all existing objects instantiated from that module in all cases, or in no cases... currently it seems that both modules stay loaded and lookups are made randomly. ---------- components: Interpreter Core messages: 399596 nosy: George3d6 priority: normal severity: normal status: open title: Undefined/random behaviour when importing two modules with the same name but different source files versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 14 18:17:05 2021 From: report at bugs.python.org (Irit Katriel) Date: Sat, 14 Aug 2021 22:17:05 +0000 Subject: [New-bugs-announce] [issue44917] interpreter hangs on recursion in both body and handler of a try block Message-ID: <1628979425.1.0.358414735966.issue44917@roundup.psfhosted.org> New submission from Irit Katriel : This was found while investigating issue44895. It may or may not be the cause of that issue. The script below hangs on a mac (it's an extract from test_exceptions.test_recursion_in_except_handler). ----------- import sys count = 0 def main(): def f(): global count count += 1 try: f() except RecursionError: f() sys.setrecursionlimit(30) try: f() except RecursionError: pass main() print(count) ----------- When I kill it the traceback shows it alternating between the two recursive calls, but not in a regular pattern: ... 
[snipped a lot] File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ [Previous line repeated 2 more times] RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of 
the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ [Previous line repeated 1 more time] RecursionError: maximum recursion depth exceeded During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/tt.py", line 22, in main() ^^^^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 18, in main f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ [Previous line repeated 10 more times] File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 11, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 13, in f f() ^^^ File "/Users/iritkatriel/src/cpython/tt.py", line 7, in f def f(): KeyboardInterrupt ---------- components: Interpreter Core messages: 399599 nosy: iritkatriel priority: normal severity: normal status: open title: interpreter hangs on recursion in both body and handler of a try block type: crash versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 15 06:36:20 2021 From: report at bugs.python.org (Mark Deen) Date: Sun, 15 Aug 2021 10:36:20 +0000 Subject: [New-bugs-announce] [issue44918] Unhandled Exception (Not Implemented) in HTMLParser().feed Message-ID: <1629023780.24.0.906396765279.issue44918@roundup.psfhosted.org> New submission from Mark Deen : The hexadecimal sequence '3c215b02634717' when passed as an argument to HTMLParser()'s feed function results in the exception noted below. The code example below illustrates this exception. 
from html.parser import HTMLParser parser = HTMLParser() parser.feed(bytearray.fromhex('3c215b02634717').decode('ascii')) Traceback (most recent call last): File "poc.py", line 5, in parser.feed(bytearray.fromhex('3c215b02634717').decode('ascii')) File "/usr/lib/python3.9/html/parser.py", line 110, in feed self.goahead(0) File "/usr/lib/python3.9/html/parser.py", line 178, in goahead k = self.parse_html_declaration(i) File "/usr/lib/python3.9/html/parser.py", line 263, in parse_html_declaration return self.parse_marked_section(i) File "/usr/lib/python3.9/_markupbase.py", line 149, in parse_marked_section sectName, j = self._scan_name( i+3, i ) File "/usr/lib/python3.9/_markupbase.py", line 390, in _scan_name self.error("expected name token at %r" File "/usr/lib/python3.9/_markupbase.py", line 33, in error raise NotImplementedError( NotImplementedError: subclasses of ParserBase must override error() ---------- components: Parser messages: 399611 nosy: lys.nikolaou, md103, pablogsal priority: normal severity: normal status: open title: Unhandled Exception (Not Implemented) in HTMLParser().feed type: security versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 15 08:23:36 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Sun, 15 Aug 2021 12:23:36 +0000 Subject: [New-bugs-announce] [issue44919] TypedDict subtypes ignore any other metaclasses in 3.9+ Message-ID: <1629030216.33.0.128972085225.issue44919@roundup.psfhosted.org> New submission from Nikita Sobolev : Some context. I have a `User` class defined as a `TypedDict`: ```python from typing import TypedDict class User(TypedDict): name: str registered: bool ``` Now, I want to check if some `dict` is an instance of `User` like so: `isinstance(my_dict, User)`. But, I can't. Because it raises `TypeError('TypedDict does not support instance and class checks')`. Ok. Let's change `__instancecheck__` method then. We can only do this in a metaclass: ```python from typing import _TypedDictMeta class UserDictMeta(_TypedDictMeta): def __instancecheck__(cls, arg: object) -> bool: return ( isinstance(arg, dict) and isinstance(arg.get('name'), str) and isinstance(arg.get('registered'), bool) ) class UserDict(User, metaclass=UserDictMeta): ... ``` It looks like it should work. It used to work like this in Python3.7 and Python3.8. But since Python3.9 it does not work. Let's try to debug what happens: ```python print(type(UserDict)) # print(UserDict.__instancecheck__(UserDict, {})) # TypeError: TypedDict does not support instance and class checks ``` It looks like my custom `UserDictMeta` is completely ignored. And I cannot change how `__instancecheck__` behaves. I suspect that the reason is in these 2 lines: https://github.com/python/cpython/blob/ad0a8a9c629a7a0fa306fbdf019be63c701a8028/Lib/typing.py#L2384-L2385 What's the most unclear in this behavior is that it does not match regular Python subclass patterns. Simplified example of the same behavior, using only primite types: ```python class FirstMeta(type): def __instancecheck__(cls, arg: object) -> bool: raise TypeError('You cannot use this type in isinstance() call') class First(object, metaclass=FirstMeta): ... # User space: class MyClass(First): # this looks like a user-define TypedDict subclass ... class MySubClassMeta(FirstMeta): def __instancecheck__(cls, arg: object) -> bool: return True # just an override example class MySubClass(MyClass, metaclass=MySubClassMeta): ... 
print(isinstance(1, MySubClass)) # True print(isinstance(1, MyClass)) # TypeError ``` As you can see our `MySubClassMeta` works perfectly fine this way. I suppose that this is a bug in `TypedDict`, not a desired behavior. Am I correct? ---------- components: Library (Lib) messages: 399615 nosy: sobolevn priority: normal severity: normal status: open title: TypedDict subtypes ignore any other metaclasses in 3.9+ type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 15 13:26:05 2021 From: report at bugs.python.org (Steve Simmons) Date: Sun, 15 Aug 2021 17:26:05 +0000 Subject: [New-bugs-announce] [issue44920] Support UUIDv6, UUIDv7, and UUIDv8 from the new version of RFC4122 Message-ID: <1629048365.97.0.101119142277.issue44920@roundup.psfhosted.org> New submission from Steve Simmons : Three new types of UUIDs have been proposed in the latest draft of the next version of RFC4122. Full text of that draft is in [1] (published 21 April 2021; draft period ends 21 Oct 2021). Support for these should be included in uuid.py for Python 3.11, with backport for 3.9 and 3.10. The timetable for Python 3.11 should fit with the end of the IETF draft period. Implementation should be similar to the existing UUID classes in uuid.py, the prototypes in [2], or even parts of my own uuid6 version [3]. [1] https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format [2] https://github.com/uuid6/prototypes/tree/main/python [3] https://github.com/stevesimmons/pyuuid6/blob/main/uuid6.py ---------- components: Library (Lib) messages: 399624 nosy: stevesimmons priority: normal severity: normal status: open title: Support UUIDv6, UUIDv7, and UUIDv8 from the new version of RFC4122 type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 15 16:10:26 2021 From: report at bugs.python.org (Marco Sulla) Date: Sun, 15 Aug 2021 20:10:26 +0000 Subject: [New-bugs-announce] [issue44921] dict subclassing is slow Message-ID: <1629058226.73.0.105626301201.issue44921@roundup.psfhosted.org> New submission from Marco Sulla : I asked on SO why subclassing dict makes the subclass much slower in some operations. This is the answer by Monica (https://stackoverflow.com/a/59914459/1763602): Indexing and in are slower in dict subclasses because of a bad interaction between a dict optimization and the logic subclasses use to inherit C slots. This should be fixable, though not from your end. The CPython implementation has two sets of hooks for operator overloads. There are Python-level methods like __contains__ and __getitem__, but there's also a separate set of slots for C function pointers in the memory layout of a type object. Usually, either the Python method will be a wrapper around the C implementation, or the C slot will contain a function that searches for and calls the Python method. It's more efficient for the C slot to implement the operation directly, as the C slot is what Python actually accesses. Mappings written in C implement the C slots sq_contains and mp_subscript to provide in and indexing. 
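As a quick illustration of the slowdown being described, here is a minimal timing sketch (an editor's example, not taken from the linked Stack Overflow post; the subclass name `D` is arbitrary):
```python
import timeit

class D(dict):
    """Trivial dict subclass; it adds nothing, yet lookups get slower."""
    pass

plain = {i: i for i in range(1000)}
sub = D(plain)

# `in` on the plain dict goes straight through the C sq_contains slot;
# on the subclass it goes through an MRO search for __contains__.
print("dict:    ", timeit.timeit(lambda: 500 in plain, number=1_000_000))
print("subclass:", timeit.timeit(lambda: 500 in sub, number=1_000_000))
```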
Ordinarily, the Python-level __contains__ and __getitem__ methods would be automatically generated as wrappers around the C functions, but the dict class has explicit implementations of __contains__ and __getitem__, because the explicit implementations (https://github.com/python/cpython/blob/v3.8.1/Objects/dictobject.c) are a bit faster than the generated wrappers: static PyMethodDef mapp_methods[] = { DICT___CONTAINS___METHODDEF {"__getitem__", (PyCFunction)(void(*)(void))dict_subscript, METH_O | METH_COEXIST, getitem__doc__}, ... (Actually, the explicit __getitem__ implementation is the same function as the mp_subscript implementation, just with a different kind of wrapper.) Ordinarily, a subclass would inherit its parent's implementations of C-level hooks like sq_contains and mp_subscript, and the subclass would be just as fast as the superclass. However, the logic in update_one_slot (https://github.com/python/cpython/blob/v3.8.1/Objects/typeobject.c#L7202) looks for the parent implementation by trying to find the generated wrapper methods through an MRO search. dict doesn't have generated wrappers for sq_contains and mp_subscript, because it provides explicit __contains__ and __getitem__ implementations. Instead of inheriting sq_contains and mp_subscript, update_one_slot ends up giving the subclass sq_contains and mp_subscript implementations that perform an MRO search for __contains__ and __getitem__ and call those. This is much less efficient than inheriting the C slots directly. Fixing this will require changes to the update_one_slot implementation. Aside from what I described above, dict_subscript also looks up __missing__ for dict subclasses, so fixing the slot inheritance issue won't make subclasses completely on par with dict itself for lookup speed, but it should get them a lot closer. As for pickling, on the dumps side, the pickle implementation has a dedicated fast path (https://github.com/python/cpython/blob/v3.8.1/Modules/_pickle.c#L4291) for dicts, while the dict subclass takes a more roundabout path through object.__reduce_ex__ and save_reduce. On the loads side, the time difference is mostly just from the extra opcodes and lookups to retrieve and instantiate the __main__.A class, while dicts have a dedicated pickle opcode for making a new dict. If we compare the disassembly for the pickles: In [26]: pickletools.dis(pickle.dumps({0: 0, 1: 1, 2: 2, 3: 3, 4: 4})) 0: \x80 PROTO 4 2: \x95 FRAME 25 11: } EMPTY_DICT 12: \x94 MEMOIZE (as 0) 13: ( MARK 14: K BININT1 0 16: K BININT1 0 18: K BININT1 1 20: K BININT1 1 22: K BININT1 2 24: K BININT1 2 26: K BININT1 3 28: K BININT1 3 30: K BININT1 4 32: K BININT1 4 34: u SETITEMS (MARK at 13) 35: . STOP highest protocol among opcodes = 4 In [27]: pickletools.dis(pickle.dumps(A({0: 0, 1: 1, 2: 2, 3: 3, 4: 4}))) 0: \x80 PROTO 4 2: \x95 FRAME 43 11: \x8c SHORT_BINUNICODE '__main__' 21: \x94 MEMOIZE (as 0) 22: \x8c SHORT_BINUNICODE 'A' 25: \x94 MEMOIZE (as 1) 26: \x93 STACK_GLOBAL 27: \x94 MEMOIZE (as 2) 28: ) EMPTY_TUPLE 29: \x81 NEWOBJ 30: \x94 MEMOIZE (as 3) 31: ( MARK 32: K BININT1 0 34: K BININT1 0 36: K BININT1 1 38: K BININT1 1 40: K BININT1 2 42: K BININT1 2 44: K BININT1 3 46: K BININT1 3 48: K BININT1 4 50: K BININT1 4 52: u SETITEMS (MARK at 31) 53: . STOP highest protocol among opcodes = 4 we see that the difference between the two is that the second pickle needs a whole bunch of opcodes to look up __main__.A and instantiate it, while the first pickle just does EMPTY_DICT to get an empty dict. 
After that, both pickles push the same keys and values onto the pickle operand stack and run SETITEMS ---------- components: C API messages: 399625 nosy: Marco Sulla priority: normal severity: normal status: open title: dict subclassing is slow type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 15 18:19:35 2021 From: report at bugs.python.org (Oleg Baskakov) Date: Sun, 15 Aug 2021 22:19:35 +0000 Subject: [New-bugs-announce] [issue44922] isinstance breaks on imported dataclasses Message-ID: <1629065975.65.0.328459121322.issue44922@roundup.psfhosted.org> New submission from Oleg Baskakov : Hey I was trying to import dataclasses from another file and somehow isinstance doesn't work anymore: main.py: ``` import codegen from dataclasses import dataclass @dataclass class AtomX: my_symbol: str quantity: str = "" codegen.inheritance_map(AtomX("qwerty")) ``` codegen.py: ``` from main import AtomX def inheritance_map(candidate): assert isinstance(candidate, AtomX) ``` PS the same code with `assert candidate.__class__.__name__ == "AtomX"` works fine ---- Python 3.9.6 (v3.9.6:db3ff76da1, Jun 28 2021, 11:49:53) [Clang 6.0 (clang-600.0.57)] on darwin I'm running inside of PyCharm ---------- messages: 399628 nosy: baskakov priority: normal severity: normal status: open title: isinstance breaks on imported dataclasses type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 05:52:33 2021 From: report at bugs.python.org (Troulet-lambert Odile) Date: Mon, 16 Aug 2021 09:52:33 +0000 Subject: [New-bugs-announce] [issue44923] Unittest incorrect result with argparse.ArgumentError in self asserRaises context Message-ID: <1629107553.76.0.661365207866.issue44923@roundup.psfhosted.org> New submission from Troulet-lambert Odile : When passed an Argparse.ArgumentError in the self.assertRaises context uniittest does not recognize the exception and raises an exception. As a consequence the test fails whereas it should pass ---------- components: Tests files: bug_unittest_exception.py messages: 399640 nosy: piscvau priority: normal severity: normal status: open title: Unittest incorrect result with argparse.ArgumentError in self asserRaises context type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file50222/bug_unittest_exception.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 06:11:14 2021 From: report at bugs.python.org (Mikael Koli) Date: Mon, 16 Aug 2021 10:11:14 +0000 Subject: [New-bugs-announce] [issue44924] logging.handlers.QueueHandler does not maintain the exc_text Message-ID: <1629108674.13.0.869133594401.issue44924@roundup.psfhosted.org> New submission from Mikael Koli : The reason why logging.handlers.QueueHandler does not maintain exc_text is obvious: def prepare(self, record): ... record = copy.copy(record) record.message = msg record.msg = msg record.args = None record.exc_info = None record.exc_text = None return record The record.exc_text is set to None. The reason for this is to prevent the exception text showing up multiple times to the message. See https://bugs.python.org/issue34334. However, there are a couple of situations this may cause a problem. 
First, it's not currently possible to format the exception of the record in a handler on the other side of the queue. Second, it's not possible to let the handler on the other side of the queue utilize exc_text. The default handlers do not behave in such a way but one could prefer to create their own handler that does so, such as log the records to a database with a column for the exception text. Possible solution: Don't override the record.msg and don't set the record.exc_text to None. I think it could be done simply: def prepare(self, record): ... record = copy.copy(record) record.message = msg # record.msg = msg record.args = None record.exc_info = None # record.exc_text = None return record This way one can format the record later again without multiple exception text showing up in the message. Doing so will fail the test 'test_logging.QueueHandlerTest.test_formatting' as this tests the record.msg is the same as record.message. This may cause issues if someone relies on record.msg. On the other hand, now other formatters and handlers down the line could use the exc_text attribute. I'm not sure if this is too breaking change or not. The failing test: def test_formatting(self): msg = self.next_message() levelname = logging.getLevelName(logging.WARNING) log_format_str = '{name} -> {levelname}: {message}' formatted_msg = log_format_str.format(name=self.name, levelname=levelname, message=msg) formatter = logging.Formatter(self.log_format) self.que_hdlr.setFormatter(formatter) self.que_logger.warning(msg) log_record = self.queue.get_nowait() self.assertEqual(formatted_msg, log_record.msg) # self.assertEqual(formatted_msg, log_record.message) I tested this issue with the following test (which is a pass with the current build): class QueueHandlerTest(BaseTest): def test_formatting_exc_text(self): formatter = logging.Formatter(self.log_format) self.que_hdlr.setFormatter(formatter) try: raise RuntimeError('deliberate mistake') except: self.que_logger.exception('failed', stack_info=True) log_record = self.queue.get_nowait() self.assertTrue(log_record.exc_text.startswith('Traceback (most recent ' 'call last):\n')) ---------- components: Library (Lib) messages: 399642 nosy: Miksus priority: normal severity: normal status: open title: logging.handlers.QueueHandler does not maintain the exc_text type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 09:22:03 2021 From: report at bugs.python.org (Jesse Rittner) Date: Mon, 16 Aug 2021 13:22:03 +0000 Subject: [New-bugs-announce] [issue44925] Confusing deprecation notice for typing.IO Message-ID: <1629120123.92.0.171925756948.issue44925@roundup.psfhosted.org> New submission from Jesse Rittner : The docs for typing.IO, typing.TextIO, and typing.BinaryIO include a confusing deprecation notice. https://docs.python.org/3/library/typing.html#typing.IO > Deprecated since version 3.8, will be removed in version 3.12: These types are also in the typing.io namespace, which was never supported by type checkers and will be removed. As per the discussion on https://github.com/python/typing/issues/834, this deprecation notice only refers to the typing.io package, which is confusing. It would be helpful to rephrase it for clarity. 
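A short snippet may make the distinction clearer (my reading of the notice and the linked discussion; whether a runtime warning is emitted depends on the Python version):
```python
from typing import IO, BinaryIO   # the types themselves -- not what is being removed

# The deprecation notice is only about this alternative access path:
from typing.io import TextIO      # the typing.io namespace, which is what is slated for removal
```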
---------- components: Library (Lib) messages: 399655 nosy: rittneje priority: normal severity: normal status: open title: Confusing deprecation notice for typing.IO type: enhancement versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 13:15:16 2021 From: report at bugs.python.org (Maximilian Hils) Date: Mon, 16 Aug 2021 17:15:16 +0000 Subject: [New-bugs-announce] [issue44926] typing.get_type_hints() raises for type aliases with forward references Message-ID: <1629134116.67.0.00330387851755.issue44926@roundup.psfhosted.org> New submission from Maximilian Hils : Someone reported this rather interesting issue where typing.get_type_hints crashes on type aliases with forward references. The original report is at https://github.com/mitmproxy/pdoc/issues/290. Here's an extended minimal example: foo.py: ``` import typing FooList1: typing.TypeAlias = list["Foo"] FooList2: typing.TypeAlias = typing.List["Foo"] class Foo: pass ``` bar.py: ``` import typing import foo def func1(x: foo.FooList1): pass def func2(x: foo.FooList2): pass print(typing.get_type_hints(func1)) # {'x': list['Foo']} print(typing.get_type_hints(func2)) # NameError: name 'Foo' is not defined. ``` Observations: 1. func1 doesn't crash, but also doesn't resolve the forward reference. I am not sure if this expected behavior. If it isn't, this should eventually run in the same problem as func2. 2. func2 crashes because "Foo" is evaluated in the context of bar.py (where class Foo is unknown) and not in the context of foo.py. ForwardRef._evaluate would somehow need to know in which context it was defined. #41249 (TypedDict inheritance doesn't work with get_type_hints) introduced ForwardRef.__forward_module__, which would be a logical place for that information. I'm not sure if it is a good idea to use __forward_module__ more widely. 3. This may end up as quite a bit of complexity for an edge case, I'm fine if it is considered wontfix. The reason I'm bringing it up is that PEP 613 (Explicit Type Aliases) decidedly allows forward references in type aliases. For the record, PEP 563 (postponed evaluations) does not change the outcome here. However, postponed evaluations often make it possible to avoid the forward references by declaring the aliases last. ---------- components: Library (Lib) messages: 399660 nosy: mhils priority: normal severity: normal status: open title: typing.get_type_hints() raises for type aliases with forward references type: crash versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 14:01:00 2021 From: report at bugs.python.org (AR) Date: Mon, 16 Aug 2021 18:01:00 +0000 Subject: [New-bugs-announce] [issue44927] [sqlite3] insert Message-ID: <1629136860.31.0.191643980598.issue44927@roundup.psfhosted.org> New submission from AR : I propose to add an insert method to a SQLite cursor object, believe this could greatly improve readability of python code when you have to do a lot of insert operations to a different tables. 
Currently we have to create a dictionary with SQL's for each table: insert_sql = {'form_a': """INSERT INTO table_a (field_a1, field_a2) VALUES(:f1, :f2)""", 'form_b': """INSERT INTO table_b (field_b1, field_b2, field_b3) VALUES(:f1, :f2, :f3), ....other SQL statements }""" or we may use version with unnamed parameters: insert_sql = {'form_a': """INSERT INTO table_a (field_a1, field_a2) VALUES(?, ?)""", 'form_b': """INSERT INTO table_b (field_b1, field_b2, field_b3) VALUES(?, ?, ?), ....other SQL statements }""" The first one is conveniently compatible with cx_Oracle bind variable syntax, rows that are inserted are essentially dictionary. As dictionary is a mutable type, some extra freedom during construction of the row is allowed. The second syntax(qmark style) is specific to SQLite, rows that are inserted should have tuple type (namedtuple for us to be able to extract field names at later stage). entries_dict = [{'field_a1': 100, 'field_a2': 'sample1'}, {'field_a1': 500, 'field_a2': 'sample2'}] DataRow = namedtuple('DataRow', ['field_a1', 'field_a2']) entries_namedtuple = [DataRow(101, 'sample3'), DataRow(505, 'sample4')] In order to do an insert, you have to use either execute, or executemany: cursor.executemany(insert_sql['form_a'], entries_dict) or cursor.execute(insert_sql['form_a'], entries_dict[0]) Now let's move towards future insert method of cursor. As a first step, lets create SQL statement on the fly: table_name = 'table_a' #in case of a list of dictionaries: sql = """INSERT INTO {} ({}) VALUES({})""".format(table_name, ', '.join([str(key) for key in entries_dict[0]]), ', '.join([':' + str(key) for key in entries_dict[0]])) #currently, to do an insert operation, we have to: cursor.executemany(sql, entries_dict) #in case of a list of namedtuples: sql = """INSERT INTO {} ({}) VALUES({})""".format(table_name, ', '.join([str(field) for field in entries_namedtuple[0]._fields]), ', '.join(['?' for field in entries_namedtuple[0]._fields])) #currently, to do an insert operation, we have to: cursor.executemany(sql, entries_namedtuple) Now back to the proposal of insert method with unified syntax (one/many and dict/namedtuple). Let's do a second step and add an Insert method to a Cursor. The idea is to provide this method with table name, extract column names from supplied dict/namedtuple and use SQL generators from above. Than we could replace old cursor.executemany syntax with: cursor.insert(table_name, entries_dict) or cursor.insert(table_name, entries_dict[0]) or cursor.insert(table_name, entries_tuple) Since we may insert all, or any row of two types, this looks even more pythonic than pymongo(MongoDB) approach: collection.insert_many(entries_dict) Actually, the fact that pymongo insert code is so much cleaner and concise drew my attention. Other aspects of that lib are totally different story. I do not propose to generalize, or to move towards ORM or pymongo way of doing things. The scope is limited - lets do a convenient insert. Simplified implementation could be like this: def insert(self, table_name, entries): if(type(entries) == list): # several records(rows) need to be inserted do_insert = self.executemany if(hasattr(entries[0], '_fields')): #NamedTuple sql = "INSERT INTO {} ({}) VALUES({})".format(table_name, ', '.join([str(field) for field in entries[0]._fields]), ', '.join(['?' 
for field in entries[0]._fields])) elif(type(entries[0] == dict)): #dict sql = "INSERT INTO {} ({}) VALUES({})".format(table_name, ', '.join([str(key) for key in entries[0]]), ', '.join([':' + str(key) for key in entries[0]])) else: #just one record(row) do_insert = self.execute if(hasattr(entries, '_fields')): #NamedTuple sql = "INSERT INTO {} ({}) VALUES({})".format(table_name, ', '.join([str(field) for field in entries._fields]), ', '.join(['?' for field in entries._fields])) elif(type(entries == dict)): sql = "INSERT INTO {} ({}) VALUES({})".format(table_name, ', '.join([str(key) for key in entries]), ', '.join([':' + str(key) for key in entries])) do_insert(sql, entries) If proposal is not feasible/doesn?t fit to a broad concept, I suggest to mention in documentation - list comprehension one-line SQL-generators (see above) - remind users who create list of dictionaries for bulk insert that a copy of the dict should be used. Otherwise all dicts inside a list would be the same entries_dict.append(entry_dict.copy()). Definitely, as namedtuple is immutable, no need for extra steps for a list of namedtuples. ---------- components: Extension Modules messages: 399663 nosy: AR priority: normal severity: normal status: open title: [sqlite3] insert type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 17:29:12 2021 From: report at bugs.python.org (Thomas Grainger) Date: Mon, 16 Aug 2021 21:29:12 +0000 Subject: [New-bugs-announce] [issue44928] async generator missing unawaited coroutine warning Message-ID: <1629149352.38.0.0667554084574.issue44928@roundup.psfhosted.org> New submission from Thomas Grainger : demo program: ``` def test_async_fn(): async def async_fn(): pass async_fn() def test_async_gen_fn(): async def agen_fn(): yield agen_fn().aclose() agen_fn().asend(None) test_async_fn() test_async_gen_fn() ``` output: ``` /home/graingert/projects/anyio/foo.py:5: RuntimeWarning: coroutine 'test_async_fn..async_fn' was never awaited async_fn() RuntimeWarning: Enable tracemalloc to get the object allocation traceback ``` expected: ``` /home/graingert/projects/anyio/foo.py:5: RuntimeWarning: coroutine 'test_async_fn..async_fn' was never awaited async_fn() RuntimeWarning: Enable tracemalloc to get the object allocation traceback /home/graingert/projects/anyio/foo.py:12: RuntimeWarning: coroutine '' was never awaited agen_fn().aclose() RuntimeWarning: Enable tracemalloc to get the object allocation traceback /home/graingert/projects/anyio/foo.py:13: RuntimeWarning: coroutine '' was never awaited agen_fn().asend(None) RuntimeWarning: Enable tracemalloc to get the object allocation traceback ``` ---------- components: Interpreter Core messages: 399684 nosy: graingert priority: normal severity: normal status: open title: async generator missing unawaited coroutine warning versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 18:03:59 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 16 Aug 2021 22:03:59 +0000 Subject: [New-bugs-announce] [issue44929] Some RegexFlag cannot be printed in the repr Message-ID: <1629151439.75.0.477829647633.issue44929@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : When printing some instance of RegexFlag **in the REPL** it fails to print: >>> import 
gc # This prints a ton of objects, including the bad enum RegexFlag one >>> gc.get_referrers(None) Traceback (most recent call last): File "", line 1, in File "/home/pablogsal/github/cpython/Lib/enum.py", line 1399, in global_flag_repr return "%x" % (module, cls_name, self._value_) ~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ TypeError: %x format: an integer is required, not str ---------- messages: 399688 nosy: ethan.furman, pablogsal priority: normal severity: normal status: open title: Some RegexFlag cannot be printed in the repr versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 16 23:11:23 2021 From: report at bugs.python.org (wang xuancong) Date: Tue, 17 Aug 2021 03:11:23 +0000 Subject: [New-bugs-announce] [issue44930] super-Matlab-style ranged list literal initialization Message-ID: <1629169883.67.0.683689066007.issue44930@roundup.psfhosted.org> New submission from wang xuancong : Different from Python 2, Python 3 has removed the capability to create a list from a range. In Python 2, we can use range(1,100,2) to create a list [1, 3, 5, ..., 99], but in Python 3, we can only use list(range(1,100,2)) or [*range(1,100,2)] where the latter is even slower. I would like to propose to use something like [1:100:2] to initialize a list, moreover, you can use [1:100:2, 1000:1200:5, 5000:6000, :10] to create a list of multiple segments of ranges, i.e., [1,3,5,...,99,1000,1005,1010,...,1195,5000,5001,5002,...,5999,0,1,2,...,9]. Ranged list creation is quite useful and is often used in multi-thread/multi-processing scheduling or tracked sorting. This is especially useful in deep learning where you want to shuffle the training data but keep track of their corresponding labels. In deep RNN, where every training instance has a different length, after shuffling/sorting, you also need to keep track of their corresponding lengths information and etc. Thanks! ---------- components: Interpreter Core messages: 399707 nosy: xuancong84 priority: normal severity: normal status: open title: super-Matlab-style ranged list literal initialization type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 02:53:46 2021 From: report at bugs.python.org (Jurjen N.E. Bos) Date: Tue, 17 Aug 2021 06:53:46 +0000 Subject: [New-bugs-announce] [issue44931] Add "bidimap" to collections library Message-ID: <1629183226.79.0.0263345990362.issue44931@roundup.psfhosted.org> New submission from Jurjen N.E. Bos : The Java class "BiDiMap" is very useful and doesn't seem to have an equivalent in the Python libraries. I wrote a proposed class that does just that. Here's a simple implementation, that could be used as a starting point. 
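The attached bidimap.py is not reproduced in this digest. Purely to illustrate the idea (this is a sketch of what a bidirectional map typically looks like, not the attached implementation):
```python
class BidiMap:
    """Minimal one-to-one mapping with lookup in both directions."""

    def __init__(self):
        self._fwd = {}   # key -> value
        self._inv = {}   # value -> key

    def __setitem__(self, key, value):
        # Remove any stale pairings so the mapping stays one-to-one.
        if key in self._fwd:
            del self._inv[self._fwd[key]]
        if value in self._inv:
            del self._fwd[self._inv[value]]
        self._fwd[key] = value
        self._inv[value] = key

    def __getitem__(self, key):
        return self._fwd[key]

    def inverse(self, value):
        return self._inv[value]
```
For example, `m = BidiMap(); m['a'] = 1` makes both `m['a']` and `m.inverse(1)` work.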
---------- files: bidimap.py hgrepos: 408 messages: 399710 nosy: jneb priority: normal severity: normal status: open title: Add "bidimap" to collections library Added file: https://bugs.python.org/file50225/bidimap.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 03:26:02 2021 From: report at bugs.python.org (theeshallnotknowethme) Date: Tue, 17 Aug 2021 07:26:02 +0000 Subject: [New-bugs-announce] [issue44932] `issubclass` and `isinstance` doesn't check for all 2nd argument types Message-ID: <1629185162.39.0.804391329839.issue44932@roundup.psfhosted.org> New submission from theeshallnotknowethme : When I tried using `isinstance` with a type (e.g. `bool`) as the 1st argument and a parameterized generic in a tuple (e.g. '(`bool`, `list[bool]`)') as the 2nd argument, it raised a `TypeError`, 'isinstance() argument 2 cannot be a parameterized generic'. But when I did the same thing in `issubclass`, it returned a boolean and did not raise any `TypeError`s. Using `isinstance` with an object as the 1st argument and the same tuple 2nd argument also returned a boolean, and did not raise any `TypeError`s. Is this expected behaviour, or should this be fixed? This was tested in Python 3.10.0rc1 on a 64-bit system. ---------- components: Tests, Windows messages: 399717 nosy: February291948, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: `issubclass` and `isinstance` doesn't check for all 2nd argument types type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 05:18:56 2021 From: report at bugs.python.org (tester) Date: Tue, 17 Aug 2021 09:18:56 +0000 Subject: [New-bugs-announce] [issue44933] python3.9-intel64 hardened runtime not enabled Message-ID: <1629191936.44.0.533137792291.issue44933@roundup.psfhosted.org> New submission from tester : When trying to build the python framework using the method below and then getting it notarized, you get the following error: "path": "munkitools-5.5.0.4362.pkg/munkitools_python.pkg Contents/Payload/usr/local/munki/Python.framework/Versions/3.9/bin/python3.9-intel64", "message": "The executable does not have the hardened runtime enabled.", https://github.com/munki/munki/blob/main/code/tools/build_python_framework.sh The package gets built using https://github.com/lifeunexpected/Scripts This issue happens on Python 3.9.5 and 3.9.6; earlier versions did not include python3.9-intel64. ---------- components: macOS messages: 399721 nosy: bettyrab, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: python3.9-intel64 hardened runtime not enabled type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 06:35:52 2021 From: report at bugs.python.org (Bastian Neuburger) Date: Tue, 17 Aug 2021 10:35:52 +0000 Subject: [New-bugs-announce] [issue44934] Windows installer: Append Python to PATH instead of prepending it Message-ID: <1629196552.09.0.837835076265.issue44934@roundup.psfhosted.org> New submission from Bastian Neuburger : Hi there, in our organization Python 3.9 is installed on Windows with the PrependPath option; as expected the Install and Scripts directories are prepended to PATH.
However if there are Python scripts with the same name as a system command (e.g. a script named ping.py vs. the included ping.exe), the Python script gets preferred if I run ping in cmd.exe or Powershell, which makes sense, since the Python script path is considered before e.g. C:\Windows\System32\. Is it possible to either change the option to append the Install and scripts directories to the systems PATH instead of prepending it or add an AppendPath option? I searched for a discussion why prepending was chosen instead of appending but I didn't find anything. ---------- components: Windows messages: 399737 nosy: bn_append, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows installer: Append Python to PATH instead of prepending it type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 09:51:47 2021 From: report at bugs.python.org (Jakub Kulik) Date: Tue, 17 Aug 2021 13:51:47 +0000 Subject: [New-bugs-announce] [issue44935] Solaris: enable posix_spawn in subprocess Message-ID: <1629208307.21.0.258573678386.issue44935@roundup.psfhosted.org> New submission from Jakub Kulik : Solaris also provides posix_spawn() syscall that can/should be used in the subprocess module to spawn new processes. ---------- components: Library (Lib) messages: 399750 nosy: kulikjak priority: normal severity: normal status: open title: Solaris: enable posix_spawn in subprocess versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 12:55:02 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 17 Aug 2021 16:55:02 +0000 Subject: [New-bugs-announce] [issue44936] test_concurrent_futures: test_cancel_futures_wait_false() and test_interpreter_shutdown() failed on GHA Windows x64 Message-ID: <1629219302.53.0.957750750211.issue44936@roundup.psfhosted.org> New submission from STINNER Victor : GitHub Action Windows x64: https://github.com/python/cpython/runs/3342514542 test_concurrent_futures failed when tests are run in parallel, but then passed then re-run in verbose mode. 
====================================================================== FAIL: test_cancel_futures_wait_false (test.test_concurrent_futures.ThreadPoolShutdownTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\a\cpython\cpython\lib\test\test_concurrent_futures.py", line 486, in test_cancel_futures_wait_false rc, out, err = assert_python_ok('-c', """if True: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\a\cpython\cpython\lib\test\support\script_helper.py", line 160, in assert_python_ok return _assert_python(True, *args, **env_vars) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\a\cpython\cpython\lib\test\support\script_helper.py", line 145, in _assert_python res.fail(cmd_line) ^^^^^^^^^^^^^^^^^^ File "D:\a\cpython\cpython\lib\test\support\script_helper.py", line 72, in fail raise AssertionError("Process return code is %d\n" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Process return code is 3221225477 command line: ['D:\\a\\cpython\\cpython\\PCbuild\\amd64\\python.exe', '-X', 'faulthandler', '-I', '-c', 'if True:\n from concurrent.futures import ThreadPoolExecutor\n from test.test_concurrent_futures import sleep_and_print\n if __name__ == "__main__":\n t = ThreadPoolExecutor()\n t.submit(sleep_and_print, .1, "apple")\n t.shutdown(wait=False, cancel_futures=True)\n '] stdout: --- apple --- stderr: --- --- ====================================================================== FAIL: test_interpreter_shutdown (test.test_concurrent_futures.ThreadPoolShutdownTest) 0:01:59 load avg: 5.63 [100/428/1] test_lltrace passed -- running: test_regrtest (40.5 sec) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\a\cpython\cpython\lib\test\test_concurrent_futures.py", line 307, in test_interpreter_shutdown rc, out, err = assert_python_ok('-c', """if 1: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\a\cpython\cpython\lib\test\support\script_helper.py", line 160, in assert_python_ok return _assert_python(True, *args, **env_vars) 0:01:59 load avg: 5.63 [101/428/1] test_ucn passed -- running: test_regrtest (40.5 sec) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ fetching http://www.pythontest.net/unicode/13.0.0/NamedSequences.txt ... File "D:\a\cpython\cpython\lib\test\support\script_helper.py", line 145, in _assert_python res.fail(cmd_line) ^^^^^^^^^^^^^^^^^^ File "D:\a\cpython\cpython\lib\test\support\script_helper.py", line 72, in fail raise AssertionError("Process return code is %d\n" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Process return code is 3221225477 command line: ['D:\\a\\cpython\\cpython\\PCbuild\\amd64\\python.exe', '-X', 'faulthandler', '-I', '-c', 'if 1:\n from concurrent.futures import ThreadPoolExecutor\n from time import sleep\n from test.test_concurrent_futures import sleep_and_print\n if __name__ == "__main__":\n context = \'\'\n if context == "":\n t = ThreadPoolExecutor(5)\n else:\n from multiprocessing import get_context\n context = get_context(context)\n t = ThreadPoolExecutor(5, mp_context=context)\n t.submit(sleep_and_print, 1.0, "apple")\n '] stdout: --- apple --- stderr: --- --- ---------------------------------------------------------------------- Ran 226 tests in 109.440s FAILED (failures=2, skipped=111) test test_concurrent_futures failed (...) 
0:21:19 load avg: 0.02 Re-running test_concurrent_futures in verbose mode (matching: test_cancel_futures_wait_false, test_interpreter_shutdown) test_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... skipped 'require unix system' test_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... skipped 'require unix system' test_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 1.33s ok test_cancel_futures_wait_false (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.27s ok test_interpreter_shutdown (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 1.14s ok ---------------------------------------------------------------------- Ran 5 tests in 2.756s OK (skipped=2) ---------- components: Tests messages: 399768 nosy: lukasz.langa, pablogsal, vstinner priority: normal severity: normal status: open title: test_concurrent_futures: test_cancel_futures_wait_false() and test_interpreter_shutdown() failed on GHA Windows x64 versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 13:00:13 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 17 Aug 2021 17:00:13 +0000 Subject: [New-bugs-announce] [issue44937] test_regrest: test_tools_script_run_tests() failed on GHA Windows x64 Message-ID: <1629219613.26.0.318881059728.issue44937@roundup.psfhosted.org> New submission from STINNER Victor : GitHub Action Windows x64 CI: https://github.com/python/cpython/runs/3342514542 First, test_regrtest was killed after 20 minutes, while it was running test_tools_script_run_tests(). Then, re-running test_regrtest in verbose mode failed with a PermissionError. 2021-08-16T17:33:22.5453810Z 0:21:19 load avg: 0.02 [428/428/3] test_regrtest crashed (Exit code 1) 2021-08-16T17:33:22.5454836Z Timeout (0:20:00)! 
2021-08-16T17:33:22.5456120Z Thread 0x00000d2c (most recent call first): 2021-08-16T17:33:22.5457237Z File "D:\a\cpython\cpython\lib\subprocess.py", line 1493 in _readerthread 2021-08-16T17:33:22.5459078Z File "D:\a\cpython\cpython\lib\threading.py", line 946 in run 2021-08-16T17:33:22.5460342Z File "D:\a\cpython\cpython\lib\threading.py", line 1009 in _bootstrap_inner 2021-08-16T17:33:22.5461365Z File "D:\a\cpython\cpython\lib\threading.py", line 966 in _bootstrap 2021-08-16T17:33:22.5461983Z 2021-08-16T17:33:22.5462583Z Thread 0x000002e8 (most recent call first): 2021-08-16T17:33:22.5464628Z File "D:\a\cpython\cpython\lib\threading.py", line 1099 in _wait_for_tstate_lock 2021-08-16T17:33:22.5466364Z File "D:\a\cpython\cpython\lib\threading.py", line 1083 in join 2021-08-16T17:33:22.5468053Z File "D:\a\cpython\cpython\lib\subprocess.py", line 1522 in _communicate 2021-08-16T17:33:22.5469685Z File "D:\a\cpython\cpython\lib\subprocess.py", line 1148 in communicate 2021-08-16T17:33:22.5471344Z File "D:\a\cpython\cpython\lib\subprocess.py", line 503 in run 2021-08-16T17:33:22.5473016Z File "D:\a\cpython\cpython\lib\test\test_regrtest.py", line 521 in run_command 2021-08-16T17:33:22.5474852Z File "D:\a\cpython\cpython\lib\test\test_regrtest.py", line 546 in run_python 2021-08-16T17:33:22.5476717Z File "D:\a\cpython\cpython\lib\test\test_regrtest.py", line 600 in run_tests 2021-08-16T17:33:22.5478416Z File "D:\a\cpython\cpython\lib\test\test_regrtest.py", line 647 in test_tools_script_run_tests 2021-08-16T17:33:22.5480110Z File "D:\a\cpython\cpython\lib\unittest\case.py", line 549 in _callTestMethod 2021-08-16T17:33:22.5481748Z File "D:\a\cpython\cpython\lib\unittest\case.py", line 592 in run 2021-08-16T17:33:22.5483362Z File "D:\a\cpython\cpython\lib\unittest\case.py", line 652 in __call__ 2021-08-16T17:33:22.5484975Z File "D:\a\cpython\cpython\lib\unittest\suite.py", line 122 in run 2021-08-16T17:33:22.5486506Z File "D:\a\cpython\cpython\lib\unittest\suite.py", line 84 in __call__ 2021-08-16T17:33:22.5488012Z File "D:\a\cpython\cpython\lib\unittest\suite.py", line 122 in run 2021-08-16T17:33:22.5489316Z File "D:\a\cpython\cpython\lib\unittest\suite.py", line 84 in __call__ 2021-08-16T17:33:22.5490786Z File "D:\a\cpython\cpython\lib\unittest\suite.py", line 122 in run 2021-08-16T17:33:22.5492556Z File "D:\a\cpython\cpython\lib\unittest\suite.py", line 84 in __call__ 2021-08-16T17:33:22.5494071Z File "D:\a\cpython\cpython\lib\unittest\runner.py", line 176 in run 2021-08-16T17:33:22.5495666Z File "D:\a\cpython\cpython\lib\test\support\__init__.py", line 997 in _run_suite 2021-08-16T17:33:22.5496563Z test_curses test_dbm_gnu test_dbm_ndbm test_devpoll test_epoll 2021-08-16T17:33:22.5497530Z File "D:\a\cpython\cpython\lib\test\support\__init__.py", line 1122 in run_unittest 2021-08-16T17:33:22.5498484Z test_fcntl test_fork1 test_gdb test_grp test_ioctl test_kqueue 2021-08-16T17:33:22.5499441Z File "D:\a\cpython\cpython\lib\test\libregrtest\runtest.py", line 261 in _test_module 2021-08-16T17:33:22.5500545Z test_multiprocessing_fork test_multiprocessing_forkserver test_nis 2021-08-16T17:33:22.5501639Z File "D:\a\cpython\cpython\lib\test\libregrtest\runtest.py", line 297 in _runtest_inner2 2021-08-16T17:33:22.5502797Z test_openpty test_ossaudiodev test_pipes test_poll test_posix 2021-08-16T17:33:22.5503867Z File "D:\a\cpython\cpython\lib\test\libregrtest\runtest.py", line 335 in _runtest_inner 2021-08-16T17:33:22.5504886Z test_pty test_pwd test_readline test_resource test_spwd 
2021-08-16T17:33:22.5505946Z File "D:\a\cpython\cpython\lib\test\libregrtest\runtest.py", line 202 in _runtest 2021-08-16T17:33:22.5506925Z test_syslog test_threadsignals test_wait3 test_wait4 2021-08-16T17:33:22.5507892Z File "D:\a\cpython\cpython\lib\test\libregrtest\runtest.py", line 245 in runtest 2021-08-16T17:33:22.5508801Z test_xxtestfuzz test_zipfile64 2021-08-16T17:33:22.5509740Z File "D:\a\cpython\cpython\lib\test\libregrtest\runtest_mp.py", line 83 in run_tests_worker 2021-08-16T17:33:22.5511140Z File "D:\a\cpython\cpython\lib\test\libregrtest\main.py", line 675 in _main 2021-08-16T17:33:22.5512662Z File "D:\a\cpython\cpython\lib\test\libregrtest\main.py", line 655 in main 2021-08-16T17:33:22.5513586Z test_asyncio test_concurrent_futures test_regrtest 2021-08-16T17:33:22.5514516Z File "D:\a\cpython\cpython\lib\test\libregrtest\main.py", line 733 in main 2021-08-16T17:33:22.5515982Z File "D:\a\cpython\cpython\lib\test\regrtest.py", line 43 in _main 2021-08-16T17:33:22.5517908Z File "D:\a\cpython\cpython\lib\test\regrtest.py", line 47 in 2021-08-16T17:33:22.5520342Z File "D:\a\cpython\cpython\lib\runpy.py", line 86 in _run_code 2021-08-16T17:33:22.5521322Z File "D:\a\cpython\cpython\lib\runpy.py", line 196 in _run_module_as_main (...) 2021-08-16T17:33:25.6698180Z 0:21:22 load avg: 0.01 Re-running test_regrtest in verbose mode 2021-08-16T17:33:25.7418249Z Traceback (most recent call last): 2021-08-16T17:33:25.7419935Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 396, in temp_dir 2021-08-16T17:33:25.7420863Z yield path 2021-08-16T17:33:25.7421674Z ^^^^^^^^^^ 2021-08-16T17:33:25.7422399Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 449, in temp_cwd 2021-08-16T17:33:25.7433248Z yield cwd_dir 2021-08-16T17:33:25.7434246Z ^^^^^^^^^^^^^ 2021-08-16T17:33:25.7435032Z File "D:\a\cpython\cpython\lib\test\libregrtest\main.py", line 655, in main 2021-08-16T17:33:25.7435868Z self._main(tests, kwargs) 2021-08-16T17:33:25.7436399Z ^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7437160Z File "D:\a\cpython\cpython\lib\test\libregrtest\main.py", line 728, in _main 2021-08-16T17:33:25.7438083Z sys.exit(0) 2021-08-16T17:33:25.7438591Z ^^^^^^^^^^^ 2021-08-16T17:33:25.7439133Z SystemExit: 0 2021-08-16T17:33:25.7439483Z 2021-08-16T17:33:25.7440281Z During handling of the above exception, another exception occurred: 2021-08-16T17:33:25.7440906Z 2021-08-16T17:33:25.7441466Z Traceback (most recent call last): 2021-08-16T17:33:25.7442312Z File "D:\a\cpython\cpython\lib\test\support\__init__.py", line 191, in _force_run 2021-08-16T17:33:25.7443113Z return func(*args) 2021-08-16T17:33:25.7443672Z ^^^^^^^^^^^ 2021-08-16T17:33:25.7446594Z PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'D:\\a\\cpython\\cpython\\build\\test_python_1976?\\test_python_worker_3504?' 
2021-08-16T17:33:25.7447646Z 2021-08-16T17:33:25.7448449Z During handling of the above exception, another exception occurred: 2021-08-16T17:33:25.7449020Z 2021-08-16T17:33:25.7449617Z Traceback (most recent call last): 2021-08-16T17:33:25.7450443Z File "D:\a\cpython\cpython\lib\runpy.py", line 196, in _run_module_as_main 2021-08-16T17:33:25.7451253Z return _run_code(code, main_globals, None, 2021-08-16T17:33:25.7451964Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7452625Z File "D:\a\cpython\cpython\lib\runpy.py", line 86, in _run_code 2021-08-16T17:33:25.7453378Z exec(code, run_globals) 2021-08-16T17:33:25.7455186Z ^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7455860Z File "D:\a\cpython\cpython\lib\test\__main__.py", line 2, in 2021-08-16T17:33:25.7457168Z main() 2021-08-16T17:33:25.7457604Z ^^^^^^ 2021-08-16T17:33:25.7458336Z File "D:\a\cpython\cpython\lib\test\libregrtest\main.py", line 733, in main 2021-08-16T17:33:25.7459194Z Regrtest().main(tests=tests, **kwargs) 2021-08-16T17:33:25.7459769Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7460528Z File "D:\a\cpython\cpython\lib\test\libregrtest\main.py", line 649, in main 2021-08-16T17:33:25.7461374Z with os_helper.temp_cwd(test_cwd, quiet=True): 2021-08-16T17:33:25.7462043Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7462787Z File "D:\a\cpython\cpython\lib\contextlib.py", line 153, in __exit__ 2021-08-16T17:33:25.7463678Z self.gen.throw(typ, value, traceback) 2021-08-16T17:33:25.7464729Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7465461Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 447, in temp_cwd 2021-08-16T17:33:25.7466377Z with temp_dir(path=name, quiet=quiet) as temp_path: 2021-08-16T17:33:25.7467243Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7467944Z File "D:\a\cpython\cpython\lib\contextlib.py", line 153, in __exit__ 2021-08-16T17:33:25.7468797Z self.gen.throw(typ, value, traceback) 2021-08-16T17:33:25.7469408Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7470182Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 401, in temp_dir 2021-08-16T17:33:25.7471114Z rmtree(path) 2021-08-16T17:33:25.7471591Z ^^^^^^^^^^^^ 2021-08-16T17:33:25.7472342Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 358, in rmtree 2021-08-16T17:33:25.7473077Z _rmtree(path) 2021-08-16T17:33:25.7473676Z ^^^^^^^^^^^^^ 2021-08-16T17:33:25.7474441Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 301, in _rmtree 2021-08-16T17:33:25.7475690Z _waitfor(_rmtree_inner, path, waitall=True) 2021-08-16T17:33:25.7476344Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7477086Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 246, in _waitfor 2021-08-16T17:33:25.7477875Z func(pathname) 2021-08-16T17:33:25.7478398Z ^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7479130Z File "D:\a\cpython\cpython\lib\test\support\os_helper.py", line 298, in _rmtree_inner 2021-08-16T17:33:25.7480240Z _force_run(fullname, os.rmdir, fullname) 2021-08-16T17:33:25.7480851Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2021-08-16T17:33:25.7481603Z File "D:\a\cpython\cpython\lib\test\support\__init__.py", line 197, in _force_run 2021-08-16T17:33:25.7482298Z 2021-08-16T17:33:25.7482896Z return func(*args) 2021-08-16T17:33:25.7483533Z ---------------------------------------------------------------------- 2021-08-16T17:33:25.7484530Z ^^^^^^^^^^^ 2021-08-16T17:33:25.7484753Z 2021-08-16T17:33:25.7486229Z PermissionError: 
[WinError 32] The process cannot access the file because it is being used by another process: 'D:\\a\\cpython\\cpython\\build\\test_python_1976?\\test_python_worker_3504?' 2021-08-16T17:33:25.7487341Z Ran 0 tests in 0.000s ---------- components: Tests messages: 399769 nosy: vstinner priority: normal severity: normal status: open title: test_regrest: test_tools_script_run_tests() failed on GHA Windows x64 versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 13:44:23 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 17 Aug 2021 17:44:23 +0000 Subject: [New-bugs-announce] [issue44938] Expose PyErr_ChainExceptions in the stable API Message-ID: <1629222263.76.0.471437987741.issue44938@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Currently, exception chaining in the C-API must be done by hand. This is a painstaking process that involves several steps and is very easy to do incorrectly. We currently have a private function: _PyErr_ChainExceptions that does this job. Given that exception chaining is a very fundamental operation, this functionality must be exposed in the stable C-API ---------- messages: 399777 nosy: pablogsal priority: normal severity: normal status: open title: Expose PyErr_ChainExceptions in the stable API _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 16:11:12 2021 From: report at bugs.python.org (Leon Mintz) Date: Tue, 17 Aug 2021 20:11:12 +0000 Subject: [New-bugs-announce] [issue44939] proposal: add support for regex in Literal type hint Message-ID: <1629231072.16.0.414725051443.issue44939@roundup.psfhosted.org> New submission from Leon Mintz : Could typing.Literal (or analogous) accept a regex pattern to match against? For example, if I want a duration string, duration: str # allowed syntax: 3s, 3m, 3h etc. vs duration: LiteralPattern['[0-9]+[smh]'] ---------- messages: 399787 nosy: leon.mintz priority: normal severity: normal status: open title: proposal: add support for regex in Literal type hint _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 18:24:06 2021 From: report at bugs.python.org (Rondevous) Date: Tue, 17 Aug 2021 22:24:06 +0000 Subject: [New-bugs-announce] [issue44940] Hint the use of non-capturing group in re.findall() documentation Message-ID: <1629239046.75.0.423461536015.issue44940@roundup.psfhosted.org> New submission from Rondevous : Can it please be hinted in the docs of re.findall to use (?:...) for non-capturing groups? >>> re.findall('(foo)?bar|cool', 'cool') [''] >>> ### I expected the result: ['cool'] After hours of frustration, I learnt that I should use a non-capturing group (?:foo) in the pattern. This was not obvious. P.S. Making the groups non-capturing in such a pattern is not needed in javascript (as tested on regexr.com); could this be an issue with the | operator in re.findall? 
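A minimal sketch of the behaviour described in this report, using only the standard re module; the second pattern is the non-capturing variant that the requested documentation hint would point to:

```
import re

# With a capturing group, findall() returns what the group matched;
# the group did not participate in the 'cool' match, so the result is [''].
print(re.findall(r'(foo)?bar|cool', 'cool'))    # ['']

# With a non-capturing group, findall() returns the full match instead.
print(re.findall(r'(?:foo)?bar|cool', 'cool'))  # ['cool']
```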
---------- assignee: docs at python components: Documentation messages: 399799 nosy: docs at python, rondevous priority: normal severity: normal status: open title: Hint the use of non-capturing group in re.findall() documentation type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 22:06:42 2021 From: report at bugs.python.org (Finn Mason) Date: Wed, 18 Aug 2021 02:06:42 +0000 Subject: [New-bugs-announce] [issue44941] Add check_methods function to standard library Message-ID: <1629252402.72.0.293302326241.issue44941@roundup.psfhosted.org> New submission from Finn Mason : In _collections_abc.py there is a private function titled `_check_methods`. It takes a class and a number of method names (as strings), checks if the class has all of the methods, and returns NotImplemented if any are missing. The code is below:
```
def _check_methods(C, *methods):
    mro = C.__mro__
    for method in methods:
        for B in mro:
            if method in B.__dict__:
                if B.__dict__[method] is None:
                    return NotImplemented
                break
        else:
            return NotImplemented
    return True
```
This is an incredibly convenient function (referred to as check_methods from here on out) for creating abstract base classes, and is much simpler than using `hasattr` for each method you want to check. For example:
```
>>> from abc import ABCMeta
>>> # Without check_methods
>>> class A(metaclass=ABCMeta):
...     @classmethod
...     def __subclasshook__(cls, subclass):
...         return (hasattr(subclass, 'foo') and
...                 callable(subclass.foo) and
...                 hasattr(subclass, 'bar') and
...                 callable(subclass.bar) or
...                 NotImplemented)
...
>>> # With check_methods
>>> class B(metaclass=ABCMeta):
...     @classmethod
...     def __subclasshook__(cls, subclass):
...         return check_methods(subclass, 'foo', 'bar')
...
>>>
```
This would be a great function to add to the standard lib, perhaps in the `abc` module. One problem with `check_methods` as defined in _collections_abc.py is that it doesn't check if the name is callable. Also, type hints and more readable variables may be desirable. The final code, if implemented, may look something like this:
```
# In imports section:
from typing import Literal

def check_methods(Class: type, *methods: str) -> Literal[True, NotImplemented]:
    """Check if class `Class` has methods `methods`."""
    mro = Class.__mro__
    for method in methods:
        for Base in mro:
            if (attr := getattr(Base, method, None)) is not None:
                if not callable(attr):
                    return NotImplemented
                break
        else:
            return NotImplemented
    return True
```
Again, this would be a great function to add to the `abc` module or a similar one. ---------- components: Library (Lib) messages: 399814 nosy: finnjavier08 priority: normal severity: normal status: open title: Add check_methods function to standard library type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 23:08:11 2021 From: report at bugs.python.org (Ryan Mast (nightlark)) Date: Wed, 18 Aug 2021 03:08:11 +0000 Subject: [New-bugs-announce] [issue44942] Add number pad enter bind to TK's simpleDialog Message-ID: <1629256091.95.0.739265447957.issue44942@roundup.psfhosted.org> New submission from Ryan Mast (nightlark) : Tk treats the number pad enter and main enter keys separately. The number pad enter button should be bound to `self.ok` in simpleDialog's `Dialog` class so that both enter buttons have the same behavior.
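A rough sketch of the requested behaviour outside of simpleDialog, assuming the usual Tk keysyms; the actual change in the submitted PR may bind the keys differently:

```
import tkinter as tk

root = tk.Tk()

def ok(event=None):
    print("dialog accepted")

# <Return> is the main Enter key; <KP_Enter> is the number pad Enter key.
# Binding both to the same handler gives the behaviour requested above.
root.bind("<Return>", ok)
root.bind("<KP_Enter>", ok)
root.mainloop()
```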
A PR for this change has been submitted on GitHub by Electro707. ---------- components: Tkinter messages: 399816 nosy: rmast priority: normal pull_requests: 26272 severity: normal status: open title: Add number pad enter bind to TK's simpleDialog type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 17 23:58:11 2021 From: report at bugs.python.org (Elsie Hupp) Date: Wed, 18 Aug 2021 03:58:11 +0000 Subject: [New-bugs-announce] [issue44943] Integrate PyHyphen into the textwrap module? Message-ID: <1629259091.17.0.693384406062.issue44943@roundup.psfhosted.org> New submission from Elsie Hupp : PyHyphen is a mature library that wraps the existing CPython `textwrap` module and provides the ability to break and hyphenate words in wrapped text. PyHyphen is on PyPI here: https://pypi.org/project/PyHyphen/ And on GitHub here: https://github.com/dr-leo/PyHyphen While the PyPI page and the README file say that PyHyphen uses an Apache 2.0 License, the GitHub repository says that it uses a GPL 2.0/LGPL 2.1/MPL 1.1 tri-license: https://github.com/dr-leo/PyHyphen/blob/master/LICENSE.txt To what extent would it be feasible to integrate PyHyphen's enhancements into the core `textwrap` module? It is my understanding that the `textwrap` itself began life as a third-party module, which would suggest that such integrations are somewhat precedented. I'm not experienced enough to know how to do a pull request myself, and I don't understand the legal details well enough to know if PyHyphen is license-compatible with CPython. ---------- messages: 399819 nosy: elsiehupp priority: normal severity: normal status: open title: Integrate PyHyphen into the textwrap module? type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 00:12:48 2021 From: report at bugs.python.org (Yatharth Mathur) Date: Wed, 18 Aug 2021 04:12:48 +0000 Subject: [New-bugs-announce] [issue44944] Addition of _heappush_max method to complete the max heap implementation in Python's heapq module Message-ID: <1629259968.75.0.146566969003.issue44944@roundup.psfhosted.org> Change by Yatharth Mathur : ---------- nosy: yatharthmathur priority: normal pull_requests: 26273 severity: normal status: open title: Addition of _heappush_max method to complete the max heap implementation in Python's heapq module type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 06:11:48 2021 From: report at bugs.python.org (Mark Shannon) Date: Wed, 18 Aug 2021 10:11:48 +0000 Subject: [New-bugs-announce] [issue44945] Specialize BINARY_ADD using PEP 659 machinery. Message-ID: <1629281508.42.0.164315730868.issue44945@roundup.psfhosted.org> New submission from Mark Shannon : Specializing BINARY_ADD is worthwhile for two reasons: Specializing for ints, floats and strings may give us some small speedup. It removes the complex checks for the special case of extending a string, `s = s + ...` from the normal instruction to a specialized form. ---------- messages: 399830 nosy: Mark.Shannon priority: normal severity: normal status: open title: Specialize BINARY_ADD using PEP 659 machinery. 
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 06:25:32 2021 From: report at bugs.python.org (Mark Shannon) Date: Wed, 18 Aug 2021 10:25:32 +0000 Subject: [New-bugs-announce] [issue44946] Integer operations are inefficient for "medium" integers. Message-ID: <1629282332.14.0.448234766884.issue44946@roundup.psfhosted.org> New submission from Mark Shannon : "Medium" integers are those with a single internal digit or zero. Medium integers are integers in the range -2**30 to +2**30 on 64 bit machines. "Small" integers, -5 to 256 are cached, but are represented as medium integers internally. To a good approximation, all integers are "medium". However, we make little effort to exploit that fact in the code for binary operations, which are very common operations on integers. ---------- components: Interpreter Core messages: 399832 nosy: Mark.Shannon priority: normal severity: normal status: open title: Integer operations are inefficient for "medium" integers. type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 07:48:58 2021 From: report at bugs.python.org (Andre Roberge) Date: Wed, 18 Aug 2021 11:48:58 +0000 Subject: [New-bugs-announce] [issue44947] SyntaxError: trailing comma not allowed ... misleading Message-ID: <1629287338.23.0.729068225715.issue44947@roundup.psfhosted.org> New submission from Andre Roberge : Consider the following four slightly different examples: Python 3.10.0rc1 ... >>> from math import sin and cos File "", line 1 from math import sin and cos ^^^ SyntaxError: invalid syntax >>> from math import sin, cos, and tan File "", line 1 from math import sin, cos, and tan ^^^ SyntaxError: trailing comma not allowed without surrounding parentheses >>> from math import (sin, cos,) and tan File "", line 1 from math import (sin, cos,) and tan ^^^ SyntaxError: invalid syntax >>> from math import sin, cos and tan File "", line 1 from math import sin, cos and tan ^^^ SyntaxError: invalid syntax ==== In all four cases, the keyword 'and' is correctly identified as causing the error. In the second case, the message given may suggest that adding parentheses is all that is needed to correct the problem; however, that is "obviously" not the case as shown in the third case. **Perhaps** when a _keyword_ like 'and' is identified as a problem, a generally better message would be something like SyntaxError: the keyword 'and' is not allowed here leaving out all guesses like 'surrounding by parentheses', "meaning == instead of =", 'perhaps forgot a comma', etc., which are sometimes added by Python 3.10+ ? I am fully and painfully aware that attempting to provide helpful and accurate error message is challenging... ---------- components: Parser messages: 399837 nosy: aroberge, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: SyntaxError: trailing comma not allowed ... 
misleading versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 10:16:49 2021 From: report at bugs.python.org (Thomas Trummer) Date: Wed, 18 Aug 2021 14:16:49 +0000 Subject: [New-bugs-announce] [issue44948] DeprecationWarning: Using ioctl() method Message-ID: <1629296209.14.0.265691405038.issue44948@roundup.psfhosted.org> New submission from Thomas Trummer : DeprecationWarning: Using ioctl() method on sockets returned from get_extra_info('socket') will be prohibited in asyncio 3.9. Please report your use case to bugs.python.org. Use case: def connection_made(self, transport: asyncio.BaseTransport) -> None: sock = transport.get_extra_info('socket') # type: socket.socket sock.ioctl(SIO_UDP_CONNRESET, False) Releated: https://bugs.python.org/issue44743 ---------- components: Windows, asyncio messages: 399845 nosy: Thomas Trummer, asvetlov, paul.moore, steve.dower, tim.golden, yselivanov, zach.ware priority: normal severity: normal status: open title: DeprecationWarning: Using ioctl() method versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 11:09:16 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 18 Aug 2021 15:09:16 +0000 Subject: [New-bugs-announce] [issue44949] test_readline: test_auto_history_disabled() fails on aarch64 RHEL8 Refleaks 3.9, 3.10 and 3.x Message-ID: <1629299356.46.0.606067543476.issue44949@roundup.psfhosted.org> New submission from STINNER Victor : test_readline fails randomly on aarc64 RHEL8 buildbots (3.9, 3.10 and 3.x). In some builds, test_readline fails but then pass when re-run in verbose mode. Example: https://buildbot.python.org/all/#/builders/41/builds/148 --- 0:02:56 load avg: 2.79 Re-running test_readline in verbose mode (matching: test_auto_history_disabled) test_auto_history_disabled (test.test_readline.TestReadline) ... ok --- aarch64 RHEL8 Refleaks 3.9: https://buildbot.python.org/all/#/builders/247/builds/107 test.pythoninfo: readline._READLINE_LIBRARY_VERSION: 7.0 readline._READLINE_RUNTIME_VERSION: 0x700 readline._READLINE_VERSION: 0x700 Tests: 0:33:57 load avg: 0.93 Re-running test_readline in verbose mode (matching: test_auto_history_disabled) beginning 6 repetitions 123456 readline version: 0x700 readline runtime version: 0x700 readline library version: '7.0' use libedit emulation? False test test_readline failed test_auto_history_disabled (test.test_readline.TestReadline) ... 
FAIL ====================================================================== FAIL: test_auto_history_disabled (test.test_readline.TestReadline) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-aarch64.refleak/build/Lib/test/test_readline.py", line 154, in test_auto_history_disabled self.assertIn(b"History length: 0\r\n", output) AssertionError: b'History length: 0\r\n' not found in bytearray(b'dummy input\r\ndummy input\r\nHistory length: 0') ---------------------------------------------------------------------- ---------- components: Tests messages: 399848 nosy: erlendaasland, lukasz.langa, pablogsal, vstinner priority: normal severity: normal status: open title: test_readline: test_auto_history_disabled() fails on aarch64 RHEL8 Refleaks 3.9, 3.10 and 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 19:36:56 2021 From: report at bugs.python.org (Hamish) Date: Wed, 18 Aug 2021 23:36:56 +0000 Subject: [New-bugs-announce] [issue44950] Math Message-ID: <1629329816.72.0.712259708349.issue44950@roundup.psfhosted.org> New submission from Hamish : Error shown in image ---------- components: Interpreter Core files: unknowasdasdasdn.png messages: 399874 nosy: hamish555 priority: normal severity: normal status: open title: Math type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50226/unknowasdasdasdn.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 22:49:09 2021 From: report at bugs.python.org (David Gilman) Date: Thu, 19 Aug 2021 02:49:09 +0000 Subject: [New-bugs-announce] [issue44951] selector.EpollSelector: EPOLLEXCLUSIVE, round 2 Message-ID: <1629341349.12.0.0455922443851.issue44951@roundup.psfhosted.org> New submission from David Gilman : Note that this is a different approach from the one taken in https://bugs.python.org/issue35517 although the issue is still the same. I've written a patch that allows users of selector.EpollSelector to enable EPOLLEXCLUSIVE on their file descriptors. This PR adds a setter and read only property to only the EpollSelector class instead of trying to expand the entire selector API like the other patch. The other discussion mentioned that there are some useful flags that could be passed down like this one. If other useful behavioral flags emerged in the future I think they should get their own API similar to how I've done it here. However, the other flags available so far for epoll are not useful for the selector module: EPOLLONESHOT and EPOLLET are incompatible with the design of the selector API and EPOLLWAKEUP is only marginally useful, not even getting exported into the select module after nearly a decade (Linux 3.5 was released in 2012). My API uses a getter/method instead of a read/write property because my understanding is that property access shouldn't raise exceptions, but if that doesn't matter here, it could be a read/write property. Justification: First, this is a useful flag that improves performance of epoll under even moderate load. I was going to turn it on by default in this patch but unfortunately Linux prevents you from doing epoll_mod() on anything that has EPOLLEXCLUSIVE set on it, breaking the python-level API. 
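For illustration only, this is what the flag looks like at the select.epoll level (Linux only); the EpollSelector setter itself is the proposal in this report, not an existing selectors API:

```
import select
import socket

srv = socket.create_server(("127.0.0.1", 0))
ep = select.epoll()
# EPOLLEXCLUSIVE wakes only one of the epoll instances waiting on this fd,
# which is what avoids the thundering herd for shared listening sockets.
ep.register(srv.fileno(), select.EPOLLIN | select.EPOLLEXCLUSIVE)
# A later ep.modify(srv.fileno(), select.EPOLLIN) is what the kernel rejects with EINVAL.
```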
With this patch if you try to modify() after EPOLLEXCLUSIVE is set you'll get an EINVAL but I think the trade-off here is worth it. You don't enable EPOLLEXCLUSIVE on accident and you're reading the manpage for EPOLLEXCLUSIVE where this exact behavior is mentioned before turning anything on, right? And of course the Python docs also warn you about modify(). Second, the thundering herd problem solved by EPOLLEXCLUSIVE is somewhat of a sore spot for Python's PR. In the past year two widely disseminated articles have brought up this issue. This PR isn't going to be a silver bullet however it can make a huge impact in gunicorn, the 3rd party library mentioned in both articles. Gunicorn is a popular WSGI web server and its gthread worker (not the default but the one most often used in production) directly uses the selector module from the standard library. Honestly, it's pretty cool that they were able to make such efficient use of a standard library module like this - how far we've come from the days of asynchat! There is nothing in gunicorn's threaded worker that calls modify() so there would be no API breakage there. Gunicorn thundering herd articles: https://blog.clubhouse.com/reining-in-the-thundering-herd-with-django-and-gunicorn/ https://rachelbythebay.com/w/2020/03/07/costly/ ---------- components: Library (Lib) messages: 399880 nosy: David.Gilman priority: normal severity: normal status: open title: selector.EpollSelector: EPOLLEXCLUSIVE, round 2 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 18 23:56:29 2021 From: report at bugs.python.org (py_ok) Date: Thu, 19 Aug 2021 03:56:29 +0000 Subject: [New-bugs-announce] [issue44952] list need to filter again because the continue empty str value? Message-ID: <1629345389.69.0.268716831338.issue44952@roundup.psfhosted.org> New submission from py_ok <1979239641 at qq.com>: I`m poor in english.please run my code,Thanks. def rv(list): for i in list: #print(type(i)) #print(i.__len__()) if (i.isspace() or i=='' or len(i)==0): list.remove(i) return list list=['k', '', '', '', 'v', '', 'e', '', '', '', '73', '', 'p', '76', ''] print(rv(list))#['k', 'v', 'e', '73', '', 'p', '76', ''] The result still have empty str,I need to filter again.The reason is more empty val is linked.is a bug?or some reason? ---------- components: Build messages: 399882 nosy: py_ok priority: normal severity: normal status: open title: list need to filter again because the continue empty str value? type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 19 05:23:49 2021 From: report at bugs.python.org (Dennis Sweeney) Date: Thu, 19 Aug 2021 09:23:49 +0000 Subject: [New-bugs-announce] [issue44953] Add vectorcall on operator.itemgetter and attrgetter objects Message-ID: <1629365029.54.0.100452744635.issue44953@roundup.psfhosted.org> New submission from Dennis Sweeney : ## Below are my benchmarks for this change. 
from operator import itemgetter, attrgetter from pyperf import Runner class MyClass: __slots__ = "a", "b" namespace = {'itemgetter': itemgetter, 'attrgetter': attrgetter, 'MyClass': MyClass, } runner = Runner() runner.timeit( name="itemgetter", setup="f = itemgetter(1); x = (1, 2, 3)", stmt="f(x)", globals=namespace ) runner.timeit( name="attrgetter", setup="f = attrgetter('b'); x = MyClass(); x.a = x.b = 1", stmt="f(x)", globals=namespace ) ##### Results ##### # itemgetter: Mean +- std dev: [operator_main] 45.3 ns +- 1.3 ns -> [operator_vec] 29.5 ns +- 0.7 ns: 1.54x faster # attrgetter: Mean +- std dev: [operator_main] 61.6 ns +- 1.7 ns -> [operator_vec] 43.8 ns +- 0.9 ns: 1.41x faster ---------- components: Library (Lib) messages: 399900 nosy: Dennis Sweeney priority: normal severity: normal status: open title: Add vectorcall on operator.itemgetter and attrgetter objects type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 19 07:34:02 2021 From: report at bugs.python.org (Pedro Gimeno) Date: Thu, 19 Aug 2021 11:34:02 +0000 Subject: [New-bugs-announce] [issue44954] Bug in float.fromhex Message-ID: <1629372842.45.0.638068158675.issue44954@roundup.psfhosted.org> New submission from Pedro Gimeno : >>> float.fromhex('0x0.8p-1074') 0.0 >>> float.fromhex('0x.8p-1074') 5e-324 One of them is obviously wrong. It's the second one, because: - The smallest denormal is 0x1p-1074 - Therefore, 0x0.8p-1074 is a tie for rounding purposes. - The digit in the last place is even because the number is zero, and there is a tie, which implies rounding down. ---------- components: Library (Lib) messages: 399909 nosy: pgimeno priority: normal severity: normal status: open title: Bug in float.fromhex versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 19 07:58:46 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 19 Aug 2021 11:58:46 +0000 Subject: [New-bugs-announce] [issue44955] Method stopTestRun() is not always called for skipped tests Message-ID: <1629374326.22.0.989975388504.issue44955@roundup.psfhosted.org> New submission from Serhiy Storchaka : Method startTestRun() is always called for the TestResult object implicitly created by TestCase.defaultTestResult() when no TestResult object is passed to TestCase.run(). But method stopTestRun() is not always called in pair with startTestRun() for skipped tests. It is only called if SkipTest was raised directly or indirectly (via skipTest()). It is not called if a skipping decorator (@skip, @skipIf, @skipUnless) was used for a method or a class. ---------- components: Library (Lib) messages: 399911 nosy: ezio.melotti, michael.foord, rbcollins, serhiy.storchaka priority: normal severity: normal status: open title: Method stopTestRun() is not always called for skipped tests type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 19 09:30:19 2021 From: report at bugs.python.org (Robert T McQuaid) Date: Thu, 19 Aug 2021 13:30:19 +0000 Subject: [New-bugs-announce] [issue44956] curses getch returns wrong value Message-ID: <1629379819.39.0.754773231829.issue44956@roundup.psfhosted.org> New submission from Robert T McQuaid : This applies to Python 3.8 under Debian-11 Bullseye. 
Under curses getch should return the value of curses.KEY_B2 (350 decimal) when pressing the keypad 5. Instead it returns 574. The simple program following the signature block illustrates the problem. Robert T McQuaid 558 McMartin Road Mattawa Ontario P0H 1V0 phone: 705-744-6274 email: rtmq at fixcas.com # Put the following lines in a file bug.py # Run from a terminal with: python3 bug.py import curses as cs def report(stdscr): print('press the keypad 5') global result result=stdscr.getch() cs.initscr() cs.wrapper(report) print('KEY_B2 (decimal): '+str(cs.KEY_B2)) print('input decimal value: '+str(result)) ---------- components: Library (Lib) messages: 399917 nosy: arbor priority: normal severity: normal status: open title: curses getch returns wrong value versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 19 09:50:37 2021 From: report at bugs.python.org (Sebastian Rittau) Date: Thu, 19 Aug 2021 13:50:37 +0000 Subject: [New-bugs-announce] [issue44957] typing docs: Mention PEP 604 syntax more prominently Message-ID: <1629381037.95.0.0399293112923.issue44957@roundup.psfhosted.org> New submission from Sebastian Rittau : The new PEP 604 syntax for type unions should be mentioned more prominently in the typing docs, starting with Python 3.10. I'm preparing a PR for discussion. ---------- assignee: docs at python components: Documentation messages: 399919 nosy: docs at python, srittau priority: normal severity: normal status: open title: typing docs: Mention PEP 604 syntax more prominently versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 19 16:05:09 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Thu, 19 Aug 2021 20:05:09 +0000 Subject: [New-bugs-announce] [issue44958] [sqlite3] only reset statements when needed Message-ID: <1629403509.48.0.897657910933.issue44958@roundup.psfhosted.org> New submission from Erlend E. Aasland : Ref. Serhiy's msg387858 in bpo-43350: "Maybe the code could be rewritten in more explicit way and call pysqlite_statement_reset() only when it is necessary [...]" Currently, we try to reset statements in all "statement exit" routes. IMO, it would be cleaner to just reset statements when we really need to: 1. before the first sqlite3_step() 2. at cursor exit, if there's an active statement (3. in pysqlite_do_all_statements() ... see bpo-44092) This will make the code easier to follow, and it will minimise the number of resets. 
The current patch is pretty small: 7 insertions(+), 33 deletions(-) Pro: - less lines of code, less maintenance - cleaner exit paths - optimise SQLite API usage Con: - code churn If this is accepted, PR 25984 of bpo-44073 will be easier to land and review :) ---------- components: Extension Modules messages: 399931 nosy: berker.peksag, erlendaasland, pablogsal, serhiy.storchaka priority: normal severity: normal status: open title: [sqlite3] only reset statements when needed _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 05:34:43 2021 From: report at bugs.python.org (=?utf-8?q?Florin_Sp=C4=83tar?=) Date: Fri, 20 Aug 2021 09:34:43 +0000 Subject: [New-bugs-announce] [issue44959] EXT_SUFFIX is missing '.sl' on HP-UX Message-ID: <1629452083.0.0.513630560234.issue44959@roundup.psfhosted.org> New submission from Florin Sp?tar : On HP-UX, python can no longer find extension modules with the '.sl' suffix. [fspatar at hpux1131:/cust/fspatar/buildtest/hp-ux/11.31/build]> ~/tmp/investigation3/old/bin/python3 Python 3.8.11 (default, Aug 3 2021, 06:15:31) [GCC 4.2.4] on hp-ux-pa Type "help", "copyright", "credits" or "license" for more information. >>> import M2Crypto Traceback (most recent call last): File "/cust/fspatar/buildtest/hp-ux/11.31/build/M2Crypto/m2crypto.py", line 16, in swig_import_helper fp, pathname, description = imp.find_module('_m2crypto', [dirname(__file__)]) File "/h/fspatar/tmp/investigation3/old/lib/python3.8/imp.py", line 296, in find_module raise ImportError(_ERR_MSG.format(name), name=name) ImportError: No module named '_m2crypto' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 1, in File "/cust/fspatar/buildtest/hp-ux/11.31/build/M2Crypto/__init__.py", line 37, in from M2Crypto import (ASN1, AuthCookie, BIO, BN, DH, DSA, EVP, Engine, Err, File "/cust/fspatar/buildtest/hp-ux/11.31/build/M2Crypto/ASN1.py", line 15, in from M2Crypto import BIO, m2, py27plus, six, has_typing File "/cust/fspatar/buildtest/hp-ux/11.31/build/M2Crypto/BIO.py", line 9, in from M2Crypto import m2, py27plus, six, has_typing File "/cust/fspatar/buildtest/hp-ux/11.31/build/M2Crypto/m2.py", line 30, in from M2Crypto.m2crypto import * File "/cust/fspatar/buildtest/hp-ux/11.31/build/M2Crypto/m2crypto.py", line 26, in _m2crypto = swig_import_helper() File "/cust/fspatar/buildtest/hp-ux/11.31/build/M2Crypto/m2crypto.py", line 18, in swig_import_helper import _m2crypto ModuleNotFoundError: No module named '_m2crypto' This works fine in python 3.8.5 [fspatar at hpux1131:/cust/fspatar/buildtest/hp-ux/11.31/build]> /opt/OPSWbuildtools/2.0.5/python/3.8.5.04/bin/python3 Python 3.8.5 (default, Jul 28 2021, 08:38:55) [GCC 4.2.4] on hp-ux-pa Type "help", "copyright", "credits" or "license" for more information. >>> import M2Crypto It seems to be related to recent changes from https://bugs.python.org/issue42604 Given the file name is _m2crypto.sl, python 3.8.11 can no longer find it. 
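A quick way to see which extension-module suffixes the running interpreter accepts, without reaching for the private _imp module used below:

```
import importlib.machinery

# On the affected HP-UX build this reportedly lists only '.cpython-38.sl',
# so a plain '_m2crypto.sl' file is never considered by the importer.
print(importlib.machinery.EXTENSION_SUFFIXES)
```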
Based on https://www.python.org/dev/peps/pep-3149/#pep-384, my understanding is that python should search for the following file names when extension module _m2crypto is imported (in this order): _m2crypto.cpython-38.sl _m2crypto.so Python can only load the extension module if the file name is _m2crypto.cpython-38.sl [fspatar at hpux1131:/cust/fspatar/buildtest/hp-ux/11.31/build]> ~/tmp/investigation3/old/bin/python3 Python 3.8.11 (default, Aug 3 2021, 06:15:31) [GCC 4.2.4] on hp-ux-pa Type "help", "copyright", "credits" or "license" for more information. >>> import _imp >>> _imp.extension_suffixes() ['.cpython-38.sl'] >>> [fspatar at hpux1131:/cust/fspatar/buildtest/hp-ux/11.31/build]> /opt/OPSWbuildtools/2.0.5/python/3.8.5.04/bin/python3 Python 3.8.5 (default, Jul 28 2021, 08:38:55) [GCC 4.2.4] on hp-ux-pa Type "help", "copyright", "credits" or "license" for more information. >>> import _imp >>> _imp.extension_suffixes() ['.sl'] >>> ---------- components: Extension Modules files: python-hpux-extension-suffixes.patch keywords: patch messages: 399945 nosy: florinspatar, mattip, miss-islington, pablogsal, vstinner priority: normal severity: normal status: open title: EXT_SUFFIX is missing '.sl' on HP-UX type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file50227/python-hpux-extension-suffixes.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 06:23:50 2021 From: report at bugs.python.org (Steven D'Aprano) Date: Fri, 20 Aug 2021 10:23:50 +0000 Subject: [New-bugs-announce] [issue44960] Add regression test for geometric test Message-ID: <1629455030.72.0.396293974466.issue44960@roundup.psfhosted.org> New submission from Steven D'Aprano : Hi Irit, thanks for looking at #28327. Sorry to be That Guy who can't do it himself, but I'm still stuck with old tech and ignorance about the git way of doing things, which limits my ability to do PRs :-( Would you be willing to add a regression test for that bug to the test_statistics.py file please? If you can add a test case for this in class TestGeometricMean and make a PR that would be great. If you're not willing or able, please just reassign back to me and I'll go old school and make a patch file. Should be fairly straight-forward, just add a method like def test_regression_28327(self): # Regression test for b.p.o. #28327 gmean = statistics.geometric_mean expected = 3.80675409583932 self.assertTrue(math.isclose(gmean([2, 3, 5, 7]), expected)) self.assertTrue(math.isclose(gmean([2.0, 3.0, 5.0, 7.0]), expected)) Thanks. 
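For reference, the expected constant in the proposed test can be cross-checked directly from the definition of the geometric mean, without calling statistics.geometric_mean itself:

```
import math

# Fourth root of the product of the four values; agrees with 3.80675409583932
# to within floating-point rounding.
print(math.prod([2, 3, 5, 7]) ** 0.25)
```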
---------- assignee: iritkatriel messages: 399956 nosy: iritkatriel, steven.daprano priority: normal severity: normal status: open title: Add regression test for geometric test type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 08:07:01 2021 From: report at bugs.python.org (Thomas) Date: Fri, 20 Aug 2021 12:07:01 +0000 Subject: [New-bugs-announce] [issue44961] @classmethod doesn't set __wrapped__ the same way as functool's update_wrapper Message-ID: <1629461221.25.0.480325774253.issue44961@roundup.psfhosted.org> New submission from Thomas : @classmethod defines a __wrapped__ attribute that always points to the inner most function in a decorator chain while functool's update_wrapper has been fixed to set the wrapper.__wrapped__ attribute after updating the wrapper.__dict__ (see https://bugs.python.org/issue17482) so .__wrapped__ points to the next decorator in the chain. This results in inconsistency of the value of the.__wrapped__ attribute. Consider this code: from functools import update_wrapper class foo_deco: def __init__(self, func): self._func = func update_wrapper(self, func) def __call__(self, *args, **kwargs): return self._func(*args, **kwargs) class bar_deco: def __init__(self, func): self._func = func update_wrapper(self, func) def __call__(self, *args, **kwargs): return self._func(*args, **kwargs) class Foo: @classmethod @foo_deco def bar_cm(self): pass @bar_deco @foo_deco def bar_bar(self): pass print(Foo.bar_cm.__wrapped__) # print(Foo.bar_bar.__wrapped__) # <__main__.foo_deco object at 0x7fb025445fd0> # The foo_deco object is available on bar_cm this way though print(Foo.__dict__['bar_cm'].__func__) # <__main__.foo_deco object at 0x7fb025445fa0> It would be more consistent if the fix that was applied to update_wrapper was ported to classmethod's construction (or classmethod could invoke update_wrapper directly, maybe). It's also worth noting that @staticmethod behaves the same and @property doesn't define a .__wrapped__ attribute. For @property, I don't know if this is by design or if it was just never ported, but I believe it would be a great addition just to be able to go down a decorator chain without having to special-case the code. 
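A small sketch of the kind of chain-walking the report refers to, using functools.wraps-decorated functions where __wrapped__ is set consistently; inspect.unwrap performs the same walk:

```
import functools
import inspect

def deco(func):
    @functools.wraps(func)          # sets wrapper.__wrapped__ = func
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@deco
@deco
def f():
    return 1

def innermost(obj):
    # Follow __wrapped__ one level at a time instead of special-casing decorators.
    while hasattr(obj, '__wrapped__'):
        obj = obj.__wrapped__
    return obj

print(innermost(f) is inspect.unwrap(f))   # True
```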
---------- components: Extension Modules messages: 399965 nosy: Thomas701 priority: normal severity: normal status: open title: @classmethod doesn't set __wrapped__ the same way as functool's update_wrapper type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 10:07:52 2021 From: report at bugs.python.org (Thomas Grainger) Date: Fri, 20 Aug 2021 14:07:52 +0000 Subject: [New-bugs-announce] [issue44962] asyncio.create_task weakrefset race condition Message-ID: <1629468472.03.0.485220244129.issue44962@roundup.psfhosted.org> New submission from Thomas Grainger : with the following demo script I can get a IndexError: pop from empty list ``` import itertools import asyncio import concurrent.futures import sys import threading threads = 200 def test_all_tasks_threading() -> None: async def foo() -> None: await asyncio.sleep(0) async def create_tasks() -> None: for i in range(1000): asyncio.create_task(foo()) await asyncio.sleep(0) results = [] with concurrent.futures.ThreadPoolExecutor(threads) as tpe: for f in concurrent.futures.as_completed( tpe.submit(asyncio.run, create_tasks()) for i in range(threads) ): results.append(f.result()) assert results == [None] * threads def main(): for i in itertools.count(): test_all_tasks_threading() print(f"worked {i}") return 0 if __name__ == "__main__": sys.exit(main()) ``` ``` worked 0 worked 1 worked 2 worked 3 worked 4 worked 5 worked 6 worked 7 worked 8 worked 9 worked 10 worked 11 worked 12 worked 13 worked 14 worked 15 worked 16 worked 17 worked 18 Traceback (most recent call last): File "/home/graingert/projects/asyncio-demo/demo.py", line 36, in sys.exit(main()) File "/home/graingert/projects/asyncio-demo/demo.py", line 30, in main test_all_tasks_threading() File "/home/graingert/projects/asyncio-demo/demo.py", line 24, in test_all_tasks_threading results.append(f.result()) File "/usr/lib/python3.9/concurrent/futures/_base.py", line 438, in result return self.__get_result() File "/usr/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result raise self._exception File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run result = self.fn(*self.args, **self.kwargs) File "/usr/lib/python3.9/asyncio/runners.py", line 48, in run loop.run_until_complete(loop.shutdown_asyncgens()) File "/usr/lib/python3.9/asyncio/base_events.py", line 621, in run_until_complete future = tasks.ensure_future(future, loop=self) File "/usr/lib/python3.9/asyncio/tasks.py", line 667, in ensure_future task = loop.create_task(coro_or_future) File "/usr/lib/python3.9/asyncio/base_events.py", line 433, in create_task task = tasks.Task(coro, loop=self, name=name) File "/usr/lib/python3.9/_weakrefset.py", line 84, in add self._commit_removals() File "/usr/lib/python3.9/_weakrefset.py", line 57, in _commit_removals discard(l.pop()) IndexError: pop from empty list sys:1: RuntimeWarning: coroutine 'BaseEventLoop.shutdown_asyncgens' was never awaited Task was destroyed but it is pending! 
task: > ``` here's a live demo on github actions: https://github.com/graingert/asyncio-backport/runs/3380502247#step:5:90 ---------- components: asyncio messages: 399969 nosy: asvetlov, graingert, yselivanov priority: normal severity: normal status: open title: asyncio.create_task weakrefset race condition versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 13:24:00 2021 From: report at bugs.python.org (Daniel Pope) Date: Fri, 20 Aug 2021 17:24:00 +0000 Subject: [New-bugs-announce] [issue44963] anext_awaitable is not a collections.abc.Generator Message-ID: <1629480240.94.0.904763304285.issue44963@roundup.psfhosted.org> New submission from Daniel Pope : The anext_awaitable object returned by anext(..., default) does not support .send()/.throw(). It only supports __next__(). So we can pass messages from the suspending coroutine to the event loop but not from the event loop to the suspending coroutine. trio and curio rely on both directions working. (I don't know about asyncio.) For example, this trio code fails:

import trio

async def produce():
    for v in range(3):
        await trio.sleep(1)
        yield v

async def consume():
    p = produce()
    while True:
        print(await anext(p, 'finished'))

trio.run(consume)

raising AttributeError: 'anext_awaitable' object has no attribute 'send'. I realise that any awaitable that wants to await another awaitable must return not an iterator from __await__() but something that implements the full PEP-342 generator protocol. Should PEP-492 section on __await__()[1] say something about that? [1] https://www.python.org/dev/peps/pep-0492/#await-expression ---------- components: Library (Lib) messages: 399982 nosy: lordmauve priority: normal severity: normal status: open title: anext_awaitable is not a collections.abc.Generator type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 14:07:40 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 20 Aug 2021 18:07:40 +0000 Subject: [New-bugs-announce] [issue44964] Semantics of PyCode_Addr2Line() changed Message-ID: <1629482860.18.0.0801179280266.issue44964@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : I have noticed that the semantics of PyCode_Addr2Line() have changed from 3.9 to 3.10. Technically, the function was called with: PyCode_Addr2Line(frame->f_code, frame->f_lasti) but now it needs to be called with: PyCode_Addr2Line(frame->f_code, frame->f_lasti * 2) This is likely going to break all users of this function. This is also not advertised in the 3.10 "how to port to Python 3.10" section. We should discuss what's the best approach here because technically this is a backwards incompatible change, although on the other hand PyCode_Addr2Line() was not documented previously so we may have some room. We need to decide on this ASAP, because there is only one extra release candidate before the actual release of 3.10. ---------- keywords: 3.10regression messages: 399985 nosy: Mark.Shannon, pablogsal priority: release blocker severity: normal status: open title: Semantics of PyCode_Addr2Line() changed versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 16:07:27 2021 From: report at bugs.python.org (Erlend E.
Aasland) Date: Fri, 20 Aug 2021 20:07:27 +0000 Subject: [New-bugs-announce] [issue44965] [sqlite3] early exit for non-DML statements in executemany() Message-ID: <1629490047.04.0.534577541585.issue44965@roundup.psfhosted.org> New submission from Erlend E. Aasland : Currently, if a non-DML statement is executed with executemany(), we only bail as late as possible: just before the call to _pysqlite_fetch_one_row(). This means that we've already stepped through the statement once (!), and possibly bound values, built the row cast map, and created the description tuple, all before raising the "executemany() can only execute DML statements." So, the error message currently is not quite true, because we already executed the statement once. Checking for this earlier will prevent a (possibly time-consuming) sqlite3_step(), and it will leave the main loop in _pysqlite_query_execute() slightly easier to read, IMO. ---------- components: Extension Modules messages: 399992 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: low severity: normal status: open title: [sqlite3] early exit for non-DML statements in executemany() type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 21:14:41 2021 From: report at bugs.python.org (=?utf-8?b?5p2o6Z2S?=) Date: Sat, 21 Aug 2021 01:14:41 +0000 Subject: [New-bugs-announce] [issue44966] example code does not macth the very version(3.9) Message-ID: <1629508481.91.0.0767549415848.issue44966@roundup.psfhosted.org> New submission from ?? : ?url?https://docs.python.org/3/tutorial/errors.html ?chapter?8.2. Exceptions ?origina example code? >>> '2' + 2 Traceback (most recent call last): File "", line 1, in TypeError: Can't convert 'int' object to str implicitly ?what i got in practice? >>> '2' + 2 Traceback (most recent call last): File "", line 1, in TypeError: can only concatenate str (not "int") to str ---------- assignee: docs at python components: Documentation messages: 400007 nosy: docs at python, yangqing priority: normal severity: normal status: open title: example code does not macth the very version(3.9) type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 20 23:22:27 2021 From: report at bugs.python.org (Gregory Anders) Date: Sat, 21 Aug 2021 03:22:27 +0000 Subject: [New-bugs-announce] [issue44967] pydoc should return non-zero exit code when a query is not found Message-ID: <1629516147.02.0.928809585196.issue44967@roundup.psfhosted.org> New submission from Gregory Anders : Currently pydoc returns an exit code of zero no matter what, even with e.g. pydoc lsjdfkdfj However, the ability to know whether or not pydoc successfully found a result is useful in tools that embed pydoc in some way. Here's one use case: Vim and Neovim have a feature that allows a user to run an external command for the keyword under the cursor (keywordprg). In Python files, this defaults to pydoc. In Neovim, we would like to automatically close the PTY buffers that we create for these processes when they finish without any errors, but if it returns a non-zero exit code we want to keep the PTY buffer open so the user can see what went wrong. Because pydoc returns immediately when it fails to find a match and does not indicate that it failed via a return code, the PTY buffer is closed immediately with no indication to the user that anything went wrong. 
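A hedged sketch of how a tool like the one described could consume the proposed exit code; today pydoc exits with 0 either way, so the check below only becomes useful once the change lands:

```
import subprocess
import sys

proc = subprocess.run([sys.executable, "-m", "pydoc", "lsjdfkdfj"])
if proc.returncode != 0:
    # Keep the terminal/PTY buffer open so the user can see what went wrong.
    print("pydoc reported failure; leaving the output visible")
```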
I have a patch prepared for this that I will link to the issue. ---------- components: Demos and Tools messages: 400012 nosy: gpanders priority: normal severity: normal status: open title: pydoc should return non-zero exit code when a query is not found type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 21 00:27:32 2021 From: report at bugs.python.org (Ryan Mast (nightlark)) Date: Sat, 21 Aug 2021 04:27:32 +0000 Subject: [New-bugs-announce] [issue44968] Fix/remove test_subprocess_wait_no_same_group from test_asyncio tests Message-ID: <1629520052.97.0.572214305384.issue44968@roundup.psfhosted.org> New submission from Ryan Mast (nightlark) : A deprecation made in bpo-41322 uncovered issues with test_subprocess_wait_no_same_group in test_asyncio that seems to have been broken for some time. Reverting to a similar structure prior to the refactoring in https://github.com/python/cpython/commit/658103f84ea860888f8dab9615281ea64fee31b9 using async/await avoids the deprecation error, though it still might not be running correctly. With the change I tried in https://github.com/python/cpython/commit/658103f84ea860888f8dab9615281ea64fee31b9 there is a message about an `unknown child process`, which makes me think there could be some issues with the subprocess exiting prior to the refactoring ~8 years ago. ---------- components: Tests messages: 400018 nosy: asvetlov, ezio.melotti, michael.foord, rbcollins, rmast, yselivanov priority: normal severity: normal status: open title: Fix/remove test_subprocess_wait_no_same_group from test_asyncio tests versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 21 03:41:02 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 21 Aug 2021 07:41:02 +0000 Subject: [New-bugs-announce] [issue44969] Subclassing of annotated types does not always work Message-ID: <1629531662.05.0.587324912826.issue44969@roundup.psfhosted.org> New submission from Serhiy Storchaka : It works only with simple types >>> class X(Annotated[list, 'annotation']): pass ... But not with type aliases >>> class X(Annotated[List[int], 'annotation']): pass ... Traceback (most recent call last): File "", line 1, in TypeError: _GenericAlias.__init__() takes 3 positional arguments but 4 were given >>> class X(Annotated[list[int], 'annotation']): pass ... Traceback (most recent call last): File "", line 1, in TypeError: GenericAlias expected 2 arguments, got 3 And even if the original type is not subclassable, the error message is not always clear: >>> class X(Annotated[Union[int, str], 'annotation']): pass ... Traceback (most recent call last): File "", line 1, in TypeError: _GenericAlias.__init__() takes 3 positional arguments but 4 were given >>> class X(Annotated[int | str, 'annotation']): pass ... 
Traceback (most recent call last): File "", line 1, in TypeError: _GenericAlias.__init__() takes 3 positional arguments but 4 were given ---------- components: Library (Lib) messages: 400021 nosy: gvanrossum, kj, serhiy.storchaka priority: normal severity: normal status: open title: Subclassing of annotated types does not always work type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 21 09:26:43 2021 From: report at bugs.python.org (Mark Dickinson) Date: Sat, 21 Aug 2021 13:26:43 +0000 Subject: [New-bugs-announce] [issue44970] Re-examine complex pow special case handling Message-ID: <1629552403.37.0.715300348368.issue44970@roundup.psfhosted.org> New submission from Mark Dickinson : Complex power, both via `**` and the built-in `pow`, and via `cmath.pow`, is currently a bit of a mess when it comes to special-case handling - particularly handling of signed zeros, infinities, NaNs, and overflow. At some point it would be nice to rationalise and document the special-case handling, as far as possible, and to make the behaviour of `**` and `pow` consistent with that of `cmath.pow`. Note that while for all the other cmath functions we have good guidance from the C standards on how special cases should be handled, for pow we're on our own - the C standard refuses to specify anything at all about special case handling. Note also that there are a *lot* of special cases to consider. We have four real input parameters (the real and imaginary parts of each of the base and the exponent), each of which can be one of the 7 cases nan, -inf, -finite, -0.0, 0.0, finite, inf, for a total of 7**4 = 2401 combinations; moreover, for some cases we might need to distinguish integral from non-integral values, and even integers from odd integers. This is low priority - in many years of mathematical, scientific and numeric work, I've seen little evidence that anyone actually cares about or uses general complex power. Most users are interested in one or more subcases, like: - positive real base and complex exponent - complex base and integral exponent - complex nth root for positive integers n, especially for small n (square root, cube root, ...) So a possibly more manageable and more useful subtask would be to ensure that special cases are handled in a sensible manner for these subcases. ---------- messages: 400025 nosy: mark.dickinson priority: low severity: normal status: open title: Re-examine complex pow special case handling versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 21 10:55:34 2021 From: report at bugs.python.org (Russell Crosser) Date: Sat, 21 Aug 2021 14:55:34 +0000 Subject: [New-bugs-announce] [issue44971] Named widget has NoneType after single line creation Message-ID: <1629557734.8.0.458190947867.issue44971@roundup.psfhosted.org> New submission from Russell Crosser : Declaring a widget in the following form: ... label2 = ttk.Label(root, text='Show2 Label').pack() ... leaves the widget with a NoneType, and unable to be assigned to (for instance to assign new text). If giving a widget a name, I expect to use it later in the program. This declaration works correctly: ... label2 = ttk.Label(root, text='Show2 Label') label2.pack() ... Simple tkinter program attached. Only tested with 3.9.6 on Win 10. 
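A minimal illustration of what the report describes: pack() returns None, so chaining it discards the widget reference (this is long-standing Tkinter behaviour rather than a crash):

```
import tkinter as tk
from tkinter import ttk

root = tk.Tk()

label2 = ttk.Label(root, text='Show2 Label').pack()
print(label2)                 # None -- the Label can no longer be reached via label2

label3 = ttk.Label(root, text='Show3 Label')
label3.pack()                 # keep the reference, then lay it out separately
label3.config(text='updated text')
```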
---------- components: Tkinter files: test_pack.py messages: 400032 nosy: rcrosser priority: normal severity: normal status: open title: Named widget has NoneType after single line creation type: crash versions: Python 3.9 Added file: https://bugs.python.org/file50228/test_pack.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 21 13:23:48 2021 From: report at bugs.python.org (Ryan Mast (nightlark)) Date: Sat, 21 Aug 2021 17:23:48 +0000 Subject: [New-bugs-announce] [issue44972] Add workflow_dispatch trigger for GitHub Actions jobs Message-ID: <1629566628.48.0.680378972885.issue44972@roundup.psfhosted.org> New submission from Ryan Mast (nightlark) : Adding a workflow_dispatch trigger for the GitHub Actions jobs makes it possible to run the GHA CI jobs for commits to branches in a fork without opening a "draft/WIP" PR to one of the main release branches. It also runs the SSL tests which normally get skipped for PRs. The main constraint is that ---------- components: Build messages: 400036 nosy: pablogsal, rmast, vstinner, zach.ware priority: normal severity: normal status: open title: Add workflow_dispatch trigger for GitHub Actions jobs type: enhancement versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 02:39:43 2021 From: report at bugs.python.org (Tushar Sadhwani) Date: Sun, 22 Aug 2021 06:39:43 +0000 Subject: [New-bugs-announce] [issue44973] @classmethod can be stacked on @property, but @staticmethod cannot Message-ID: <1629614383.81.0.292978789456.issue44973@roundup.psfhosted.org> New submission from Tushar Sadhwani : Starting with Python3.9, `@classmethod` can be stacked on top of `@property`, but it seems that `@staticmethod` cannot. >>> class C: ... @classmethod ... @property ... def cm(cls): ... return cls.__name__ ... @staticmethod ... @property ... def magic_number(): ... return 42 ... >>> C.cm 'C' >>> C.magic_number >>> This feels like inconsistent behaviour, plus, having staticmethod properties can be useful for creating read-only class attributes, for eg: class C: @staticmethod @property def FINE_STRUCTURE_CONSTANT(): return 1 / 137 This would make it hard to accidentally modify the constant inside the class. ---------- messages: 400051 nosy: tusharsadhwani priority: normal severity: normal status: open title: @classmethod can be stacked on @property, but @staticmethod cannot type: enhancement versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 04:16:21 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 22 Aug 2021 08:16:21 +0000 Subject: [New-bugs-announce] [issue44974] Warning about "Unknown child process pid" in test_asyncio Message-ID: <1629620181.19.0.253216813733.issue44974@roundup.psfhosted.org> New submission from Serhiy Storchaka : Ryan Mast reported about a warning about "Unknown child process pid" after finishing test_asyncio (see msg400018 and https://github.com/python/cpython/pull/27870#issuecomment-903072119 for details). I cannot reproduce it locally. Ryan, could you help with locating the source of this warning? First suspect is test_close_dont_kill_finished. If it is not, it may be other test calling kill(). 
---------- components: Tests, asyncio messages: 400057 nosy: asvetlov, rmast, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Warning about "Unknown child process pid" in test_asyncio type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 04:56:49 2021 From: report at bugs.python.org (Ken Jin) Date: Sun, 22 Aug 2021 08:56:49 +0000 Subject: [New-bugs-announce] [issue44975] [typing] Runtime protocols with ClassVar data members should support issubclass Message-ID: <1629622609.34.0.447069289698.issue44975@roundup.psfhosted.org> New submission from Ken Jin : This is a feature request by a user at https://github.com/python/typing/issues/822. A copy of their request: Currently issubclass cannot be used for runtime_checkable protocols with data members, because those attributes could be set in __init__. I propose to relax this restriction to allow protocols with ClassVar members, as those should be present in the class definition. I'm unsure if I need to update PEP 544 too. I don't remember if PEPs can be updated when 'Accepted', or was it 'Final' PEPs that can't be updated anymore? ---------- assignee: kj components: Library (Lib) messages: 400059 nosy: Jelle Zijlstra, gvanrossum, kj priority: normal severity: normal status: open title: [typing] Runtime protocols with ClassVar data members should support issubclass type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 05:06:30 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Sun, 22 Aug 2021 09:06:30 +0000 Subject: [New-bugs-announce] [issue44976] [sqlite3] lazy creation of result rows Message-ID: <1629623190.98.0.383271331638.issue44976@roundup.psfhosted.org> New submission from Erlend E. Aasland : Currently, we build the first result row in the _pysqlite_query_execute() loop if sqlite3_step() returned SQLITE_ROW. When the user asks for a row (for example, using sqlite3.Cursor.fetchone()), this pre-built row is returned, and the next row is prepared. Suggesting to lazily build result rows instead.
Pros:
- no result tuples are built unless sqlite3.Cursor.fetch*() is called
- no need to keep the next result row (tuple) in pysqlite_Cursor; rows are built on demand
- pysqlite_cursor_iternext() is vastly simplified (50% fewer lines of code)
- the main loop in _pysqlite_query_execute() is further simplified
Cons:
- code churn
git diff?main --shortstat: 2 files changed, 29 insertions(+), 58 deletions(-) ---------- components: Extension Modules messages: 400062 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: [sqlite3] lazy creation of result rows type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 05:17:57 2021 From: report at bugs.python.org (Mark Dickinson) Date: Sun, 22 Aug 2021 09:17:57 +0000 Subject: [New-bugs-announce] [issue44977] Deprecate delegation of int to __trunc__? Message-ID: <1629623877.06.0.26111591106.issue44977@roundup.psfhosted.org> New submission from Mark Dickinson : The int constructor, when applied to a general Python object `obj`, first looks for an __int__ method, then for an __index__ method, and then finally for a __trunc__ method.
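A short sketch of the lookup order just described, assuming current (3.10-era) behaviour: a class that defines only __trunc__ still supports int() through the final fallback.

```
class Truncatable:
    def __trunc__(self):
        return 42

# No __int__ or __index__ is defined, so int() falls back to __trunc__.
print(int(Truncatable()))   # 42
```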
The delegation to __trunc__ used to be useful: it meant that users could write a custom class SomeNumber with the property that:
- SomeNumber instances supported 'int' calls, returning a truncated value, but
- SomeNumber instances weren't usable in indexing, chr() calls, and all the various other calls that implicitly invoked __int__.

class SomeNumber:
    def __trunc__(self):

However, with Python >= 3.10, we no longer use __int__ implicitly for argument conversion in internal code. So the second point above is no longer a concern, and SomeNumber can now simply be written as

class SomeNumber:
    def __int__(self):

This decouples int from __trunc__ and leaves __trunc__ as simply the support for the math.trunc function. ---------- messages: 400063 nosy: mark.dickinson priority: normal severity: normal status: open title: Deprecate delegation of int to __trunc__? _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 06:45:03 2021 From: report at bugs.python.org (Mark Dickinson) Date: Sun, 22 Aug 2021 10:45:03 +0000 Subject: [New-bugs-announce] [issue44978] Argument Clinic should not exclude __complex__ methods Message-ID: <1629629103.05.0.72351239388.issue44978@roundup.psfhosted.org> New submission from Mark Dickinson : The argument clinic currently refuses to handle a __complex__ method. However, unlike __int__ and __float__, __complex__ should require no special handling by the argument clinic, since there's no dedicated slot for the __complex__ method. PR arriving shortly. ---------- components: Demos and Tools messages: 400066 nosy: mark.dickinson priority: normal severity: normal status: open title: Argument Clinic should not exclude __complex__ methods type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 11:07:54 2021 From: report at bugs.python.org (Kirill Pinchuk) Date: Sun, 22 Aug 2021 15:07:54 +0000 Subject: [New-bugs-announce] [issue44979] pathlib: support relative path construction Message-ID: <1629644874.28.0.329999889777.issue44979@roundup.psfhosted.org> New submission from Kirill Pinchuk : Hi. I've been using this snippet for years and believe that it would be a nice addition to pathlib's functionality. Basically, it allows constructing path relative to the current file (instead of cwd). Comes quite handy when you're working with deeply nested resources like file fixtures in tests and many other cases.
```
@classmethod
def relative(cls, path, depth=1):
    """
    Return path that is constructed relatively to caller file.
    """
    base = Path(sys._getframe(depth).f_code.co_filename).parent
    return (base / path).resolve()
```
---------- components: Library (Lib) messages: 400075 nosy: cybergrind priority: normal severity: normal status: open title: pathlib: support relative path construction type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 14:07:45 2021 From: report at bugs.python.org (Andrei Kulakov) Date: Sun, 22 Aug 2021 18:07:45 +0000 Subject: [New-bugs-announce] [issue44980] Clean up a few tests that return a value!=None Message-ID: <1629655665.44.0.0754665054148.issue44980@roundup.psfhosted.org> New submission from Andrei Kulakov : In #41322 the behavior of returning a value!=None from test methods was deprecated, there are currently a few tests in Python that do that; it would be good to fix them to be consistent with our deprecation requirement and to avoid deprecation warnings in test runs; it may also possibly surface unexpected issues when reviewing these tests.
- there are two distutils tests - test_quiet and test_no_optimize_flag -- probably not worth it to investigate them as distutils is set for removal in 3.12
- test_null_strings in CAPI
- test_constructor in test_code
I'll try to fix the two tests above today. ---------- components: Tests messages: 400083 nosy: andrei.avk priority: normal severity: normal status: open title: Clean up a few tests that return a value!=None _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 22 16:55:35 2021 From: report at bugs.python.org (Kolen Cheung) Date: Sun, 22 Aug 2021 20:55:35 +0000 Subject: [New-bugs-announce] [issue44981] `module has no attribute` when `__all__` includes certain unicode characters Message-ID: <1629665735.18.0.342540973812.issue44981@roundup.psfhosted.org> New submission from Kolen Cheung : With Python 3.9.6 on macOS, In a file all_bug.py,
```py
__all__ = ("?",)
? = "?"
```
Then run `from all_bug import *`, resulted in AttributeError: module 'all_bug' has no attribute '?' This happens with some other unicode characters as well, but not all. I can provide them if needed. Removing the `__all__` line will successfully import ? and be used. ---------- messages: 400106 nosy: christian.kolen priority: normal severity: normal status: open title: `module has no attribute` when `__all__` includes certain unicode characters type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 09:27:53 2021 From: report at bugs.python.org (=?utf-8?q?Filipe_La=C3=ADns?=) Date: Mon, 23 Aug 2021 13:27:53 +0000 Subject: [New-bugs-announce] [issue44982] Add vendor information Message-ID: <1629725273.33.0.352265860161.issue44982@roundup.psfhosted.org> New submission from Filipe Laíns : In the effort of making the UX better with vendored Python versions, I think it would make sense to track and expose vendor information. Initially, the vendor information would be comprised of two fields, the vendor string (eg. `Debian`) and the vendor name (eg. `debian`). If specified, it would change the interpreter/installation in the following ways: - The vendor string would be shown in places like the IDLE shell (eg.
[1]) - The vendor name would be added to the installation paths (/usr/lib/python3.9 would become /usr/lib/python3.9-debian) - This would include scripts, so the interpreter would be called python3-debian, the vendors can then rename or symlink it to python3 if they want to have that be the default Python on the system Additionally, I think we should add two new functions to the platform module, platform.vendor() and platform.vendor_name(). Overall, I think this would help out users identify the Python installation and avoid clashes between Python installations, even allowing parallel installations. If I remember everything correctly, this should fix Matthias issues with bpo-43976. Matthias, could you confirm? Any thoughts? [1] new IDLE shell output Debian Python 3.9.6 (default, Jun 30 2021, 10:22:16) [GCC 11.1.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> ---------- messages: 400135 nosy: FFY00, christian.heimes, doko, jaraco, steve.dower, willingc priority: normal severity: normal status: open title: Add vendor information type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 09:32:50 2021 From: report at bugs.python.org (Takuo Matsuoka) Date: Mon, 23 Aug 2021 13:32:50 +0000 Subject: [New-bugs-announce] [issue44983] Wrong definition of a starred expression in the Language Reference Message-ID: <1629725570.26.0.70854073558.issue44983@roundup.psfhosted.org> New submission from Takuo Matsuoka : Being unaware of the processes here, I have posted the issue to the python-idea mailing list. Please refer to it. https://mail.python.org/archives/list/python-ideas at python.org/message/TCWYZIIRZWIR7CDJWDAUBCAMU2CBFB3Y/ Thank you. ---------- assignee: docs at python components: Documentation messages: 400136 nosy: Takuo Matsuoka, docs at python priority: normal severity: normal status: open title: Wrong definition of a starred expression in the Language Reference type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 10:46:11 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 23 Aug 2021 14:46:11 +0000 Subject: [New-bugs-announce] [issue44984] Rewrite test_null_strings in _testcapi Message-ID: <1629729971.91.0.181277754556.issue44984@roundup.psfhosted.org> New submission from Serhiy Storchaka : test_null_strings in Modules/_testcapimodule.c was initially added in 7580146b5c7025976f0907a9893e01dc3d3d3457 for testing PyObject_Str(NULL) and PyObject_Unicode(NULL). PyObject_Unicode() was removed in 3.0, so now the test calls PyObject_Str(NULL) twice that does not make sense. On other hand, PyObject_Bytes(NULL) and PyObject_Repr(NULL) are not tested. Additionally, there are now problems with unittest tests returning non-None. So this test should be completely rewritten. 
---------- components: Tests messages: 400139 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Rewrite test_null_strings in _testcapi type: enhancement versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 12:06:48 2021 From: report at bugs.python.org (Mehrzad) Date: Mon, 23 Aug 2021 16:06:48 +0000 Subject: [New-bugs-announce] [issue44985] Inconsistent returned value of inspect.getfullargspec(object.__init__). Message-ID: <1629734808.74.0.392472113635.issue44985@roundup.psfhosted.org> New submission from Mehrzad : The inspection `inspect.getfullargspec(object.__init__)` shows that `object.__init__` takes both varargs (starred) and varkw (double-starred) arguments.* However, it is impossible to call `object.__init__` with varargs or varkw arguments. If one tries to call `object.__init__(SomeClass(), ...)` with either of those arguments, the following error is raised: `TypeError: SomeClass.__init__() takes exactly one argument (the instance to initialize)`. This error is not raised if `SomeClass()` is replaced with some literal, e.g. a number. * I can not certify whether it is intended behavior or a bug, because the signature of `obj.__init__` takes those arguments. ---------- components: Distutils, Interpreter Core, Parser files: object_init.py messages: 400144 nosy: Mehrzad, dstufft, eric.araujo, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Inconsistent returned value of inspect.getfullargspec(object.__init__). type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file50229/object_init.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 19:22:20 2021 From: report at bugs.python.org (=?utf-8?q?Mat=C3=ADas_Senger?=) Date: Mon, 23 Aug 2021 23:22:20 +0000 Subject: [New-bugs-announce] [issue44986] Date formats in help messages of argparse Message-ID: <1629760940.67.0.392348377667.issue44986@roundup.psfhosted.org> New submission from Mat?as Senger : If the help message of an argument in argparse contains a date format, e.g. %Y-%m-%d, it crashes when printing the help after being invoked with the -h option. Uploaded an example. ---------- components: Library (Lib) files: deleteme.py messages: 400183 nosy: mail.de.senger priority: normal severity: normal status: open title: Date formats in help messages of argparse type: crash versions: Python 3.11 Added file: https://bugs.python.org/file50232/deleteme.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 22:07:52 2021 From: report at bugs.python.org (Steven D'Aprano) Date: Tue, 24 Aug 2021 02:07:52 +0000 Subject: [New-bugs-announce] [issue44987] Speed up unicode normalization of ASCII strings Message-ID: <1629770872.82.0.815585920926.issue44987@roundup.psfhosted.org> New submission from Steven D'Aprano : I think there is an opportunity to speed up some unicode normalisations significantly. 
In 3.9 at least, the normalisation appears to be dependent on the length of the string: >>> setup="from unicodedata import normalize; s = 'reverse'" >>> t1 = Timer('normalize("NFKC", s)', setup=setup) >>> setup="from unicodedata import normalize; s = 'reverse'*1000" >>> t2 = Timer('normalize("NFKC", s)', setup=setup) >>> >>> min(t1.repeat(repeat=7)) 0.04854234401136637 >>> min(t2.repeat(repeat=7)) 9.98313440399943 But ASCII strings are always in normalised form, for all four normalisation forms. In CPython, with PEP 393 (Flexible String Representation), it should be a constant-time operation to detect whether a string is pure ASCII, and avoid scanning the string or attempting the normalisation. ---------- components: Unicode messages: 400192 nosy: ezio.melotti, steven.daprano, vstinner priority: normal severity: normal status: open title: Speed up unicode normalization of ASCII strings type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 23:21:13 2021 From: report at bugs.python.org (=?utf-8?b?5byg5bO76ZOt?=) Date: Tue, 24 Aug 2021 03:21:13 +0000 Subject: [New-bugs-announce] [issue44988] Use the newest tcl/tk support Message-ID: <1629775273.4.0.869869625405.issue44988@roundup.psfhosted.org> New submission from ??? <3180471716 at qq.com>: The newest tcl/tk(8.7) has been released. If python uses the newest tcl/tk, tkinter will be better in these respects: 1. progressbar will be added text on it. 2. the scrollbar, text and canvas will be moved more smoothly. 3. tcl/tk8.7 includes tk_sysnotify and tk_systray, which provide users with a modern way match OS to show the messages. Therefore, I suggest that python should use the newest tcl to bring new feature to Python GUI. ---------- components: Tkinter messages: 400195 nosy: smart-space priority: normal severity: normal status: open title: Use the newest tcl/tk support type: enhancement versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 23 23:33:20 2021 From: report at bugs.python.org (Steven D'Aprano) Date: Tue, 24 Aug 2021 03:33:20 +0000 Subject: [New-bugs-announce] [issue44989] Fix documentation for truth testing Message-ID: <1629776000.21.0.00803055274817.issue44989@roundup.psfhosted.org> New submission from Steven D'Aprano : Truth testing states that "Any object can be tested for truth value" but from 3.9 onwards, doing so with NotImplemented is deprecated and will be made a TypeError. https://docs.python.org/3/library/stdtypes.html#truth-value-testing It is also not true for third-party objects such as numpy arrays (which raise ValueError) and pandas dataframes. I think that truth testing should have been considered a fundamental operation that (in the absence of bugs) always succeeds, but #35712 says different. Not that I'm bitter *wink* In any case, at the very least the exception for NotImplemented should be documented. 
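For reference, a minimal sketch of the behaviour in question on 3.9 and later: truth-testing NotImplemented still evaluates to True, but it emits a DeprecationWarning (third-party types such as numpy arrays may instead raise, e.g. ValueError).
```
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = bool(NotImplemented)   # deprecated, but still evaluates

print(result)               # True
print(caught[0].category)   # <class 'DeprecationWarning'>
```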
---------- assignee: docs at python components: Documentation messages: 400196 nosy: docs at python, steven.daprano priority: normal severity: normal status: open title: Fix documentation for truth testing versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 24 06:18:35 2021 From: report at bugs.python.org (Mark Shannon) Date: Tue, 24 Aug 2021 10:18:35 +0000 Subject: [New-bugs-announce] [issue44990] Change layout of frames back to specials-locals-stack (from locals-specials-stack) Message-ID: <1629800315.67.0.0305853602791.issue44990@roundup.psfhosted.org> New submission from Mark Shannon : The two plausible layouts from evaluation stack frames are described here: https://github.com/faster-cpython/ideas/issues/31#issuecomment-844263795 We opted for layout A, although it is a bit more complex to manage and slightly more expensive in terms of pointers. The reason for this was that it theoretically allows zero-copying Python-to-Python calls. I now believe this was the wrong decision and we should have chosen layout B. B is cheaper. It needs 2 pointers, not 3, meaning that there is another register available for use in the interpreter. Also the linkage area doesn't need the nlocalsplus field. The benefit of zero-copy calls is much smaller than I thought: * Any calls from a generator functions do not benefit * An additional check is needed to make sure that both frames are in the same stack chunk * Any jitted code will keep stack values in registers, so stores will still be needed in either case. * The average number of arguments copied is low (typically 2 or 3). Even in the ideal case (interpreter, no generator, same stack chunk) changing to layout B will cost 2/3 memory moves (independent of each other), but will gain us extra code for checking chunks, and one move (moving nlocalsplus). So at best we only save 1/2 moves. In other cases layout B is better. One final improvement to layout B: saving the stackdepth as an offset from locals[0] not from stack[0] further speeds frame handling. ---------- assignee: Mark.Shannon messages: 400202 nosy: Mark.Shannon, pablogsal priority: normal severity: normal status: open title: Change layout of frames back to specials-locals-stack (from locals-specials-stack) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 24 09:07:12 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Tue, 24 Aug 2021 13:07:12 +0000 Subject: [New-bugs-announce] [issue44991] [sqlite3] cleanup GIL handling Message-ID: <1629810432.44.0.0891439205465.issue44991@roundup.psfhosted.org> New submission from Erlend E. Aasland : Quoting msg400205 by Petr in bpo-42064: I think the module could use a more comprehensive review for GIL handling, rather than doing it piecewise in individual PRs. 
I recommend that any function passed to SQLite (and only those) should - be named `*_callback`, for clarity - acquire the GIL at the very start - release the GIL at the very end ---------- assignee: erlendaasland components: Extension Modules messages: 400207 nosy: erlendaasland, petr.viktorin priority: normal severity: normal status: open title: [sqlite3] cleanup GIL handling type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 24 09:36:59 2021 From: report at bugs.python.org (Brian Lee) Date: Tue, 24 Aug 2021 13:36:59 +0000 Subject: [New-bugs-announce] [issue44992] functools.lru_cache does not consider strings and numpy strings as equivalent Message-ID: <1629812219.83.0.183087595287.issue44992@roundup.psfhosted.org> New submission from Brian Lee : This seems like unexpected behavior: Two keys that are equal and have equal hashes should yield cache hits, but they do not. Python 3.9.6 (default, Aug 18 2021, 19:38:01) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import functools >>> >>> import numpy as np >>> >>> @functools.lru_cache(maxsize=None) ... def f(x): ... return x ... >>> py_str = 'hello world' >>> np_str = np.str_(py_str) >>> >>> assert py_str == np_str >>> assert hash(py_str) == hash(np_str) >>> >>> assert f.cache_info().currsize == 0 >>> f(py_str) 'hello world' >>> assert f.cache_info().currsize == 1 >>> f(np_str) 'hello world' >>> assert f.cache_info().currsize == 2 >>> print(f.cache_info()) CacheInfo(hits=0, misses=2, maxsize=None, currsize=2) ---------- components: Library (Lib) messages: 400209 nosy: brilee priority: normal severity: normal status: open title: functools.lru_cache does not consider strings and numpy strings as equivalent versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 24 10:19:43 2021 From: report at bugs.python.org (David Rebbe) Date: Tue, 24 Aug 2021 14:19:43 +0000 Subject: [New-bugs-announce] [issue44993] enum.auto() starts with one instead of zero Message-ID: <1629814783.31.0.585894105546.issue44993@roundup.psfhosted.org> New submission from David Rebbe : enum.auto() By default, the initial value starts at 1. Per the documentation here: https://docs.python.org/3/library/enum.html#enum.auto This doesn't really follow expected behavior in majority of programming languages nor python. Most will expect starting value to be zero. I personally skipped over this as I've never seen an enum start at 1 in any language before. Excuse my ignorance if this is more common place then I realize. 
I propose an optional argument to the class to allow different starting values: enum.auto(0) ---------- messages: 400210 nosy: David Rebbe2 priority: normal severity: normal status: open title: enum.auto() starts with one instead of zero versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 24 16:32:18 2021 From: report at bugs.python.org (Julian Berman) Date: Tue, 24 Aug 2021 20:32:18 +0000 Subject: [New-bugs-announce] [issue44994] datetime's C implementation verifies fromisoformat is ASCII, but the pure python implementation does not Message-ID: <1629837138.94.0.0862501406939.issue44994@roundup.psfhosted.org> New submission from Julian Berman : This line (which contains a non-ASCII digit): python3.9 -c "import datetime; datetime.date.fromisoformat('1963-06-1?')" raises: Traceback (most recent call last): File "", line 1, in ValueError: Invalid isoformat string: '1963-06-1?' under the C implementation of the datetime module, but when the pure Python implementation is the one imported, succeeds (and produces `datetime.date(1963, 6, 14)`) The pure Python implementation should instead explicitly check and raise when encountering a non-ASCII string. (On PyPy, which always uses the pure-Python implementation, this contributes to a behavioral difference) ---------- components: Library (Lib) messages: 400235 nosy: Julian, p-ganssle priority: normal severity: normal status: open title: datetime's C implementation verifies fromisoformat is ASCII, but the pure python implementation does not type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 24 22:35:15 2021 From: report at bugs.python.org (=?utf-8?b?5p2o6Z2S?=) Date: Wed, 25 Aug 2021 02:35:15 +0000 Subject: [New-bugs-announce] [issue44995] "Hide the prompts and output" works abnormal Message-ID: <1629858915.39.0.643058961708.issue44995@roundup.psfhosted.org> New submission from ?? : ?url?https://docs.python.org/3/tutorial/classes.html ?chapter?9.4. Random Remarks ?problem description? When I click on the demo "Hide the prompts and output" switch, the class definition statements were also hided. Please take a look as the appended screenshot. ---------- assignee: docs at python components: Documentation files: screenshot.png messages: 400245 nosy: docs at python, yangqing priority: normal severity: normal status: open title: "Hide the prompts and output" works abnormal versions: Python 3.9 Added file: https://bugs.python.org/file50235/screenshot.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 05:55:33 2021 From: report at bugs.python.org (Nils) Date: Wed, 25 Aug 2021 09:55:33 +0000 Subject: [New-bugs-announce] [issue44996] tarfile missing TarInfo.offset_data member in documentation Message-ID: <1629885333.99.0.21899352511.issue44996@roundup.psfhosted.org> New submission from Nils : The title says it all: `TarInfo` objects are missing their `offset_data` member in all documentation versions. 
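For context, a sketch of how the (currently undocumented) attribute is typically used; "example.tar" is a placeholder name and the pattern only makes sense for an uncompressed archive, where offset_data is a byte offset into the file itself:
```
import tarfile

archive = "example.tar"              # placeholder; must be an uncompressed tar
with tarfile.open(archive) as tf:
    member = tf.getmembers()[0]
    with open(archive, "rb") as raw:
        raw.seek(member.offset_data)       # start of this member's file data
        payload = raw.read(member.size)    # raw bytes of the member
```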
---------- assignee: docs at python components: Documentation messages: 400248 nosy: docs at python, nilsnolde priority: normal severity: normal status: open title: tarfile missing TarInfo.offset_data member in documentation versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 06:03:29 2021 From: report at bugs.python.org (=?utf-8?b?6YOR5LmL5Li6?=) Date: Wed, 25 Aug 2021 10:03:29 +0000 Subject: [New-bugs-announce] [issue44997] _sqlite3 extention failed to build Message-ID: <1629885809.74.0.919600185255.issue44997@roundup.psfhosted.org> New submission from ??? : building '_sqlite3' extension creating build/temp.macosx-11.5-universal2-3.11/Users/jett/cpython/Modules/_sqlite gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -arch x86_64 -fno-semantic-interposition -flto -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -fprofile-instr-generate -I./Include/internal -IModules/_sqlite -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I./Include -I. -I/usr/local/include -I/Users/jett/cpython/Include -I/Users/jett/cpython -c /Users/jett/cpython/Modules/_sqlite/connection.c -o build/temp.macosx-11.5-universal2-3.11/Users/jett/cpython/Modules/_sqlite/connection.o /Users/jett/cpython/Modules/_sqlite/connection.c:1179:10: error: implicit declaration of function 'sqlite3_enable_load_extension' is invalid in C99 [-Werror,-Wimplicit-function-declaration] rc = sqlite3_enable_load_extension(self->db, onoff); ^ /Users/jett/cpython/Modules/_sqlite/connection.c:1215:10: error: implicit declaration of function 'sqlite3_load_extension' is invalid in C99 [-Werror,-Wimplicit-function-declaration] rc = sqlite3_load_extension(self->db, extension_name, 0, &errmsg); ^ /Users/jett/cpython/Modules/_sqlite/connection.c:1215:10: note: did you mean 'sqlite3_auto_extension'? /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sqlite3.h:6551:16: note: 'sqlite3_auto_extension' declared here SQLITE_API int sqlite3_auto_extension(void(*xEntryPoint)(void)); ^ 2 errors generated. This is the error message from clang on macOS 11.5.1 configure options: ./configure --prefix=/Users/jett/python311 --enable-optimizations --enable-loadable-sqlite-extensions --enable-ipv6 --enable-big-digits=30 --with-lto=full --with-experimental-isolated-subinterpreters --with-static-libpython --enable-universalsdk --with-universal-archs=universal2 ---------- components: Build messages: 400249 nosy: jett8998 priority: normal severity: normal status: open title: _sqlite3 extention failed to build type: compile error versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 06:09:33 2021 From: report at bugs.python.org (=?utf-8?b?6YOR5LmL5Li6?=) Date: Wed, 25 Aug 2021 10:09:33 +0000 Subject: [New-bugs-announce] [issue44998] tests failed Message-ID: <1629886173.59.0.970748469767.issue44998@roundup.psfhosted.org> New submission from ??? 
: running build_scripts creating build/scripts-3.11 copying and adjusting /Users/jett/cpython/Tools/scripts/pydoc3 -> build/scripts-3.11 copying and adjusting /Users/jett/cpython/Tools/scripts/idle3 -> build/scripts-3.11 copying and adjusting /Users/jett/cpython/Tools/scripts/2to3 -> build/scripts-3.11 changing mode of build/scripts-3.11/pydoc3 from 644 to 755 changing mode of build/scripts-3.11/idle3 from 644 to 755 changing mode of build/scripts-3.11/2to3 from 644 to 755 renaming build/scripts-3.11/pydoc3 to build/scripts-3.11/pydoc3.11 renaming build/scripts-3.11/idle3 to build/scripts-3.11/idle3.11 renaming build/scripts-3.11/2to3 to build/scripts-3.11/2to3-3.11 touch profile-gen-stamp # Next, run the profile task to generate the profile information. /Library/Developer/CommandLineTools/usr/bin/make run_profile_task LLVM_PROFILE_FILE="code-%p.profclangr" ./python.exe -m test --pgo --timeout=1200 || true 0:00:00 load avg: 5.31 Run tests sequentially (timeout: 20 min) 0:00:00 load avg: 5.31 [ 1/44] test_array 0:00:00 load avg: 5.31 [ 2/44] test_base64 0:00:00 load avg: 5.31 [ 3/44] test_binascii -- test_base64 failed (env changed) 0:00:00 load avg: 5.31 [ 4/44] test_binop 0:00:00 load avg: 5.31 [ 5/44] test_bisect 0:00:00 load avg: 5.31 [ 6/44] test_bytes 0:00:02 load avg: 5.31 [ 7/44] test_bz2 -- test_bytes failed (env changed) 0:00:02 load avg: 5.31 [ 8/44] test_cmath 0:00:02 load avg: 5.31 [ 9/44] test_codecs 0:00:03 load avg: 5.31 [10/44] test_collections 0:00:03 load avg: 5.31 [11/44] test_complex 0:00:04 load avg: 5.31 [12/44] test_dataclasses 0:00:04 load avg: 5.31 [13/44] test_datetime 0:00:07 load avg: 4.97 [14/44] test_decimal -------------------------------------------------------------------------------------------------- NOTICE -------------------------------------------------------------------------------------------------- test_decimal may generate "malloc can't allocate region" warnings on macOS systems. This behavior is known. Do not report a bug unless tests are also failing. See bpo-40928. 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ python.exe(34042,0x100a27d40) malloc: can't allocate region :*** mach_vm_map(size=842105263157903360, flags: 100) failed (error code=3) python.exe(34042,0x100a27d40) malloc: *** set a breakpoint in malloc_error_break to debug python.exe(34042,0x100a27d40) malloc: can't allocate region :*** mach_vm_map(size=842105263157903360, flags: 100) failed (error code=3) python.exe(34042,0x100a27d40) malloc: *** set a breakpoint in malloc_error_break to debug python.exe(34042,0x100a27d40) malloc: can't allocate region :*** mach_vm_map(size=421052631578951680, flags: 100) failed (error code=3) python.exe(34042,0x100a27d40) malloc: *** set a breakpoint in malloc_error_break to debug python.exe(34042,0x100a27d40) malloc: can't allocate region :*** mach_vm_map(size=421052631578951680, flags: 100) failed (error code=3) python.exe(34042,0x100a27d40) malloc: *** set a breakpoint in malloc_error_break to debug 0:00:09 load avg: 4.73 [15/44] test_difflib 0:00:10 load avg: 4.73 [16/44] test_embed test test_embed failed 0:00:12 load avg: 4.73 [17/44] test_float -- test_embed failed (33 failures) 0:00:12 load avg: 4.73 [18/44] test_fstring 0:00:13 load avg: 4.73 [19/44] test_functools 0:00:13 load avg: 4.73 [20/44] test_generators 0:00:13 load avg: 4.73 [21/44] test_hashlib 0:00:13 load avg: 4.73 [22/44] test_heapq 0:00:14 load avg: 4.73 [23/44] test_int 0:00:14 load avg: 4.59 [24/44] test_itertools 0:00:16 load avg: 4.59 [25/44] test_json 0:00:17 load avg: 4.59 [26/44] test_long -- test_json failed (env changed) 0:00:19 load avg: 4.59 [27/44] test_lzma 0:00:19 load avg: 4.59 [28/44] test_math -- test_lzma skipped 0:00:20 load avg: 4.30 [29/44] test_memoryview 0:00:21 load avg: 4.30 [30/44] test_operator 0:00:21 load avg: 4.30 [31/44] test_ordered_dict 0:00:21 load avg: 4.30 [32/44] test_patma 0:00:22 load avg: 4.30 [33/44] test_pickle 0:00:24 load avg: 4.04 [34/44] test_pprint 0:00:24 load avg: 4.04 [35/44] test_re 0:00:25 load avg: 4.04 [36/44] test_set 0:00:26 load avg: 4.04 [37/44] test_sqlite 0:00:26 load avg: 4.04 [38/44] test_statistics -- test_sqlite skipped 0:00:27 load avg: 4.04 [39/44] test_struct 0:00:28 load avg: 4.04 [40/44] test_tabnanny -- test_struct failed (env changed) 0:00:28 load avg: 4.04 [41/44] test_time -- test_tabnanny failed (env changed) 0:00:30 load avg: 3.88 [42/44] test_unicode 0:00:31 load avg: 3.88 [43/44] test_xml_etree -- test_unicode failed (env changed) 0:00:31 load avg: 3.88 [44/44] test_xml_etree_c Total duration: 32.5 sec Tests result: FAILURE /Library/Developer/CommandLineTools/usr/bin/make build_all_merge_profile /usr/local/bin/llvm-profdata merge -output=code.profclangd *.profclangr warning: code-34042.profclangr: unsupported instrumentation profile format version warning: code-34080.profclangr: unsupported instrumentation profile format version warning: code-34092.profclangr: unsupported instrumentation profile format version warning: code-34100.profclangr: unsupported instrumentation profile format version warning: code-34109.profclangr: unsupported instrumentation profile format version warning: code-34120.profclangr: unsupported instrumentation profile format version warning: code-34128.profclangr: unsupported instrumentation profile format version warning: code-34076.profclangr: unsupported instrumentation profile format version warning: code-34084.profclangr: 
unsupported instrumentation profile format version warning: code-34096.profclangr: unsupported instrumentation profile format version warning: code-34104.profclangr: unsupported instrumentation profile format version warning: code-34114.profclangr: unsupported instrumentation profile format version warning: code-34124.profclangr: unsupported instrumentation profile format version warning: code-34074.profclangr: unsupported instrumentation profile format version warning: code-34082.profclangr: unsupported instrumentation profile format version warning: code-34094.profclangr: unsupported instrumentation profile format version warning: code-34102.profclangr: unsupported instrumentation profile format version warning: code-34112.profclangr: unsupported instrumentation profile format version warning: code-34122.profclangr: unsupported instrumentation profile format version warning: code-34078.profclangr: unsupported instrumentation profile format version warning: code-34087.profclangr: unsupported instrumentation profile format version warning: code-34098.profclangr: unsupported instrumentation profile format version warning: code-34106.profclangr: unsupported instrumentation profile format version warning: code-34116.profclangr: unsupported instrumentation profile format version warning: code-34126.profclangr: unsupported instrumentation profile format version warning: code-34073.profclangr: unsupported instrumentation profile format version warning: code-34081.profclangr: unsupported instrumentation profile format version warning: code-34093.profclangr: unsupported instrumentation profile format version warning: code-34101.profclangr: unsupported instrumentation profile format version warning: code-34110.profclangr: unsupported instrumentation profile format version warning: code-34121.profclangr: unsupported instrumentation profile format version warning: code-34077.profclangr: unsupported instrumentation profile format version warning: code-34085.profclangr: unsupported instrumentation profile format version warning: code-34097.profclangr: unsupported instrumentation profile format version warning: code-34105.profclangr: unsupported instrumentation profile format version warning: code-34125.profclangr: unsupported instrumentation profile format version warning: code-34115.profclangr: unsupported instrumentation profile format version warning: code-34075.profclangr: unsupported instrumentation profile format version warning: code-34083.profclangr: unsupported instrumentation profile format version warning: code-34095.profclangr: unsupported instrumentation profile format version warning: code-34103.profclangr: unsupported instrumentation profile format version warning: code-34113.profclangr: unsupported instrumentation profile format version warning: code-34123.profclangr: unsupported instrumentation profile format version warning: code-34079.profclangr: unsupported instrumentation profile format version warning: code-34088.profclangr: unsupported instrumentation profile format version warning: code-34099.profclangr: unsupported instrumentation profile format version warning: code-34107.profclangr: unsupported instrumentation profile format version warning: code-34117.profclangr: unsupported instrumentation profile format version warning: code-34127.profclangr: unsupported instrumentation profile format version error: no profile can be merged make[1]: *** [build_all_merge_profile] Error 1 make: *** [profile-run-stamp] Error 2 this is the whole log, configure options: ./configure 
--prefix=/Users/jett/python311 --enable-optimizations --enable-loadable-sqlite-extensions --enable-ipv6 --enable-big-digits=30 --with-lto=full --with-experimental-isolated-subinterpreters --with-static-libpython --enable-universalsdk --with-universal-archs=universal2 ---------- components: Build messages: 400250 nosy: jett8998 priority: normal severity: normal status: open title: tests failed type: compile error versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 06:14:59 2021 From: report at bugs.python.org (santhosh) Date: Wed, 25 Aug 2021 10:14:59 +0000 Subject: [New-bugs-announce] [issue44999] Argparse missing translates Message-ID: <1629886499.18.0.598632982374.issue44999@roundup.psfhosted.org> New submission from santhosh : Dear all, There are a few strings in the `argparse` module which are not translatable through the `gettext` API. Some have already been reported: - the "--version" help text at Lib/argparse.py:1105 (reported in issue 16786, fixed by PR 12711); - the "default" help text at Lib/argparse.py:697 (reported in 33775, fixed by PR 12711). However, some others remain: - the "default" help text for `BooleanOptionalAction` at Lib/argparse.py:878 (which, incidentally, will be duplicated when used with `ArgumentDefaultsHelpFormatter`); - the "argument %(argument_name)s: %(message)s" error message at Lib/argparse.py:751; - the formatted section heading at Lib/argparse.py:225: if the heading itself is translatable, the string "%(heading)s:" is not. More precisely, the colon right after the heading might also require localization, as some languages (e.g., French) typeset colons with a preceding non-breaking space (i.e., "%(heading)s :"). (Okay, I'll admit that this is nitpicking!) I'll submit a pull request with proposed fixes for these strings. Kind regards, Santhosh ---------- components: Parser messages: 400251 nosy: lys.nikolaou, pablogsal, santhu_reddy12 priority: normal severity: normal status: open title: Argparse missing translates type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 07:50:39 2021 From: report at bugs.python.org (Andre Roberge) Date: Wed, 25 Aug 2021 11:50:39 +0000 Subject: [New-bugs-announce] [issue45000] del __debug__ should be a SyntaxError Message-ID: <1629892239.06.0.814247211414.issue45000@roundup.psfhosted.org> New submission from Andre Roberge : Consider the following: Python 3.10.0rc1 ... >>> __debug__ True >>> del __debug__ Traceback (most recent call last): File "", line 1, in NameError: name '__debug__' is not defined >>> __debug__ True >>> __debug__ = False File "", line 1 SyntaxError: cannot assign to __debug__ I suggest that attempting to delete __debug__ should be a SyntaxError, similar to attempting to delete None and other constants. >>> del None File "", line 1 del None ^^^^ SyntaxError: cannot delete None = = = The same NameError exception is raised for Python 3.9 when attempting to delete __debug__. For Python 3.8, attempting to delete __debug__ silently fails, so the current behaviour is at least an improvement. 
---------- components: Parser messages: 400256 nosy: aroberge, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: del __debug__ should be a SyntaxError type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 09:23:27 2021 From: report at bugs.python.org (wouter bolsterlee) Date: Wed, 25 Aug 2021 13:23:27 +0000 Subject: [New-bugs-announce] [issue45001] Date parsing helpers in email module incorrectly raise IndexError for some malformed inputs Message-ID: <1629897807.12.0.728438817983.issue45001@roundup.psfhosted.org> New submission from wouter bolsterlee : Various date parsing utilities in the email module, such as email.utils.parsedate(), are supposed to gracefully handle invalid input, typically by raising an appropriate exception or by returning None. The internal email._parseaddr._parsedate_tz() helper used by some of these date parsing routines tries to be robust against malformed input, but unfortunately it can still crash ungracefully when a non-empty but whitespace-only input is passed. This manifests as an unexpected IndexError. In practice, this can happen when parsing an email with only a newline inside a ?Date:? header, which unfortunately happens occasionally in the real world. Here's a minimal example: $ python Python 3.9.6 (default, Jun 30 2021, 10:22:16) [GCC 11.1.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import email.utils >>> email.utils.parsedate('foo') >>> email.utils.parsedate(' ') Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.9/email/_parseaddr.py", line 176, in parsedate t = parsedate_tz(data) File "/usr/lib/python3.9/email/_parseaddr.py", line 50, in parsedate_tz res = _parsedate_tz(data) File "/usr/lib/python3.9/email/_parseaddr.py", line 72, in _parsedate_tz if data[0].endswith(',') or data[0].lower() in _daynames: IndexError: list index out of range The fix is rather straight-forward; will open a pull request shortly. ---------- components: email messages: 400261 nosy: barry, r.david.murray, wbolster priority: normal severity: normal status: open title: Date parsing helpers in email module incorrectly raise IndexError for some malformed inputs type: crash versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 10:35:52 2021 From: report at bugs.python.org (=?utf-8?b?6YOR5LmL5Li6?=) Date: Wed, 25 Aug 2021 14:35:52 +0000 Subject: [New-bugs-announce] [issue45002] won't match correct version of tcl/tk Message-ID: <1629902152.33.0.134354917611.issue45002@roundup.psfhosted.org> New submission from ??? 
: I used the following config: ./configure --prefix=/Users/jett/python311 --enable-optimizations --enable-ipv6 --enable-big-digits=30 --with-lto=full --with-experimental-isolated-subinterpreters --with-static-libpython --enable-universalsdk --with-universal-archs=universal2 --with-openssl=/opt/homebrew/opt/openssl at 1.1 export CFLAGS="-I/opt/homebrew/opt/xz/include -I/opt/homebrew/opt/tcl-tk/include" export CPPFLAGS="-I/opt/homebrew/opt/xz/include -I/opt/homebrew/opt/tcl-tk/include" export LDFLAGS="-L/opt/homebrew/opt/tcl-tk/lib" and got a version mismatch error: the executable is 8.6 and the library is 8.5 ---------- components: Build messages: 400272 nosy: jett8998 priority: normal severity: normal status: open title: won't match correct version of tcl/tk type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 14:09:04 2021 From: report at bugs.python.org (Calo) Date: Wed, 25 Aug 2021 18:09:04 +0000 Subject: [New-bugs-announce] [issue45003] Documentation wrote __div__ instead of __truediv__ Message-ID: <1629914944.6.0.138972683464.issue45003@roundup.psfhosted.org> New submission from Calo : The Python Language Reference, Chapter 6 (Expressions), Section 7 (Binary arithmetic operations) in Python version 3.9 has a sentence which reads: "This operation can be customized using the special __div__() and __floordiv__() methods." To my knowledge, when Python 3 was released, true division became the default, and __div__ became useless as well. Thus, I believe this part of the documentation should be changed to "[...] __truediv__() and __floordiv__() methods." to avoid misleading others. P.S. This is my first time submitting a Python bug report, I'm just a Python enthusiast and I'm not familiar at all with these official bug reports (I have read https://docs.python.org/3/bugs.html though!), so please let me know and correct me if I've done something incorrectly! Thanks! ---------- assignee: docs at python components: Documentation messages: 400279 nosy: docs at python, objectivitix priority: normal severity: normal status: open title: Documentation wrote __div__ instead of __truediv__ versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 15:24:46 2021 From: report at bugs.python.org (Ethan Furman) Date: Wed, 25 Aug 2021 19:24:46 +0000 Subject: [New-bugs-announce] [issue45004] add Enum to ctypes Message-ID: <1629919486.36.0.748865538511.issue45004@roundup.psfhosted.org> New submission from Ethan Furman : In issue44993 it was suggested to add a cEnum whose main purpose would be to start counting at 0 instead of 1. Issues to consider: - should such an enum subclass `int`, or a C type? - should there be an enum for each C type? - will mixing Enum and c_types even work? 
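On the last question, a minimal sketch of what already works today, since IntEnum members are real ints (the Color class is invented for the example):
```
import ctypes
import enum

class Color(enum.IntEnum):
    RED = 0
    GREEN = 1
    BLUE = 2

# IntEnum members can already be passed wherever ctypes expects a plain int;
# the open question is whether a dedicated ctypes-aware Enum should exist.
c_value = ctypes.c_int(Color.GREEN)
print(c_value.value)   # 1
```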
---------- messages: 400289 nosy: amaury.forgeotdarc, belopolsky, ethan.furman, meador.inge priority: normal severity: normal status: open title: add Enum to ctypes type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 15:25:56 2021 From: report at bugs.python.org (Mjbmr) Date: Wed, 25 Aug 2021 19:25:56 +0000 Subject: [New-bugs-announce] [issue45005] Two Layers of SSL/TLS Message-ID: <1629919556.77.0.871117984975.issue45005@roundup.psfhosted.org> New submission from Mjbmr : A simple script, trying connect to second ssl through first sever doesn't work: import socket, ssl sock = socket.socket() sock.connect(('', 443)) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE sock = ctx.wrap_socket(sock) sock.send(b'CONNECT :443 HTTP/1.1\r\n\r\n') print(sock.recv(1024)) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE sock = ctx.wrap_socket(sock) sock.do_handshake() sock.send(b'CONNECT ifconf.me:80 HTTP/1.1\r\n\r\n') print(sock.recv(1024)) b'HTTP/1.1 200 Connection established\r\n\r\n' Traceback (most recent call last): File "C:\Users\Javad\Desktop\4.py", line 15, in sock = ctx.wrap_socket(sock) File "E:\Categories\Python\Python3.9.6\lib\ssl.py", line 500, in wrap_socket return self.sslsocket_class._create( File "E:\Categories\Python\Python3.9.6\lib\ssl.py", line 1040, in _create self.do_handshake() File "E:\Categories\Python\Python3.9.6\lib\ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host ---------- assignee: christian.heimes components: SSL messages: 400291 nosy: christian.heimes, mjbmr priority: normal severity: normal status: open title: Two Layers of SSL/TLS versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 22:30:22 2021 From: report at bugs.python.org (Kelvin Zhang) Date: Thu, 26 Aug 2021 02:30:22 +0000 Subject: [New-bugs-announce] [issue45006] Add data_offset field to ZipInfo Message-ID: <1629945022.64.0.0597946135718.issue45006@roundup.psfhosted.org> New submission from Kelvin Zhang : Currently python's zipfile module does not have a way query starting offset of compressed data. This might be handy when the user wants to copy compressed data as is. Therefore I propose adding a data_offset field to zipfile.ZipInfo, which stores the offset to beginning of compressed data. ---------- components: Library (Lib) messages: 400306 nosy: zhangxp1998 priority: normal severity: normal status: open title: Add data_offset field to ZipInfo type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 25 23:34:24 2021 From: report at bugs.python.org (Ned Deily) Date: Thu, 26 Aug 2021 03:34:24 +0000 Subject: [New-bugs-announce] [issue45007] OpenSSL 1.1.1l is released Message-ID: <1629948864.78.0.632969356818.issue45007@roundup.psfhosted.org> New submission from Ned Deily : OpenSSL 1.1.1l was released on 2021-08-24 so the Windows build and macOS binary installers should be updated to it. 
https://www.openssl.org/source/ However, it appears that 1.1.1l introduced a build failure on older macOS systems that would affect some python.org macOS installer builds so we should wait for an official fix before updating the macOS builds. I will take care of that part. https://github.com/openssl/openssl/issues/16407 ---------- assignee: christian.heimes components: SSL, Windows, macOS messages: 400307 nosy: christian.heimes, ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: high severity: normal stage: needs patch status: open title: OpenSSL 1.1.1l is released versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 01:28:34 2021 From: report at bugs.python.org (William Fisher) Date: Thu, 26 Aug 2021 05:28:34 +0000 Subject: [New-bugs-announce] [issue45008] asyncio.gather should not "dedup" awaitables Message-ID: <1629955714.55.0.00221644523212.issue45008@roundup.psfhosted.org> New submission from William Fisher : asyncio.gather uses a dictionary to de-duplicate futures and coros. However, this can lead to problems when you pass an awaitable object (implements __await__ but isn't a future or coro).
1. Two or more awaitables may compare for equality/hash, but still expect to produce different results (See the RandBits class in gather_test.py)
2. If an awaitable doesn't support hashing, asyncio.gather doesn't work.
Would it be possible for non-future, non-coro awaitables to opt out of the dedup logic? The attached file shows an awaitable RandBits class. Each time you await it, you should get a different result. Using gather, you will always get the same result. ---------- components: asyncio files: gather_test.py messages: 400309 nosy: asvetlov, byllyfish, yselivanov priority: normal severity: normal status: open title: asyncio.gather should not "dedup" awaitables type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50236/gather_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 02:34:58 2021 From: report at bugs.python.org (Gopesh Singh) Date: Thu, 26 Aug 2021 06:34:58 +0000 Subject: [New-bugs-announce] [issue45009] Get last modified date of Folders and Files using pathlib module Message-ID: <1629959698.42.0.950387812936.issue45009@roundup.psfhosted.org> New submission from Gopesh Singh : I am trying to get Last modified dates of Folders and Files mounted on Azure Databricks. I am using following Code:
```
root_dir = "/dbfs/mnt/ADLS1/LANDING/parent"
def get_directories(root_dir):
    for child in Path(root_dir).iterdir():
        if child.is_file():
            print(child, datetime.fromtimestamp(getmtime(child)).date())
        else:
            print(child, datetime.fromtimestamp(getmtime(child)).date())
            get_directories(child)
```
The issue is that it is giving wrong dates for some folders. When I put a wait time of 1 second (time.sleep(.000005)) for each iteration, it gives correct results. Otherwise, it sometimes give current date or date of folder listed from last iteration. Seems, when the processing is fast, it is not able to pick correct modified date.
I have explained this issue on other portal: https://stackoverflow.com/questions/68917983/get-last-modified-date-of-folders-and-files-in-azure-databricks ---------- components: Library (Lib) messages: 400312 nosy: gopeshsingh priority: normal severity: normal status: open title: Get last modified date of Folders and Files using pathlib module type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 02:49:49 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 26 Aug 2021 06:49:49 +0000 Subject: [New-bugs-announce] [issue45010] Remove support of special method __div__ in unittest.mock Message-ID: <1629960589.05.0.156438795424.issue45010@roundup.psfhosted.org> New submission from Serhiy Storchaka : Special method __div__ was used in Python 2, but is not used in Python 3. I think it can be removed from the list of supported special methods in unittest.mock. ---------- components: Library (Lib) messages: 400314 nosy: michael.foord, serhiy.storchaka priority: normal severity: normal status: open title: Remove support of special method __div__ in unittest.mock versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 04:34:05 2021 From: report at bugs.python.org (mattip) Date: Thu, 26 Aug 2021 08:34:05 +0000 Subject: [New-bugs-announce] [issue45011] tests fail when using pure-python instead of _asycio Message-ID: <1629966845.23.0.661629183218.issue45011@roundup.psfhosted.org> New submission from mattip : PyPy has no asyncio c-extension module _asyncio. I see stdlib test failures when running the tests in test/test_asyncio/*.py. If I disable _asyncio in Lib/asyncio/events.py (at the end of the file) I see similar failures in CPython3.8 on Ubuntu 20.04 in test_buffered_proto.py test_buffered_proto_create_connection test_sslproto.py test_create_connection_memory_leak, test_handshake_timeout, test_start_tls_client_buf_proto_1, Also this test depends on _CFuture test_futures.py ---------- components: asyncio messages: 400322 nosy: asvetlov, mattip, yselivanov priority: normal severity: normal status: open title: tests fail when using pure-python instead of _asycio _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 07:53:33 2021 From: report at bugs.python.org (=?utf-8?q?Stanis=C5=82aw_Skonieczny_=28Uosiu=29?=) Date: Thu, 26 Aug 2021 11:53:33 +0000 Subject: [New-bugs-announce] [issue45012] DirEntry.stat method should release GIL Message-ID: <1629978813.83.0.528049638414.issue45012@roundup.psfhosted.org> New submission from Stanis?aw Skonieczny (Uosiu) : We have an application that crawls filesystem using `os.scandir`. It uses multiple threads for various things. Application is used on variety of filesystems, some of them might be slow or occasionally unresponsive. We have found out that sometimes whole crawling process is stuck and no thread makes any progress, even threads that are really simple, makes no IO and do not hold any locks. After running py-spy on process that was stuck we saw that one of the threads has entered `dentry.stat(follow_symlinks=False)` line and still holds the GIL. Other threads are stuck, because they are waiting for the GIL. This situation can take a long time. I think that `DirEntry` should release GIL when stat cache is empty and syscall is performed. 
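A minimal sketch of the crawling pattern described above (directory paths and worker count are placeholders):
```
import os
from concurrent.futures import ThreadPoolExecutor

def crawl(root):
    for entry in os.scandir(root):
        # When the stat result is not already cached on the DirEntry,
        # stat() does the syscall while holding the GIL, which is what
        # stalls every other thread when the filesystem hangs.
        info = entry.stat(follow_symlinks=False)
        print(entry.path, info.st_mtime)

with ThreadPoolExecutor(max_workers=4) as pool:
    pool.map(crawl, ["/tmp", "/var/tmp"])   # placeholder paths
```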
This bug has already been fixed in `scandir` module. See: https://github.com/benhoyt/scandir/issues/131 ---------- components: Library (Lib) messages: 400337 nosy: Stanis?aw Skonieczny (Uosiu) priority: normal severity: normal status: open title: DirEntry.stat method should release GIL type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 08:41:25 2021 From: report at bugs.python.org (Luke Rossi) Date: Thu, 26 Aug 2021 12:41:25 +0000 Subject: [New-bugs-announce] [issue45013] os.path.isfile fails on path exactly 260 Chars long in Windows Message-ID: <1629981685.43.0.0137911951391.issue45013@roundup.psfhosted.org> New submission from Luke Rossi : I saw 33105, but believe this to be a different issue as path length 260 is valid. I did testing by crafting a path that is exactly 260 by hand - A path 259 in length reports .isfile() as True. ---------- components: Library (Lib) messages: 400341 nosy: serhiy.storchaka, ubermidget2 priority: normal severity: normal status: open title: os.path.isfile fails on path exactly 260 Chars long in Windows type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 09:40:24 2021 From: report at bugs.python.org (Takuo Matsuoka) Date: Thu, 26 Aug 2021 13:40:24 +0000 Subject: [New-bugs-announce] [issue45014] SyntaxError describing the error using a wrong term Message-ID: <1629985224.61.0.965048817379.issue45014@roundup.psfhosted.org> New submission from Takuo Matsuoka : The error is this: >>> *() File "", line 1 SyntaxError: can't use starred expression here I think it's right SyntaxError is raised here, but the message is incorrect. Indeed, many starred expressions are actually allowed there. E.g., >>> *(), () I happen to have filed in this issue tracker the problem that the definition of a starred expression given in the Language Reference is incorrect. https://bugs.python.org/issue44983 It appears all correct starred expressions and only them are allowed at the point of the error. Thus the error appears to be one because "*()" is not a starred expression in the correct sense. I think the wording in the message should be corrected. ---------- components: Interpreter Core messages: 400344 nosy: Takuo Matsuoka priority: normal severity: normal status: open title: SyntaxError describing the error using a wrong term type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 09:49:02 2021 From: report at bugs.python.org (Takuo Matsuoka) Date: Thu, 26 Aug 2021 13:49:02 +0000 Subject: [New-bugs-announce] [issue45015] Language Reference failing to describe the treatment of starred expressions Message-ID: <1629985742.7.0.870981642172.issue45015@roundup.psfhosted.org> New submission from Takuo Matsuoka : The issue is described as Issue (1) here: https://mail.python.org/archives/list/python-ideas at python.org/message/BEGGQEU6MG7RYIY7HB4I6VQ23L6TXB6H/ Please look at "Note" just before "Issues treated" there as well. What's mentioned in this note is also filed in this issue tracker with ID 44983 https://bugs.python.org/issue44983 Coming back to Issue (1), the discrepancy is in the definitions of yield_expression, return_stmt, augmented_assignment_stmt, and for_stmt. 
(That is, the only exception is the definition of subscription https://docs.python.org/3/reference/expressions.html#subscriptions making the exception look inconsistent and confusing.) https://docs.python.org/3/reference/expressions.html#yield-expressions https://docs.python.org/3/reference/simple_stmts.html#the-return-statement https://docs.python.org/3/reference/simple_stmts.html#augmented-assignment-statements https://docs.python.org/3/reference/compound_stmts.html#the-for-statement (In case someone is interested in what are proposed over there, a summary of the proposal can also be found at https://mail.python.org/archives/list/python-ideas at python.org/message/KF37FMD5K5M2ZVTJO6IS3J6M7HHE4VRU/ ) ---------- assignee: docs at python components: Documentation messages: 400346 nosy: Takuo Matsuoka, docs at python priority: normal severity: normal status: open title: Language Reference failing to describe the treatment of starred expressions versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 11:13:28 2021 From: report at bugs.python.org (Ronald Oussoren) Date: Thu, 26 Aug 2021 15:13:28 +0000 Subject: [New-bugs-announce] [issue45016] Multiprocessing freeze support unclear Message-ID: <1629990808.31.0.282439521542.issue45016@roundup.psfhosted.org> New submission from Ronald Oussoren : The requirements on a freezing tool to work with the freeze support in the multiprocessing library are unclear. In particular, I'm trying to support multiprocessing in py2app and cannot rely on the documentation to implement that support. The particular issue I run into: - With py2app "sys.executable" points to a regular interpreter - py2app sets sys.frozen to "macosx_app" or "macosx_plugin" - Multiprocessing.spawn.get_command_line() assumes that a special command-line should be used when "sys.frozen" is set and there is no way to disable this. The easiest way for me to fix this issue is to drop setting sys.frozen in py2app, although I have no idea what other code this might break. ---------- components: Library (Lib) messages: 400354 nosy: ronaldoussoren priority: normal severity: normal status: open title: Multiprocessing freeze support unclear versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 11:25:23 2021 From: report at bugs.python.org (Irit Katriel) Date: Thu, 26 Aug 2021 15:25:23 +0000 Subject: [New-bugs-announce] [issue45017] move opcode-related logic from modulefinder to dis Message-ID: <1629991523.03.0.713771310872.issue45017@roundup.psfhosted.org> New submission from Irit Katriel : The modulefinder library module has logic that understands particular opcodes (such as the scan_opcodes method). This should be encapsulated in the dis module, and modulefinder should not process opcodes directly. 
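As an illustration of the direction (a rough sketch, not the actual patch; scan_imports is a made-up name), the pattern scan_opcodes looks for can be expressed with dis.get_instructions() instead of raw bytecode handling:

import dis

def scan_imports(code):
    # an import statement compiles to LOAD_CONST <level>, LOAD_CONST <fromlist>,
    # IMPORT_NAME <name>, which is the pattern modulefinder currently matches by hand
    instrs = list(dis.get_instructions(code))
    for i, instr in enumerate(instrs):
        if instr.opname == "IMPORT_NAME":
            level = instrs[i - 2].argval
            fromlist = instrs[i - 1].argval
            yield instr.argval, fromlist, level

source = "import os.path\nfrom sys import argv, path as p"
for item in scan_imports(compile(source, "<example>", "exec")):
    print(item)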
---------- components: Library (Lib) messages: 400355 nosy: iritkatriel priority: normal severity: normal status: open title: move opcode-related logic from modulefinder to dis type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 12:42:14 2021 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Thu, 26 Aug 2021 16:42:14 +0000 Subject: [New-bugs-announce] [issue45018] Pickling a range iterator with an index of over sizeof(int) stores an invalid index Message-ID: <1629996134.04.0.676364219847.issue45018@roundup.psfhosted.org> New submission from Łukasz Langa : Consider the following: >>> import pickle >>> it = iter(range(2**32 + 2)) >>> for _ in range(2**32): ... _ = next(it) >>> it2 = pickle.loads( ... pickle.dumps(it) ... ) >>> assert next(it) == next(it2) This assert currently fails because the __reduce__ method for range iterator objects serializes the index to `int` instead of `long`. (note that running this example might take tens of minutes on your box) ---------- messages: 400360 nosy: lukasz.langa priority: normal severity: normal stage: patch review status: open title: Pickling a range iterator with an index of over sizeof(int) stores an invalid index type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 15:06:19 2021 From: report at bugs.python.org (Eric Snow) Date: Thu, 26 Aug 2021 19:06:19 +0000 Subject: [New-bugs-announce] [issue45019] Freezing modules has manual steps but could be automated. Message-ID: <1630004779.62.0.284599538008.issue45019@roundup.psfhosted.org> New submission from Eric Snow : Currently we freeze the 3 main import-related modules, as well as one module for testing. Adding more modules or otherwise making any adjustments requires manually editing several files (frozen.c, Makefile.pre.in, ...). Those files aren't particularly obvious and it's easy to miss one. So it would be helpful to have a tool that generates the necessary lines in the relevant files, to avoid manual editing. I'll be putting up a PR shortly. ---------- assignee: eric.snow components: Build messages: 400369 nosy: brett.cannon, eric.snow, gvanrossum priority: normal severity: normal stage: needs patch status: open title: Freezing modules has manual steps but could be automated. type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 15:23:58 2021 From: report at bugs.python.org (Eric Snow) Date: Thu, 26 Aug 2021 19:23:58 +0000 Subject: [New-bugs-announce] [issue45020] Freeze all modules imported during startup. Message-ID: <1630005838.27.0.743852977618.issue45020@roundup.psfhosted.org> New submission from Eric Snow : Currently we freeze the 3 main import-related modules into the python binary (along with one test module). This allows us to bootstrap the import machinery from Python modules. It also means we get better performance importing those modules. If we freeze modules that are likely to be used during execution then we get even better startup times. I'll be putting up a PR that does so, freezing all the modules that are imported during startup. This could also be done for any stdlib modules that are commonly imported.
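For reference (a quick check, not part of the change itself), the candidate set can be seen by asking a fresh interpreter what is already in sys.modules once it reaches user code:

import subprocess
import sys

# every module listed here was imported during startup and is a candidate for freezing
result = subprocess.run(
    [sys.executable, "-c", "import sys; print('\\n'.join(sorted(sys.modules)))"],
    capture_output=True, text=True,
)
print(result.stdout)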
(also see #45019 and https://github.com/faster-cpython/ideas/issues/82) ---------- assignee: eric.snow components: Build messages: 400370 nosy: brett.cannon, eric.snow, gvanrossum, nedbat priority: normal severity: normal stage: needs patch status: open title: Freeze all modules imported during startup. type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 16:09:17 2021 From: report at bugs.python.org (nullptr) Date: Thu, 26 Aug 2021 20:09:17 +0000 Subject: [New-bugs-announce] [issue45021] Race condition in thread.py Message-ID: <1630008557.08.0.231587430513.issue45021@roundup.psfhosted.org> New submission from nullptr : The following code can sometimes hang up import random from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor from time import sleep def worker(): with ProcessPoolExecutor() as pool: r = list(pool.map(sleep, [0.01] * 8)) if __name__ == '__main__': pool = ThreadPoolExecutor() i = 0 while True: if random.random() < 0.9: pool.submit(sleep, 0.001) else: r = pool.submit(worker) r = r.result() i += 1 print('alive', i) It's a bit hard to trigger that way but with some luck and many restarts it'll eventually freeze as r.result() never returns. The backtrace from a child process shows that the child is stuck in Lib/concurrent/futures/thread.py:_python_exit waiting for _global_shutdown_lock. The fork happened while the lock was already grabbed i.e. while executing ThreadPoolExecutor.submit ---------- components: Library (Lib) messages: 400378 nosy: xavier.lacroze priority: normal severity: normal status: open title: Race condition in thread.py versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 17:13:17 2021 From: report at bugs.python.org (Steve Dower) Date: Thu, 26 Aug 2021 21:13:17 +0000 Subject: [New-bugs-announce] [issue45022] Update libffi to 3.4.2 Message-ID: <1630012397.4.0.321187135755.issue45022@roundup.psfhosted.org> New submission from Steve Dower : libffi is doing releases again! We're a few versions behind, so should pull in the latest. https://github.com/libffi/libffi/ Adding RMs for opinions on backporting, and Ned in case this impacts the macOS build. ---------- components: Build, Windows, ctypes messages: 400382 nosy: lukasz.langa, ned.deily, pablogsal, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Update libffi to 3.4.2 type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 18:56:33 2021 From: report at bugs.python.org (Amber Wright) Date: Thu, 26 Aug 2021 22:56:33 +0000 Subject: [New-bugs-announce] [issue45023] Python doesn't exit with proper resultcode on SIGINT in multiprocessing.Process Message-ID: <1630018593.55.0.418512134995.issue45023@roundup.psfhosted.org> New submission from Amber Wright : The return code of python on linux/MacOS when the program is ended with a KeyboardInterrupt should be -2, when running with multiprocessing the exitcode is 1. I've attached a reproduced example. >From The Process.join() docs: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Process.exitcode > A negative value -N indicates that the child was terminated by signal N. 
output: $ /usr/local/opt/python at 3.9/bin/python3 -m test Traceback (most recent call last): File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/Users/awright/docker/2108/test.py", line 49, in sys.exit(main()) File "/Users/awright/docker/2108/test.py", line 41, in main return target() File "/Users/awright/docker/2108/test.py", line 10, in target time.sleep(99999) KeyboardInterrupt proc.wait()=-2 Process SpawnProcess-1: Traceback (most recent call last): File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/usr/local/Cellar/python at 3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Users/awright/docker/2108/test.py", line 10, in target time.sleep(99999) KeyboardInterrupt proc.exitcode=1 See also: https://bugs.python.org/issue1054041 and https://bugs.python.org/issue41602 ---------- components: Interpreter Core files: test.py messages: 400384 nosy: ambwrig priority: normal severity: normal status: open title: Python doesn't exit with proper resultcode on SIGINT in multiprocessing.Process versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50237/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 21:29:49 2021 From: report at bugs.python.org (Anup Parikh) Date: Fri, 27 Aug 2021 01:29:49 +0000 Subject: [New-bugs-announce] [issue45024] Cannot extend collections ABCs with protocol Message-ID: <1630027789.49.0.847415585989.issue45024@roundup.psfhosted.org> New submission from Anup Parikh : Since the container ABCs are normal classes, and Protocol cannot subclass normal classes, there's no way to create a protocol that extends the ABCs without explicitly listing out all the methods needed for the collection. e.g., can't do this: from typing import Iterable, Protocol class IterableWithMethod(Iterable, Protocol): def method(self) -> None: pass Since the ABCs don't provide any default implementations (I think?), maybe they should just be defined as runtime checkable protocols instead of ABCs? ---------- components: Library (Lib), Parser messages: 400387 nosy: anuppari, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Cannot extend collections ABCs with protocol type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 21:47:13 2021 From: report at bugs.python.org (Gregory Szorc) Date: Fri, 27 Aug 2021 01:47:13 +0000 Subject: [New-bugs-announce] [issue45025] Reliance on C bit fields is C API is undefined behavior Message-ID: <1630028833.13.0.234586694862.issue45025@roundup.psfhosted.org> New submission from Gregory Szorc : At least the PyASCIIObject struct in Include/cpython/unicodeobject.h uses bit fields. Various preprocessor macros like PyUnicode_IS_ASCII() and PyUnicode_KIND() access this struct's bit field. 
This is problematic because according to the C specification, the storage of bit fields is unspecified and may vary from compiler to compiler or architecture to architecture. Theoretically, a build of libpython with compiler A may not have the same storage layout of a bit field as a separate binary built with compiler B. These 2 binaries could be linked/loaded together, resulting in a crash or incorrect behavior at run-time. https://stackoverflow.com/questions/6043483/why-bit-endianness-is-an-issue-in-bitfields/6044223#6044223 To ensure bit field behavior is consistent, the same compiler must be used for all bit field interaction. Since it is effectively impossible to ensure this for programs like Python where multiple compilers are commonly at play (a 3rd party C extension will likely not be built on the same machine that built libpython), bit fields must not be exposed in the C API. If a bit field must exist, the bit field should not be declared in a public .h header and any APIs for accessing the bit field must be implemented as compiled functions such that only a single compiler will define the bit field storage layout. In order to avoid undefined behavior, Python's C API should avoid all use of bit fields. This issue is in response to https://github.com/PyO3/pyo3/issues/1824. ---------- components: C API messages: 400388 nosy: indygreg priority: normal severity: normal status: open title: Reliance on C bit fields is C API is undefined behavior type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 23:03:33 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 27 Aug 2021 03:03:33 +0000 Subject: [New-bugs-announce] [issue45026] More compact range iterator Message-ID: <1630033413.29.0.592444888293.issue45026@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR provides more compact implementation of the range iterator. It consumes less memory and produces smaller pickles. It is presumably faster because it performs simpler arithmetic operations on iteration (no multiplications). ---------- components: Interpreter Core messages: 400390 nosy: lukasz.langa, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: More compact range iterator type: resource usage versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 26 23:14:24 2021 From: report at bugs.python.org (Greg Werbin) Date: Fri, 27 Aug 2021 03:14:24 +0000 Subject: [New-bugs-announce] [issue45027] Allow basicConfig to configure any logger, not just root Message-ID: <1630034064.24.0.262638183252.issue45027@roundup.psfhosted.org> New submission from Greg Werbin : Hello all! I am proposing to add a "logger=" kwarg to logging.basicConfig(), which would cause the configuration to be applied to the specified logger. The value of this parameter could be a string or a logging.Logger object. Omitting logger= or passing logger=None would be equivalent to the current behavior, using the root logger. My rationale for this proposal is that the Python logging can be verbose to configure for "simple" use cases, and can be intimidating for new users, especially those who don't have prior experience with comparable logging frameworks in other languages. 
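To make the proposal concrete, here is roughly what the new keyword would replace; the logger= call below is the hypothetical API being proposed, not something that exists today:

import logging

# today: configuring a single named logger takes several steps
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(logging.BASIC_FORMAT))
app_logger = logging.getLogger("myapp")
app_logger.addHandler(handler)
app_logger.setLevel(logging.INFO)
app_logger.info("configured by hand")

# proposed equivalent (hypothetical, not in the stdlib):
# logging.basicConfig(level=logging.INFO, logger="myapp")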
The simplicity of basicConfig() is great, but currently there is a very big usability gap between the "root logger only" case and the "fully manual configuration" case. This enhancement proposal would help to fill that gap. I observe that many Python users tend to use basicConfig() even when they would be better served by configuring only the logger(s) needed for their own app/library. And I think many of these same Python users would appreciate the reduced verbosity and greater convenience of having a "basic config" option that they could apply to various loggers independently. I know that I personally would use this enhanced basicConfig() all the time, and I hope that others feel the same way. I also believe that it would encourage adoption of sensible logging setups in a greater number of projects. Here are the Git diffs, as rendered by Github: * CPython: https://github.com/python/cpython/compare/main...gwerbin:gwerbin/basicconfig-any-logger * Mypy (typeshed): https://github.com/python/mypy/compare/master...gwerbin:gwerbin/basicconfig-any-logger ---------- components: Library (Lib) messages: 400391 nosy: gwerbin priority: normal severity: normal status: open title: Allow basicConfig to configure any logger, not just root type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 27 03:43:57 2021 From: report at bugs.python.org (Phani Kumar Yadavilli) Date: Fri, 27 Aug 2021 07:43:57 +0000 Subject: [New-bugs-announce] [issue45028] module 'unittest.mock' has no attribute 'AsyncMock' Message-ID: <1630050237.89.0.670577143839.issue45028@roundup.psfhosted.org> New submission from Phani Kumar Yadavilli : The unittest.mock does not have AsyncMock. I tested the same code in 3.9 it works fine. ---------- messages: 400400 nosy: wandermonk priority: normal severity: normal status: open title: module 'unittest.mock' has no attribute 'AsyncMock' type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 27 05:03:24 2021 From: report at bugs.python.org (Lyndon D'Arcy) Date: Fri, 27 Aug 2021 09:03:24 +0000 Subject: [New-bugs-announce] [issue45029] tkinter doc, hello world example - quit button clobbers method Message-ID: <1630055004.62.0.378999176997.issue45029@roundup.psfhosted.org> New submission from Lyndon D'Arcy : Below is the example as it is. Currently self.quit clobbers a built-in method of the same name. I would suggest renaming self.quit to self.quit_button or similar. ------------------------------------------------------- import tkinter as tk class Application(tk.Frame): def __init__(self, master=None): super().__init__(master) self.master = master self.pack() self.create_widgets() def create_widgets(self): self.hi_there = tk.Button(self) self.hi_there["text"] = "Hello World\n(click me)" self.hi_there["command"] = self.say_hi self.hi_there.pack(side="top") self.quit = tk.Button(self, text="QUIT", fg="red", command=self.master.destroy) self.quit.pack(side="bottom") def say_hi(self): print("hi there, everyone!") root = tk.Tk() app = Application(master=root) app.mainloop() ----------------------------------------------------------- >>> help(app.quit) Help on method quit in module tkinter: quit() method of __main__.Application instance Quit the Tcl interpreter. All widgets will be destroyed. 
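A possible fix along those lines, shown as a replacement for the second half of create_widgets() in the example above (quit_button is just one candidate name):

        # renamed from self.quit so the Button no longer shadows the
        # quit() method inherited from tkinter.Misc
        self.quit_button = tk.Button(self, text="QUIT", fg="red",
                                     command=self.master.destroy)
        self.quit_button.pack(side="bottom")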
---------- assignee: docs at python components: Documentation, Tkinter messages: 400403 nosy: docs at python, lyndon.darcy priority: normal severity: normal status: open title: tkinter doc, hello world example - quit button clobbers method versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 27 10:47:07 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 27 Aug 2021 14:47:07 +0000 Subject: [New-bugs-announce] [issue45030] Integer overflow in __reduce__ of the range iterator Message-ID: <1630075627.02.0.307201647792.issue45030@roundup.psfhosted.org> New submission from Serhiy Storchaka : >>> it = iter(range(2**63-10, 2**63-1, 10)) >>> it.__reduce__() (<built-in function iter>, (range(9223372036854775798, -9223372036854775808, 10),), 0) >>> import pickle >>> it2 = pickle.loads(pickle.dumps(it)) >>> list(it) [9223372036854775798] >>> list(it2) [] ---------- components: Library (Lib) messages: 400428 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Integer overflow in __reduce__ of the range iterator type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 27 10:52:22 2021 From: report at bugs.python.org (Jan Ripke) Date: Fri, 27 Aug 2021 14:52:22 +0000 Subject: [New-bugs-announce] [issue45031] [Windows] datetime.fromtimestamp(t) when t = 253402210800 fails on Python 3.8 Message-ID: <1630075942.93.0.0506592878726.issue45031@roundup.psfhosted.org> New submission from Jan Ripke : When executing the following statement on a Windows machine it fails. On a Linux machine it returns the expected date (9999-12-31 00:00:00) The error we get on Windows is: OSError: [Errno 22] Invalid argument In another manner it was reported before: https://bugs.python.org/issue29097 The code: from datetime import datetime epoch_time = 253402210800000/1000 print(datetime.fromtimestamp(epoch_time)) ---------- components: Windows messages: 400429 nosy: janripke, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: [Windows] datetime.fromtimestamp(t) when t = 253402210800 fails on Python 3.8 type: crash versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 27 13:13:38 2021 From: report at bugs.python.org (Jorge Rojas) Date: Fri, 27 Aug 2021 17:13:38 +0000 Subject: [New-bugs-announce] [issue45032] struct.unpack() returns NaN Message-ID: <1630084418.34.0.275202976554.issue45032@roundup.psfhosted.org> New submission from Jorge Rojas : Hi all! I have this case when trying to get a float value applying pack to these integer values: struct.unpack('f', struct.pack('HH', 0, 32704)) This happens when executing the unpack function with a float format, from a bit-array where the sign bit is not in a suitable position, I think. Applying big-endian to the format, it returns a numeric value, but being little-endian it returns a NaN. > struct.unpack('<f', struct.pack('HH', 0, 32704)) Out[168]: (nan,) > struct.unpack('>f', struct.pack('HH',0, 32704)) Out[169]: (6.905458702346266e-41,) The current documentation on struct.unpack doesn't say anything about the conditions under which a NaN is returned, besides this might be an expected value.
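For reference, a small check (added here, not part of the original report) showing why the little-endian reading is a NaN: the two shorts pack to the bit pattern 0x7fc00000, which IEEE 754 defines as a quiet NaN:

import struct

data = struct.pack('<HH', 0, 32704)          # explicit little-endian packing
print(data)                                  # b'\x00\x00\xc0\x7f'
print(hex(int.from_bytes(data, 'little')))   # 0x7fc00000
print(struct.unpack('<f', data))             # (nan,)
print(struct.unpack('>f', data))             # (6.905458702346266e-41,)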
Maybe explaining how this value could be converted to an equivalent format to retrieve the proper value would help, or explaining why this returns a NaN and how to avoid it. Thanks in advance. ---------- components: Library (Lib) messages: 400434 nosy: jrojas priority: normal severity: normal status: open title: struct.unpack() returns NaN type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 27 19:56:25 2021 From: report at bugs.python.org (Zac Bentley) Date: Fri, 27 Aug 2021 23:56:25 +0000 Subject: [New-bugs-announce] [issue45033] Calls to PyErr_PrintEx in destructors cause calling async functions to incorrectly return None Message-ID: <1630108585.95.0.412897766167.issue45033@roundup.psfhosted.org> New submission from Zac Bentley : If an object's destructor contains native code which calls PyErr_PrintEx, and that object's refcount drops to zero as the result of an async function returning, the async function incorrectly returns None. I first identified this behavior while using Boost-python. A more detailed description, and steps to reproduce, are in the issue report I filed on that library: https://github.com/boostorg/python/issues/374 I'm not very familiar with interpreter internals, so it is possible that this is expected behavior. However, it does seem like at least a leaky abstraction between the mechanics of async calls (which use exception-based control flow internally) and the PyErr_PrintEx function, which is typically invoked by callers interested in finding out about errors that they caused, not errors that are both caused elsewhere and whose propagation is important to preserving call stack state. ---------- components: C API, Interpreter Core, asyncio messages: 400448 nosy: asvetlov, yselivanov, zbentley priority: normal severity: normal status: open title: Calls to PyErr_PrintEx in destructors cause calling async functions to incorrectly return None type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 27 21:02:05 2021 From: report at bugs.python.org (Steven D'Aprano) Date: Sat, 28 Aug 2021 01:02:05 +0000 Subject: [New-bugs-announce] [issue45034] Improve struct.pack out of range error messages Message-ID: <1630112525.53.0.10403701832.issue45034@roundup.psfhosted.org> New submission from Steven D'Aprano : Packing errors using struct in 3.9 seem to be unnecessarily obfuscated to me. >>> import struct >>> struct.pack('H', 70000) Traceback (most recent call last): File "", line 1, in struct.error: ushort format requires 0 <= number <= (0x7fff * 2 + 1) Why "0x7fff * 2 + 1"? Why not the more straightforward "0xffff" or 65535? (I have a slight preference for hex.) Compare that to: >>> struct.pack('I', 4300000000) Traceback (most recent call last): File "", line 1, in struct.error: 'I' format requires 0 <= number <= 4294967295 which at least gives the actual value, but it would perhaps be a bit more useful in hex, 0xffffffff.
For the long-long format, the error message just gives up: >>> struct.pack('Q', 2**65) Traceback (most recent call last): File "", line 1, in struct.error: argument out of range Could be improved by: 'Q' format requires 0 <= number <= 0xffff_ffff_ffff_ffff ---------- components: Library (Lib) messages: 400452 nosy: steven.daprano priority: normal severity: normal status: open title: Improve struct.pack out of range error messages type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 28 02:43:22 2021 From: report at bugs.python.org (Tzu-ping Chung) Date: Sat, 28 Aug 2021 06:43:22 +0000 Subject: [New-bugs-announce] [issue45035] sysconfig's posix_home scheme has different platlib value to distutils's unix_home Message-ID: <1630133002.18.0.922476282003.issue45035@roundup.psfhosted.org> New submission from Tzu-ping Chung : This is similar to bpo-44860, but in the other direction: $ docker run -it --rm -h=p fedora:34 bash ... [root at p /]# yum install python3 -y ... [root at p /]# type python3 python3 is hashed (/usr/bin/python3) [root at p /]# python3 -V Python 3.9.6 [root at p /]# python3.9 -q >>> from distutils.command.install import install >>> from distutils.dist import Distribution >>> c = install(Distribution()) >>> c.home = '/foo' >>> c.finalize_options() >>> c.install_platlib '/foo/lib64/python' >>> import sysconfig >>> sysconfig.get_path('platlib', 'posix_home', vars={'home': '/root'}) '/foo/lib/python' sysconfig?s scheme should use `{platlib}` instead of hardcoding 'lib'. Note that on Python 3.10+ the platlib values from distutils and sysconfig do match (since the distutils scheme is automatically generated from sysconfig), but the issue remains; sysconfig?s scheme should likely include `{platlib}` (adding Victor and Miro to confirm this). ---------- components: Distutils, Library (Lib) messages: 400463 nosy: dstufft, eric.araujo, hroncok, uranusjr, vstinner priority: normal severity: normal status: open title: sysconfig's posix_home scheme has different platlib value to distutils's unix_home versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 28 09:15:14 2021 From: report at bugs.python.org (Techn010 Je11y) Date: Sat, 28 Aug 2021 13:15:14 +0000 Subject: [New-bugs-announce] [issue45036] turtle.onrelease() event doesn't get triggered sometimes Message-ID: <1630156514.02.0.346025167609.issue45036@roundup.psfhosted.org> New submission from Techn010 Je11y : (pls read with reference to attached code) I made a Paint-ish program with Turtle. As there isn't ondrag or onrelease for Screen, I created a turtle named bg so I could use ondrag and onrelease (see file attached ig) and eliminate the need for double-clicking(previously I used Screen.onclick to pen.up(), move it to cursor, and pen.down() then use turtle.drag() to draw). However, I noticed that it doesn't work(turtle doesn't penup when mouse is released sometimes) and added the print("...", i(or j)) bits(pls see code). On at least 1 instance 'start' was printed without a corresponding release. I'm a beginner so I apologise if it's just a bug in my code. I did not install anything related to python after I installed 3.9.6(64-bit btw). I do not have any other versions. I did not alter any part of what's installed. 
System info: Windows 10 Pro Education Version 10.0.19043(or 21H1), Build 19043.1165 Windows Feature Experience Pack 120.2212.3530.0 Lenovo L13 Gen 2, x64 based PC 11th Gen Intel Core i5-1135G7 @ 2.4GHz, 4 Cores, 8 logical processors 8GB ram Attached is my code(I'm sorry if it hurts your eyes) ---------- assignee: terry.reedy components: IDLE, Tkinter files: pain2exp.py messages: 400470 nosy: techn010je11y, terry.reedy priority: normal severity: normal status: open title: turtle.onrelease() event doesn't get triggered sometimes type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50239/pain2exp.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 28 09:20:00 2021 From: report at bugs.python.org (Ayush Parikh) Date: Sat, 28 Aug 2021 13:20:00 +0000 Subject: [New-bugs-announce] [issue45037] theme-change.py for tkinter lib Message-ID: <1630156800.16.0.673052638064.issue45037@roundup.psfhosted.org> Change by Ayush Parikh : ---------- components: Tkinter nosy: Ayushparikh-code priority: normal pull_requests: 26455 severity: normal status: open title: theme-change.py for tkinter lib _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 28 11:52:27 2021 From: report at bugs.python.org (Jonathan Isaac) Date: Sat, 28 Aug 2021 15:52:27 +0000 Subject: [New-bugs-announce] [issue45038] Bugs Message-ID: <17b8d763338.2808.67a03304c92d9dfce54c3de1c7b1fd49@gmail.com> New submission from Jonathan Isaac : Jonathan Isaac Sent with Aqua Mail for Android https://www.mobisystems.com/aqua-mail ---------- messages: 400479 nosy: bonesisaac1982 priority: normal severity: normal status: open title: Bugs _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 28 13:46:22 2021 From: report at bugs.python.org (Irit Katriel) Date: Sat, 28 Aug 2021 17:46:22 +0000 Subject: [New-bugs-announce] [issue45039] use ADDOP_LOAD_CONST consistently Message-ID: <1630172782.32.0.00459227167262.issue45039@roundup.psfhosted.org> New submission from Irit Katriel : The compiler generally uses ADDOP_LOAD_CONST to emit a LOAD_CONST, but there are two places that use ADDOP_O(c, LOAD_CONST, Py_None, consts); This is currently equivalent to ADDOP_LOAD_CONST(c, Py_None); It should be replaced because we may soon change ADDOP_LOAD_CONST. ---------- components: Interpreter Core messages: 400485 nosy: iritkatriel priority: normal severity: normal status: open title: use ADDOP_LOAD_CONST consistently versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 28 16:16:36 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Sat, 28 Aug 2021 20:16:36 +0000 Subject: [New-bugs-announce] [issue45040] [sqlite3] optimise transaction control functions Message-ID: <1630181796.88.0.161602783336.issue45040@roundup.psfhosted.org> New submission from Erlend E. Aasland : pysqlite_connection_commit_impl(), pysqlite_connection_rollback_impl(), and begin_transaction() can be simplified: sqlite3_finalize() will pass on any error set by sqlite3_step(). This implies that we only need to check the return value of sqlite3_prepare_v2() and sqlite3_finalize(), which implies that we can execute sqlite3_prepare_v2(), sqlite3_step() and sqlite3_finalize() in a row inside a begin/end threads wrapper. 
As a result, error handling will be greatly simplified. Fewer lines of code, simpler error paths, increased readability, and increased code coverage. diffstat: 2 files changed, 27 insertions(+), 62 deletions(-) ---------- components: Extension Modules messages: 400501 nosy: erlendaasland priority: low severity: normal status: open title: [sqlite3] optimise transaction control functions type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 28 17:42:20 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Sat, 28 Aug 2021 21:42:20 +0000 Subject: [New-bugs-announce] [issue45041] [sqlite3] simplify executescript() Message-ID: <1630186940.75.0.867509999699.issue45041@roundup.psfhosted.org> New submission from Erlend E. Aasland : See also bpo-45040 Since sqlite3_finalize() will pass on any error message set by sqlite3_step(), we can greatly simplify SQLite C API usage and error handling in sqlite3.Cursor.executescript(), thus reducing the number of times we save/restore thread state, and also simplifying error handling greatly. We can also "inline" the commit before the main loop using the SQLite API directly, instead of calling self.commit() Diffstat for the proposed patch: 1 file changed, 25 insertions(+), 42 deletions(-) ---------- components: Extension Modules messages: 400505 nosy: erlendaasland priority: low severity: normal status: open title: [sqlite3] simplify executescript() type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 29 07:01:30 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 29 Aug 2021 11:01:30 +0000 Subject: [New-bugs-announce] [issue45042] Many multiprocessing tests are silently skipped since 3.9 Message-ID: <1630234890.74.0.652972390758.issue45042@roundup.psfhosted.org> New submission from Serhiy Storchaka : Here is a list of multiprocessing tests which are run in 3.8 but are not found in 3.9+: OtherTest.test_answer_challenge_auth_failure OtherTest.test_deliver_challenge_auth_failure TestInitializers.test_manager_initializer TestInitializers.test_pool_initializer TestSyncManagerTypes.test_array TestSyncManagerTypes.test_barrier TestSyncManagerTypes.test_bounded_semaphore TestSyncManagerTypes.test_condition TestSyncManagerTypes.test_dict TestSyncManagerTypes.test_event TestSyncManagerTypes.test_joinable_queue TestSyncManagerTypes.test_list TestSyncManagerTypes.test_lock TestSyncManagerTypes.test_namespace TestSyncManagerTypes.test_pool TestSyncManagerTypes.test_queue TestSyncManagerTypes.test_rlock TestSyncManagerTypes.test_semaphore TestSyncManagerTypes.test_value WithManagerTestBarrier.test_abort WithManagerTestBarrier.test_abort_and_reset WithManagerTestBarrier.test_action WithManagerTestBarrier.test_barrier WithManagerTestBarrier.test_barrier_10 WithManagerTestBarrier.test_default_timeout WithManagerTestBarrier.test_reset WithManagerTestBarrier.test_single_thread WithManagerTestBarrier.test_thousand WithManagerTestBarrier.test_timeout WithManagerTestBarrier.test_wait_return WithManagerTestCondition.test_notify WithManagerTestCondition.test_notify_all WithManagerTestCondition.test_notify_n WithManagerTestCondition.test_timeout WithManagerTestCondition.test_wait_result WithManagerTestCondition.test_waitfor WithManagerTestCondition.test_waitfor_timeout WithManagerTestContainers.test_dict 
WithManagerTestContainers.test_dict_iter WithManagerTestContainers.test_dict_proxy_nested WithManagerTestContainers.test_list WithManagerTestContainers.test_list_iter WithManagerTestContainers.test_list_proxy_in_list WithManagerTestContainers.test_namespace WithManagerTestEvent.test_event WithManagerTestLock.test_lock WithManagerTestLock.test_lock_context WithManagerTestLock.test_rlock WithManagerTestManagerRestart.test_rapid_restart WithManagerTestMyManager.test_mymanager WithManagerTestMyManager.test_mymanager_context WithManagerTestMyManager.test_mymanager_context_prestarted WithManagerTestPool.test_apply WithManagerTestPool.test_async WithManagerTestPool.test_async_timeout WithManagerTestPool.test_context WithManagerTestPool.test_empty_iterable WithManagerTestPool.test_enter WithManagerTestPool.test_imap WithManagerTestPool.test_imap_handle_iterable_exception WithManagerTestPool.test_imap_unordered WithManagerTestPool.test_imap_unordered_handle_iterable_exception WithManagerTestPool.test_make_pool WithManagerTestPool.test_map WithManagerTestPool.test_map_async WithManagerTestPool.test_map_async_callbacks WithManagerTestPool.test_map_chunksize WithManagerTestPool.test_map_handle_iterable_exception WithManagerTestPool.test_map_no_failfast WithManagerTestPool.test_map_unplicklable WithManagerTestPool.test_release_task_refs WithManagerTestPool.test_resource_warning WithManagerTestPool.test_starmap WithManagerTestPool.test_starmap_async WithManagerTestPool.test_terminate WithManagerTestPool.test_traceback WithManagerTestPool.test_wrapped_exception WithManagerTestQueue.test_closed_queue_put_get_exceptions WithManagerTestQueue.test_fork WithManagerTestQueue.test_get WithManagerTestQueue.test_no_import_lock_contention WithManagerTestQueue.test_put WithManagerTestQueue.test_qsize WithManagerTestQueue.test_queue_feeder_donot_stop_onexc WithManagerTestQueue.test_queue_feeder_on_queue_feeder_error WithManagerTestQueue.test_task_done WithManagerTestQueue.test_timeout WithManagerTestRemoteManager.test_remote WithManagerTestSemaphore.test_bounded_semaphore WithManagerTestSemaphore.test_semaphore WithManagerTestSemaphore.test_timeout WithProcessesTestManagerRestart.test_rapid_restart WithProcessesTestPicklingConnections.test_access WithProcessesTestPicklingConnections.test_pickling WithProcessesTestSharedMemory.test_shared_memory_ShareableList_basics WithProcessesTestSharedMemory.test_shared_memory_ShareableList_pickling WithProcessesTestSharedMemory.test_shared_memory_SharedMemoryManager_basics WithProcessesTestSharedMemory.test_shared_memory_SharedMemoryManager_reuses_resource_tracker WithProcessesTestSharedMemory.test_shared_memory_SharedMemoryServer_ignores_sigint WithProcessesTestSharedMemory.test_shared_memory_across_processes WithProcessesTestSharedMemory.test_shared_memory_basics WithProcessesTestSharedMemory.test_shared_memory_cleaned_after_process_termination WithThreadsTestManagerRestart.test_rapid_restart ---------- components: Tests keywords: 3.9regression messages: 400521 nosy: davin, pitrou, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Many multiprocessing tests are silently skipped since 3.9 type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 29 07:14:26 2021 From: report at bugs.python.org (Matt Schuster) Date: Sun, 29 Aug 2021 11:14:26 +0000 Subject: [New-bugs-announce] [issue45043] Typo (change 'two' to 
'three') Message-ID: <1630235666.41.0.976708559831.issue45043@roundup.psfhosted.org> New submission from Matt Schuster : Reference https://docs.python.org/3/library/time.html?highlight=time%20time#module-time in 3.8, 3.9, 3.10, 3.11 (previous versions do not have same issue). Specifically under time.asctime([t]) and time.ctime([secs]) Change "day field is two characters long", should be "day field is three characters long" ---------- assignee: docs at python components: Documentation messages: 400525 nosy: docs at python, nofliesonyou priority: normal severity: normal status: open title: Typo (change 'two' to 'three') versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 29 08:34:46 2021 From: report at bugs.python.org (Jim Fasarakis-Hilliard) Date: Sun, 29 Aug 2021 12:34:46 +0000 Subject: [New-bugs-announce] [issue45044] Agreeing on error raised by large repeat value for sequences Message-ID: <1630240486.34.0.157153351878.issue45044@roundup.psfhosted.org> New submission from Jim Fasarakis-Hilliard : There's currently a slight disagreement between some of the sequences about what is raised when the value for `repeat` is too large. Currently, `str` and `bytes` raise an `OverflowError` while `bytearray`, `tuple`, `list` and `deque` raise a `MemoryError`. To make things more confusing, if we exercise a different path not currently caught by the check, both `str` and `bytes` raise `MemoryError`s: >>> b'abc' * maxsize Traceback (most recent call last): File "", line 1, in OverflowError: repeated bytes are too long >>> b'a' * maxsize Traceback (most recent call last): File "", line 1, in MemoryError Not sure what the original rationale for having these `OverflowError`s was but, should we change them to `MemoryError`s? ---------- messages: 400527 nosy: Jim Fasarakis-Hilliard priority: normal severity: normal status: open title: Agreeing on error raised by large repeat value for sequences type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 29 11:56:08 2021 From: report at bugs.python.org (Dong-hee Na) Date: Sun, 29 Aug 2021 15:56:08 +0000 Subject: [New-bugs-announce] [issue45045] Optimize mapping patterns of structural pattern matching Message-ID: <1630252568.97.0.406594835978.issue45045@roundup.psfhosted.org> New submission from Dong-hee Na : There are optimizable points that can be achieved by removing unnecessary tuple transformation and using vector calling convention. 
+---------------+--------+----------------------+ | Benchmark | base | opt | +===============+========+======================+ | bench pattern | 482 ns | 417 ns: 1.15x faster | +---------------+--------+----------------------+ ---------- components: Interpreter Core files: bench_pattern.py messages: 400549 nosy: corona10 priority: normal severity: normal status: open title: Optimize mapping patterns of structural pattern matching type: performance versions: Python 3.11 Added file: https://bugs.python.org/file50240/bench_pattern.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 29 12:56:09 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 29 Aug 2021 16:56:09 +0000 Subject: [New-bugs-announce] [issue45046] Add support of context managers in unittest Message-ID: <1630256169.0.0.0736015144313.issue45046@roundup.psfhosted.org> New submission from Serhiy Storchaka : Methods setUp() and tearDown() of TestClass allow to add some code executed before and after every test method. In many cases addCleanup() is more convenient than tearDown() -- you do not need to keep data for cleaning up as TestCase attributes, addCleanup() doe it for you. You should not worry about partial cleaning up if setUp() fails in the middle. You can also use addCleanup() in test methods, and corresponding resources will be cleaned only for these tests which created them. resource = open_resource() self.addCleanup(close_resource, resource) self.resource = resource # optional, if you need access to it in test methods Some resources are managed by context managers. It is so easy to create a context manager with the contextlib.contextmanager decorator, that its __enter__ and __exit__ methods can be only way to create and destroy resource. So the code looks like the following: cm = my_context_manager() cm.__enter__() # or self.resource = cm.__enter__() self.addCleanup(cm.__exit__, None, None, None) It looks not so nice. You need to use dunder methods, and pass thee Nones as arguments for __exit__. I propose to add helpers: methods enterContext(), enterClassContext(), enterAsyncContext() and function enterModuleContext() which wraps addCleanup/addClassCleanup/addAsyncCleanup/addModuleCleanup correspondently and allow to get rid of the boilerplate code. Example: self.enterContext(my_context_manager()) # or self.resource = self.enterContext(my_context_manager()) It solves the same problem as issue15351, but from different direction, so I opened a separate issue. 
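A rough sketch of what such a helper could look like on top of the existing addCleanup() machinery (EnterContextMixin and the test below are made up for illustration; the real change would live in unittest.TestCase itself):

import unittest

class EnterContextMixin:
    def enterContext(self, cm):
        # look the dunders up on the type, as the with statement does, and
        # register the matching __exit__ call as a cleanup
        cls = type(cm)
        result = cls.__enter__(cm)
        self.addCleanup(cls.__exit__, cm, None, None, None)
        return result

class ExampleTest(EnterContextMixin, unittest.TestCase):
    def test_read(self):
        f = self.enterContext(open(__file__))
        self.assertTrue(f.readline())

if __name__ == "__main__":
    unittest.main()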
---------- components: Library (Lib) messages: 400552 nosy: chris.jerdonek, ezio.melotti, michael.foord, r.david.murray, rbcollins, serhiy.storchaka priority: normal severity: normal status: open title: Add support of context managers in unittest type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 29 13:35:13 2021 From: report at bugs.python.org (Ayush Parikh) Date: Sun, 29 Aug 2021 17:35:13 +0000 Subject: [New-bugs-announce] [issue45047] Update demo files Message-ID: <1630258513.56.0.335503332351.issue45047@roundup.psfhosted.org> Change by Ayush Parikh : ---------- nosy: Ayushparikh-code priority: normal pull_requests: 26491 severity: normal status: open title: Update demo files type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 29 17:12:54 2021 From: report at bugs.python.org (DragonEggBedrockBreaking) Date: Sun, 29 Aug 2021 21:12:54 +0000 Subject: [New-bugs-announce] [issue45048] subprocess.run(capture_output=Bool) does the opposite of expected Message-ID: <1630271574.19.0.912939412698.issue45048@roundup.psfhosted.org> New submission from DragonEggBedrockBreaking : If you run subprocess.run(capture_output=True), it doesn't show output, but if you run subprocess.run(capture_output=False) (or if you just run subprocess.run() since False is default), it does show output. In the example in the docs, it shows this in the examples section: ```py >>> subprocess.run(["ls", "-l"]) # doesn't capture output CompletedProcess(args=['ls', '-l'], returncode=0) >>> subprocess.run(["ls", "-l", "/dev/null"], capture_output=True) CompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0, stdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\n', stderr=b'') ``` This clearly shows capture_output showing output if true but not if false. Test code: ```py import subprocess subprocess.run("dir", shell=True, capture_output=False) subprocess.run("dir", shell=True, capture_output=False) ``` Other notes: for some reason I get an error if I don't add shell=True, so maybe that contributes? I am on Windows 10 if that matters. 
---------- components: Library (Lib) messages: 400561 nosy: DragonEggBedrockBreaking priority: normal severity: normal status: open title: subprocess.run(capture_output=Bool) does the opposite of expected versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 04:52:56 2021 From: report at bugs.python.org (Ayush Parikh) Date: Mon, 30 Aug 2021 08:52:56 +0000 Subject: [New-bugs-announce] [issue45049] added cbrt() in MATH_RADIANS_METHODDEF which is missing before Message-ID: <1630313576.26.0.444362528628.issue45049@roundup.psfhosted.org> Change by Ayush Parikh : ---------- nosy: Ayushparikh-code priority: normal pull_requests: 26502 severity: normal status: open title: added cbrt() in MATH_RADIANS_METHODDEF which is missing before type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 12:17:25 2021 From: report at bugs.python.org (Ayush Parikh) Date: Mon, 30 Aug 2021 16:17:25 +0000 Subject: [New-bugs-announce] [issue45050] created unittest file analyze_text.py Message-ID: <1630340245.5.0.292799395135.issue45050@roundup.psfhosted.org> Change by Ayush Parikh : ---------- nosy: Ayushparikh-code priority: normal pull_requests: 26513 severity: normal status: open title: created unittest file analyze_text.py type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 12:44:02 2021 From: report at bugs.python.org (Ayush Parikh) Date: Mon, 30 Aug 2021 16:44:02 +0000 Subject: [New-bugs-announce] [issue45051] wrote optimised async_test.py for async lib Message-ID: <1630341842.53.0.675248638365.issue45051@roundup.psfhosted.org> Change by Ayush Parikh : ---------- nosy: Ayushparikh-code priority: normal pull_requests: 26516 severity: normal status: open title: wrote optimised async_test.py for async lib type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 14:21:28 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Mon, 30 Aug 2021 18:21:28 +0000 Subject: [New-bugs-announce] [issue45052] WithProcessesTestSharedMemory.test_shared_memory_basics fails on Windows Message-ID: <1630347688.96.0.848198201023.issue45052@roundup.psfhosted.org> New submission from Nikita Sobolev : While working on https://github.com/python/cpython/pull/28060 we've noticed that `test.test_multiprocessing_spawn.WithProcessesTestSharedMemory.test_shared_memory_basics` fails on Windows: ``` ====================================================================== FAIL: test_shared_memory_basics (test.test_multiprocessing_spawn.WithProcessesTestSharedMemory) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\a\cpython\cpython\lib\test\_test_multiprocessing.py", line 3794, in test_shared_memory_basics self.assertEqual(sms.size, sms2.size) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: 512 != 4096 ``` For now it is ignored. 
Related issue: https://bugs.python.org/issue45042 ---------- components: Tests messages: 400646 nosy: sobolevn priority: normal severity: normal status: open title: WithProcessesTestSharedMemory.test_shared_memory_basics fails on Windows type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 14:22:59 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Mon, 30 Aug 2021 18:22:59 +0000 Subject: [New-bugs-announce] [issue45053] MD5SumTests.test_checksum_fodder fails on Windows Message-ID: <1630347779.05.0.821089086925.issue45053@roundup.psfhosted.org> New submission from Nikita Sobolev : While working on https://github.com/python/cpython/pull/28060 we've noticed that `test.test_tools.test_md5sum.MD5SumTests.test_checksum_fodder` fails on Windows: ``` ====================================================================== FAIL: test_checksum_fodder (test.test_tools.test_md5sum.MD5SumTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\a\cpython\cpython\lib\test\test_tools\test_md5sum.py", line 41, in test_checksum_fodder self.assertIn(part.encode(), out) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: b'@test_1772_tmp\xc3\xa6' not found in b'd38dae2eb1ab346a292ef6850f9e1a0d @test_1772_tmp\xe6\\md5sum.fodder\r\n' ``` For now it is ignored. Related issue: https://bugs.python.org/issue45042 ---------- components: Tests messages: 400648 nosy: sobolevn priority: normal severity: normal status: open title: MD5SumTests.test_checksum_fodder fails on Windows type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 20:30:37 2021 From: report at bugs.python.org (Kevin Mills) Date: Tue, 31 Aug 2021 00:30:37 +0000 Subject: [New-bugs-announce] [issue45054] json module should issue warning about duplicate keys Message-ID: <1630369837.94.0.281309391119.issue45054@roundup.psfhosted.org> New submission from Kevin Mills : The json module will allow the following without complaint: import json d1 = {1: "fromstring", "1": "fromnumber"} string = json.dumps(d1) print(string) d2 = json.loads(string) print(d2) And it prints: {"1": "fromstring", "1": "fromnumber"} {'1': 'fromnumber'} This would be extremely confusing to anyone who doesn't already know that JSON keys have to be strings. Not only does `d1 != d2` (which the documentation does mention as a possibility after a round trip through JSON), but `len(d1) != len(d2)` and `d1['1'] != d2['1']`, even though '1' is in both. I suggest that if json.dump or json.dumps notices that it is producing a JSON document with duplicate keys, it should issue a warning. Similarly, if json.load or json.loads notices that it is reading a JSON document with duplicate keys, it should also issue a warning. 
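For comparison (a workaround available today, not the proposed warning itself), the load side can already detect duplicates through object_pairs_hook, which sees every key/value pair before they collapse into a dict:

import json

def warn_on_duplicates(pairs):
    keys = [key for key, _ in pairs]
    duplicates = {key for key in keys if keys.count(key) > 1}
    if duplicates:
        print(f"warning: duplicate JSON keys {duplicates}")
    return dict(pairs)

print(json.loads('{"1": "fromstring", "1": "fromnumber"}',
                 object_pairs_hook=warn_on_duplicates))
# warning: duplicate JSON keys {'1'}
# {'1': 'fromnumber'}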
---------- components: Library (Lib) messages: 400678 nosy: Zeturic priority: normal severity: normal status: open title: json module should issue warning about duplicate keys type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 21:44:59 2021 From: report at bugs.python.org (Guido van Rossum) Date: Tue, 31 Aug 2021 01:44:59 +0000 Subject: [New-bugs-announce] [issue45055] Fresh build on Windows fails the first time for zlib.c Message-ID: <1630374299.93.0.725029231147.issue45055@roundup.psfhosted.org> New submission from Guido van Rossum : When I make a fresh checkout of the main branch on Windows and type "pcbuild\build" it starts downloading some distributions (e.g. sqlite) and then starts building. Fine. But at some point there's a whole bunch of errors that seem to come from building zlibmodule.c. Re-running pcbuild\build then downloads some extra thing and then everything builds to completion. First set of downloads and selected logs: Using py -3.9 (found 3.9 with py.exe) Fetching external libraries... Fetching bzip2-1.0.6... Fetching sqlite-3.35.5.0... Fetching xz-5.2.2... Fetching zlib-1.2.11... Traceback (most recent call last): File "C:\Users\gvanrossum\deepfreeze\PCbuild\get_external.py", line 60, in main() File "C:\Users\gvanrossum\deepfreeze\PCbuild\get_external.py", line 56, in main extract_zip(args.externals_dir, zip_path).replace(final_name) File "C:\Users\gvanrossum\AppData\Local\Programs\Python\Python39\lib\pathlib.py", line 1395, in replace self._accessor.replace(self, target) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\gvanrossum\\deepfreeze\\PCbuild\\..\\externals\\cpython-source-deps-zlib-1.2.11' -> 'C:\\Users\\gvanrossum\\deepfreeze\\PCbuild\\..\\externals\\zlib-1.2.11' Fetching external binaries... Fetching libffi-3.3.0... Fetching openssl-bin-1.1.1l... Fetching tcltk-8.6.11.0... Finished. Using "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\msbuild.exe" (found in the Visual Studio installation) Using py -3.9 (found 3.9 with py.exe) C:\Users\gvanrossum\deepfreeze>"C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\msbuild.exe" "C:\Users\gvanrossum\deepfreeze\PCbuild\pcbuild.proj" /t:Build /m /nologo /v:m /clp:summary /p:Configuration=Release /p:Platform=x64 /p:IncludeExternals=true /p:IncludeCTypes=true /p:IncludeSSL=true /p:IncludeTkinter=true /p:UseTestMarker= /p:GIT="C:\Program Files\Git\cmd\git.exe" Killing any running python.exe instances... Regenerate pycore_ast.h pycore_ast_state.h Python-ast.c C:\Users\gvanrossum\deepfreeze\Python\Python-ast.c, C:\Users\gvanrossum\deepfreeze\Include\inte rnal\pycore_ast.h, C:\Users\gvanrossum\deepfreeze\Include\internal\pycore_ast_state.h regenerat ed. Regenerate opcode.h opcode_targets.h Include\opcode.h regenerated from Lib\opcode.py Jump table written into Python\opcode_targets.h Regenerate token-list.inc token.h token.c token.py Generated sources are up to date Getting build info from "C:\Program Files\Git\cmd\git.exe" Building heads/deepfreeze:044e8d866f deepfreeze _abc.c ... Errors: Compiling... 
thread.c traceback.c zlibmodule.c C:\Users\gvanrossum\deepfreeze\Modules\zlibmodule.c(10,10): fatal error C1083: Cannot open includ e file: 'zlib.h': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vc xproj] adler32.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\adler32.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vc xproj] compress.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\compress.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.v cxproj] crc32.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\crc32.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vcxp roj] deflate.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\deflate.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vc xproj] infback.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\infback.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vc xproj] inffast.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\inffast.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vc xproj] inflate.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\inflate.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vc xproj] inftrees.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\inftrees.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.v cxproj] trees.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\trees.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vcxp roj] uncompr.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\uncompr.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vc xproj] zutil.c c1 : fatal error C1083: Cannot open source file: 'C:\Users\gvanrossum\deepfreeze\externals\zlib-1 .2.11\zutil.c': No such file or directory [C:\Users\gvanrossum\deepfreeze\PCbuild\pythoncore.vcxp roj] dl_nt.c Build FAILED. (Followed by the same errors repeated.) Second build: Using py -3.9 (found 3.9 with py.exe) Fetching external libraries... bzip2-1.0.6 already exists, skipping. sqlite-3.35.5.0 already exists, skipping. xz-5.2.2 already exists, skipping. Fetching zlib-1.2.11... Fetching external binaries... libffi-3.3.0 already exists, skipping. openssl-bin-1.1.1l already exists, skipping. tcltk-8.6.11.0 already exists, skipping. Finished. And then everything builds problem-free. 
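If the PermissionError raised by Path.replace() in get_external.py is a transient sharing violation (an unverified assumption: on Windows an indexer or antivirus scanner can briefly hold a handle on the freshly extracted tree), one possible mitigation would be to retry the rename. A minimal sketch of that idea, not part of the actual PCbuild scripts:

```python
# Hypothetical helper; replace_with_retry is not part of PCbuild/get_external.py.
import time
from pathlib import Path

def replace_with_retry(src: Path, dst: Path, attempts: int = 5, delay: float = 1.0) -> None:
    """Retry Path.replace(), which can fail transiently on Windows while
    another process still holds a handle on the just-extracted directory."""
    for attempt in range(attempts):
        try:
            src.replace(dst)
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```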
---------- components: Build, Windows messages: 400680 nosy: gvanrossum, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Fresh build on Windows fails the first time for zlib.c versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 30 23:41:56 2021 From: report at bugs.python.org (Inada Naoki) Date: Tue, 31 Aug 2021 03:41:56 +0000 Subject: [New-bugs-announce] [issue45056] compiler: Unnecessary None in co_consts Message-ID: <1630381316.63.0.607874885673.issue45056@roundup.psfhosted.org> New submission from Inada Naoki : Python 3.10 compiler adds None to co_consts even when None is not used at all. ``` $ cat x1.py def foo(): "docstring" return 42 import dis dis.dis(foo) print(foo.__code__.co_consts) $ python3.9 x1.py 3 0 LOAD_CONST 1 (42) 2 RETURN_VALUE ('docstring', 42) $ python3.10 x1.py 3 0 LOAD_CONST 1 (42) 2 RETURN_VALUE ('docstring', 42, None) ``` ---------- components: Interpreter Core keywords: 3.10regression messages: 400683 nosy: methane priority: normal severity: normal status: open title: compiler: Unnecessary None in co_consts versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 03:24:28 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 31 Aug 2021 07:24:28 +0000 Subject: [New-bugs-announce] [issue45057] Simplify RegressionTestResult Message-ID: <1630394668.9.0.639670262222.issue45057@roundup.psfhosted.org> New submission from Serhiy Storchaka : RegressionTestResult is a subclass of TextTestResult, but it completely ignores the TextTestResult machinery for outputting results and re-implements it. The problem with this is not only the duplicated code, but also that if TextTestResult is changed (for example to fix issue25894) the corresponding changes have to be re-implemented in RegressionTestResult. And since the two implementations that produce the same result differ (sometimes in subtle ways), this adds much work and is error-prone. The proposed PR removes any text output code from RegressionTestResult and allows TextTestResult to be used for output. ---------- components: Tests messages: 400697 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Simplify RegressionTestResult type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 07:31:30 2021 From: report at bugs.python.org (kftse) Date: Tue, 31 Aug 2021 11:31:30 +0000 Subject: [New-bugs-announce] [issue45058] Undefined behavior for syntax "except AError or BError:" accepted by interpreter Message-ID: <1630409490.64.0.879285632353.issue45058@roundup.psfhosted.org> New submission from kftse : Test case: try: raise TypeError() except TypeError or ValueError: print("OK") try: raise ValueError() except TypeError or ValueError: print("OK") Output: (Python 3.9.0) OK OK # seems to eventually lead to a segmentation fault elsewhere (Python 3.8.0) OK Traceback (most recent call last): File "test.py", line 7, in raise ValueError() ValueError I understand that this code is not the correct syntax for catching exceptions. The awkward behavior is that the interpreter accepts this syntax, the output is correct in some cases (or even in both), but it seems to eventually lead to a segmentation fault elsewhere. 
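For context on the expected semantics: `TypeError or ValueError` is evaluated as an ordinary boolean expression before the exception match takes place, and since a class object is truthy it reduces to just `TypeError`, which is why 3.8 lets the ValueError propagate. The conventional spelling that catches either exception is a tuple of classes, as in this small sketch:

```python
# Catch either exception type by listing both classes in a tuple.
try:
    raise ValueError()
except (TypeError, ValueError):
    print("OK")  # prints OK on every Python version
```

The open question in the report is why 3.9.0 also prints "OK" for the second snippet, where only TypeError should match. 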
---------- components: Interpreter Core messages: 400710 nosy: kftse20031207 priority: normal severity: normal status: open title: Undefined behavior for syntax "except AError or BError:" accepted by interpreter type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 07:41:52 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 31 Aug 2021 11:41:52 +0000 Subject: [New-bugs-announce] [issue45059] Typo: using "==" instead of "=" Message-ID: <1630410112.9.0.510923106875.issue45059@roundup.psfhosted.org> New submission from Serhiy Storchaka : While searching for use the equality operator with None I found the possible use of "==" instead of "=" (assignment) in Lib/idlelib/idle_test/test_macosx.py. for platform, types in ('darwin', alltypes), ('other', nontypes): with self.subTest(platform=platform): macosx.platform = platform macosx._tk_type == None macosx._init_tk_type() self.assertIn(macosx._tk_type, types) ---------- assignee: terry.reedy components: IDLE, Tests messages: 400713 nosy: serhiy.storchaka, taleinat, terry.reedy priority: normal severity: normal status: open title: Typo: using "==" instead of "=" type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 07:47:38 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 31 Aug 2021 11:47:38 +0000 Subject: [New-bugs-announce] [issue45060] Do not use the equality operators with None Message-ID: <1630410458.57.0.930194989698.issue45060@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are few uses of operators "==" and "!=" with None in the stdlib (against more than 8000 uses of "is" and "is not"). It is very uncommon writing, contradicts PEP 8, and is not safe in general. One bug was found -- using "==" instead of assignment (issue45059). ---------- components: Library (Lib) messages: 400715 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Do not use the equality operators with None versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 09:17:39 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 31 Aug 2021 13:17:39 +0000 Subject: [New-bugs-announce] [issue45061] [C API] Detect refcount bugs on True/False in C extensions Message-ID: <1630415859.84.0.961228000434.issue45061@roundup.psfhosted.org> New submission from STINNER Victor : Writing C extensions using directly the C API is error prone. It's easy to add or forget a Py_INCREF or Py_DECREF. Adding Py_DECREF(Py_True) or Py_DECREF(Py_False) by mistake causes a surprising crash at Python exit: --- Debug memory block at address p=0x8a6e80: API '' 0 bytes originally requested The 7 pad bytes at p-7 are not all FORBIDDENBYTE (0xfd): at p-7: 0x00 *** OUCH at p-6: 0x00 *** OUCH at p-5: 0x00 *** OUCH at p-4: 0x00 *** OUCH at p-3: 0x00 *** OUCH at p-2: 0x00 *** OUCH at p-1: 0x00 *** OUCH Because memory is corrupted at the start, the count of bytes requested may be bogus, and checking the trailing pad bytes may segfault. 
The 8 pad bytes at tail=0x8a6e80 are not all FORBIDDENBYTE (0xfd): at tail+0: 0x00 *** OUCH at tail+1: 0x00 *** OUCH at tail+2: 0x00 *** OUCH at tail+3: 0x00 *** OUCH at tail+4: 0x00 *** OUCH at tail+5: 0x00 *** OUCH at tail+6: 0x00 *** OUCH at tail+7: 0x00 *** OUCH Enable tracemalloc to get the memory block allocation traceback Fatal Python error: _PyMem_DebugRawFree: bad ID: Allocated using API '', verified using API 'o' Python runtime state: finalizing (tstate=0x0000000001f43c50) Current thread 0x00007f3f562fa740 (most recent call first): Garbage-collecting Abandon (core dumped) --- In my case, the bug occurs at Python exit, in code_dealloc(): "Py_XDECREF(co->co_consts);" destroys a tuple which contains True. It calls object_dealloc() which calls PyBool_Type.tp_dealloc(Py_True), but this object is allocated statically, and so its memory must not deallocated by the Python dynamic memory allocator. In debug mode, PyObject_Free() triggers a fatal error. Concrete example of such bug in PySide with Python 3.10 which is now stricter on reference counting (thanks to the work made in bpo-1635741 and for Python subinterpreters): https://bugreports.qt.io/browse/PYSIDE-1436 I propose to add a specific deallocator functions on bool to detect such bug, to ease debugging. There is already a similar deallocator for the None singleton. ---------- components: Extension Modules messages: 400728 nosy: vstinner priority: normal severity: normal status: open title: [C API] Detect refcount bugs on True/False in C extensions versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 09:43:37 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 31 Aug 2021 13:43:37 +0000 Subject: [New-bugs-announce] [issue45062] test_asyncio: test_huge_content_recvinto() failed Message-ID: <1630417417.52.0.890945752162.issue45062@roundup.psfhosted.org> New submission from STINNER Victor : PPC64LE RHEL8 Refleaks 3.9: https://buildbot.python.org/all/#/builders/482/builds/128 test test_asyncio failed -- Traceback (most recent call last): File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/test/test_asyncio/test_sock_lowlevel.py", line 373, in test_huge_content_recvinto self.loop.run_until_complete( File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/asyncio/base_events.py", line 642, in run_until_complete return future.result() File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/test/test_asyncio/test_sock_lowlevel.py", line 366, in _basetest_huge_content_recvinto self.assertEqual(size, 0) AssertionError: 978216 != 0 Full output (reformatted for readability): --- 0:22:42 load avg: 10.83 [300/425/1] test_asyncio failed (1 failure) (22 min 40 sec) -- running: (...) beginning 6 repetitions 123456 Unknown child process pid 201442, will report returncode 255 Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 201442 is closed ... Unknown child process pid 240062, will report returncode 255 Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 240062 is closed . 
/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/test/support/__init__.py:1468: ResourceWarning: unclosed gc.collect() ResourceWarning: Enable tracemalloc to get the object allocation traceback Task exception was never retrieved future: exception=ConnectionResetError(104, 'Connection reset by peer')> Traceback (most recent call last): File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/asyncio/selector_events.py", line 462, in sock_sendall return await fut File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/asyncio/selector_events.py", line 470, in _sock_sendall n = sock.send(view[start:]) ConnectionResetError: [Errno 104] Connection reset by peer test test_asyncio failed -- Traceback (most recent call last): File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/test/test_asyncio/test_sock_lowlevel.py", line 373, in test_huge_content_recvinto self.loop.run_until_complete( File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/asyncio/base_events.py", line 642, in run_until_complete return future.result() File "/home/buildbot/buildarea/3.9.cstratak-RHEL8-ppc64le.refleak/build/Lib/test/test_asyncio/test_sock_lowlevel.py", line 366, in _basetest_huge_content_recvinto self.assertEqual(size, 0) AssertionError: 978216 != 0 --- Moreover, test_huge_content_recvinto() seems to change the event loop policy without restoring it once done: --- 0:48:26 load avg: 1.00 Re-running test_asyncio in verbose mode (matching: test_huge_content_recvinto) beginning 6 repetitions 123456 test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.367s OK test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.363s OK test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.402s OK test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.338s OK test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... 
ok ---------------------------------------------------------------------- Ran 3 tests in 0.332s OK test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.EPollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.PollEventLoopTests) ... ok test_huge_content_recvinto (test.test_asyncio.test_sock_lowlevel.SelectEventLoopTests) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.384s OK ...... Warning -- asyncio.events._event_loop_policy was modified by test_asyncio Before: None After: 1 test failed again: test_asyncio --- ---------- components: Tests, asyncio messages: 400732 nosy: asvetlov, erlendaasland, lukasz.langa, pablogsal, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: test_huge_content_recvinto() failed versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 10:05:56 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 31 Aug 2021 14:05:56 +0000 Subject: [New-bugs-announce] [issue45063] PEP 657 Fine Grained Error Locations: make the traceback less verbose when possible Message-ID: <1630418756.59.0.674837335232.issue45063@roundup.psfhosted.org> New submission from STINNER Victor : The PEP 657 introduced ^^^ in tracebacks. It is useful when the error happens on an sub-expression in a long line. Example: File "/home/vstinner/python/main/Lib/ftplib.py", line 462, in retrlines with self.transfercmd(cmd) as conn, \ ^^^^^^^^^^^^^^^^^^^^^ But ^^^ makes the output more verbose and doesn't bring much value when the error concerns the whole line: File "/home/vstinner/python/main/Lib/socket.py", line 845, in create_connection raise err ^^^^^^^^^ Would it be possible to omit ^^^ when it concerns the whole line? 
Full example (currently): ERROR: test_retrlines (test.test_ftplib.TestFTPClass) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/python/main/Lib/test/test_ftplib.py", line 603, in test_retrlines self.client.retrlines('retr', received.append) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/ftplib.py", line 462, in retrlines with self.transfercmd(cmd) as conn, \ ^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/ftplib.py", line 393, in transfercmd return self.ntransfercmd(cmd, rest)[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/ftplib.py", line 354, in ntransfercmd conn = socket.create_connection((host, port), self.timeout, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/socket.py", line 845, in create_connection raise err ^^^^^^^^^ File "/home/vstinner/python/main/Lib/socket.py", line 833, in create_connection sock.connect(sa) ^^^^^^^^^^^^^^^^ ConnectionRefusedError: [Errno 111] Connection refused I would prefer: ERROR: test_retrlines (test.test_ftplib.TestFTPClass) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/python/main/Lib/test/test_ftplib.py", line 603, in test_retrlines self.client.retrlines('retr', received.append) File "/home/vstinner/python/main/Lib/ftplib.py", line 462, in retrlines with self.transfercmd(cmd) as conn, \ ^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/ftplib.py", line 393, in transfercmd return self.ntransfercmd(cmd, rest)[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/ftplib.py", line 354, in ntransfercmd conn = socket.create_connection((host, port), self.timeout, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vstinner/python/main/Lib/socket.py", line 845, in create_connection raise err File "/home/vstinner/python/main/Lib/socket.py", line 833, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused In term of release process, can we change the traceback after Python 3.10.0 final? Or can we only change it in Python 3.11? I mark the issue as a release blocker, but I let Pablo (author of the PEP and Python 3.10 release manager) decide. ---------- components: Interpreter Core messages: 400736 nosy: BTaskaya, lukasz.langa, pablogsal, vstinner priority: release blocker severity: normal status: open title: PEP 657 Fine Grained Error Locations: make the traceback less verbose when possible versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 11:11:37 2021 From: report at bugs.python.org (Mateusz) Date: Tue, 31 Aug 2021 15:11:37 +0000 Subject: [New-bugs-announce] [issue45064] Raising AttributeError in descriptor decorator causing searching attribute in __mro__ Message-ID: <1630422697.09.0.31446327761.issue45064@roundup.psfhosted.org> New submission from Mateusz : A descriptor that is raising AttributeError in __get__() causes that the Python's interpreter continues searching for attributes in __mro__ calling __getattr__() function in inherited classes. Let's take a look for example script with this bug. 
class A1: def __getattr__(self, name): print("A1 visited") raise AttributeError(f"{self.__class__.__name__}: {name} not found.") class A2(A1): def __getattr__(self, name): print("A2 visited") super().__getattr__(name) class A3(A2): def __getattr__(self, name): print("A3 visited") super().__getattr__(name) class B: def __init__(self, f): self.f = f def __get__(self, obj, objtype=None): raise AttributeError("Python bug?") class C(A3): @B def test(self): return 25 B is a decorator attached to C.test() and it is throwing AttributeError. When c.test() is performed it starts for walking through C.__mro__ and calling __getattr__() function in objects. >>> from bug import C >>> c = C() >>> c.test() A3 visited A2 visited A1 visited Traceback (most recent call last): File "", line 1, in File "/home/blooser/python-bug/bug.py", line 17, in __getattr__ super().__getattr__(name) File "/home/blooser/python-bug/bug.py", line 11, in __getattr__ super().__getattr__(name) File "/home/blooser/python-bug/bug.py", line 6, in __getattr__ raise AttributeError(f"{self.__class__.__name__}: {name} not found.") AttributeError: C: test not found. Changing error in B.__get__() to NameError: class B: def __init__(self, f): self.f = f def __get__(self, obj, objtype=None): raise NameError("Python bug?") causes it omits C.__mro__. >>> from bug import C >>> c = C() >>> c.test() Traceback (most recent call last): File "", line 1, in File "/home/blooser/python-bug/bug.py", line 26, in __get__ raise NameError("Python bug?") NameError: Python bug? I'm thinking that it is expected behavior or is this a bug? ---------- components: Interpreter Core files: bug.py messages: 400743 nosy: blooser priority: normal severity: normal status: open title: Raising AttributeError in descriptor decorator causing searching attribute in __mro__ type: behavior versions: Python 3.11 Added file: https://bugs.python.org/file50248/bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 11:26:55 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 31 Aug 2021 15:26:55 +0000 Subject: [New-bugs-announce] [issue45065] test_asyncio failed (env changed) on s390x RHEL8 Refleaks 3.10: RuntimeError('Event loop is closed') in _SSLProtocolTransport.__del__ Message-ID: <1630423615.96.0.978514058482.issue45065@roundup.psfhosted.org> New submission from STINNER Victor : s390x RHEL8 Refleaks 3.10: https://buildbot.python.org/all/#/builders/669/builds/121 Reformatted output: 0:19:31 load avg: 2.74 [316/427/1] test_asyncio failed (env changed) (14 min 36 sec) -- running: test_statistics (37.6 sec), test_signal (16 min 51 sec), test_pydoc (5 min 36 sec), test_xmlrpc (1 min 49 sec), test_subprocess (1 min 15 sec) beginning 6 repetitions 123456 Unknown child process pid 1398289, will report returncode 255 Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 1398289 is closed . Unknown child process pid 1404140, will report returncode 255 Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 1404140 is closed . 
/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/sslproto.py:320: ResourceWarning: unclosed transport _warn(f"unclosed transport {self!r}", ResourceWarning, source=self) ResourceWarning: Enable tracemalloc to get the object allocation traceback Warning -- Unraisable exception Exception ignored in: Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/sslproto.py", line 321, in __del__ self.close() File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/sslproto.py", line 316, in close self._ssl_protocol._start_shutdown() File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/sslproto.py", line 590, in _start_shutdown self._abort() File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/sslproto.py", line 731, in _abort self._transport.abort() File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/selector_events.py", line 680, in abort self._force_close(None) File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/selector_events.py", line 731, in _force_close self._loop.call_soon(self._call_connection_lost, exc) File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/base_events.py", line 745, in call_soon self._check_closed() File "/home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/base_events.py", line 510, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed /home/dje/cpython-buildarea/3.10.edelsohn-rhel8-z.refleak/build/Lib/asyncio/selector_events.py:704: ResourceWarning: unclosed transport <_SelectorSocketTransport closing fd=9> _warn(f"unclosed transport {self!r}", ResourceWarning, source=self) ResourceWarning: Enable tracemalloc to get the object allocation traceback Task was destroyed but it is pending! task: wait_for=> Unknown child process pid 1411156, will report returncode 255 Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 1411156 is closed . Unknown child process pid 1415148, will report returncode 255 Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 1415148 is closed .. Unknown child process pid 1426190, will report returncode 255 Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 1426190 is closed . ---------- components: Tests, asyncio messages: 400744 nosy: asvetlov, erlendaasland, lukasz.langa, pablogsal, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio failed (env changed) on s390x RHEL8 Refleaks 3.10: RuntimeError('Event loop is closed') in _SSLProtocolTransport.__del__ versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 14:04:32 2021 From: report at bugs.python.org (anarcat) Date: Tue, 31 Aug 2021 18:04:32 +0000 Subject: [New-bugs-announce] [issue45066] email parser fails to decode quoted-printable rfc822 message attachemnt Message-ID: <1630433072.94.0.61440036856.issue45066@roundup.psfhosted.org> New submission from anarcat : If an email message has a message/rfc822 part *and* that part is quoted-printable encoded, Python freaks out. 
Consider this code: import email.parser import email.policy # python 3.9.2 cannot decode this message, it fails with # "email.errors.StartBoundaryNotFoundDefect" mail = """Mime-Version: 1.0 Content-Type: multipart/report; boundary=aaaaaa Content-Transfer-Encoding: 7bit --aaaaaa Content-Type: message/rfc822 Content-Transfer-Encoding: quoted-printable Content-Disposition: inline MIME-Version: 1.0 Content-Type: multipart/alternative; boundary=3D"=3Dbbbbbb" --=3Dbbbbbb Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=3Dutf-8 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx= x --=3Dbbbbbb-- --aaaaaa-- """ msg_abuse = email.parser.Parser(policy=email.policy.default + email.policy.strict).parsestr(mail) That crashes with: email.errors.StartBoundaryNotFoundDefect This should normally work: the sub-message is valid, assuming you decode the content. But if you do not, you end up in this bizarre situation, because the multipart boundary is probably considered to be something like `3D"=3Dbbbbbb"`, and of course the above code crashes with the above exception. If you remove the quoted-printable part from the equation, the parser actually behaves: import email.parser import email.policy # python 3.9.2 cannot decode this message, it fails with # "email.errors.StartBoundaryNotFoundDefect" mail = """Mime-Version: 1.0 Content-Type: multipart/report; boundary=aaaaaa Content-Transfer-Encoding: 7bit --aaaaaa Content-Type: message/rfc822 Content-Disposition: inline MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="=bbbbbb" --=bbbbbb Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset=utf-8 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --=bbbbbb-- --aaaaaa-- """ msg_abuse = email.parser.Parser(policy=email.policy.default + email.policy.strict).parsestr(mail) The above correctly parses the message. This problem causes all sorts of weird issues. In one real-world example, it would just stop parsing headers inside the email because long lines in headers (typical in Received-by headers) would get broken up... So it would not actually fail completely. Or, to be more accurate, by *default* (ie. if you do not use strict), it does not crash and instead produces invalid data (e.g. a message without a Message-ID or From). On most messages that are encoded this way, the strict mode will actually fail with: email.errors.MissingHeaderBodySeparatorDefect because it will stumble upon a header line that should be a continuation but instead is treated like a full header line, so it's missing a colon (":"). ---------- components: email messages: 400764 nosy: anarcat, barry, r.david.murray priority: normal severity: normal status: open title: email parser fails to decode quoted-printable rfc822 message attachemnt type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 19:39:03 2021 From: report at bugs.python.org (Senthil Kumaran) Date: Tue, 31 Aug 2021 23:39:03 +0000 Subject: [New-bugs-announce] [issue45067] Failed to build _curses on CentOS 7 Message-ID: <1630453143.24.0.854803512429.issue45067@roundup.psfhosted.org> New submission from Senthil Kumaran : I verified that ncurses-devel is installed. ./configure is able to verify ncurses checking curses.h usability... yes checking curses.h presence... yes checking for curses.h... yes checking ncurses.h usability... yes checking ncurses.h presence... 
yes checking for ncurses.h... yes checking for term.h... yes But _curses fails to build, this is the output message from `make` gcc -pthread -fPIC -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -DHAVE_NCURSESW=1 -I/usr/include/ncursesw -I./Include -I. -I/usr/local/include -I/local/home/senthilx/cpython/Include -I/local/home/senthilx/cpython -c /local/home/senthilx/cpython/Modules/_curses_panel.c -o build/temp.linux-x86_64-3.11/local/home/senthilx/cpython/Modules/_curses_panel.o gcc -pthread -shared build/temp.linux-x86_64-3.11/local/home/senthilx/cpython/Modules/_curses_panel.o -L/usr/local/lib -lpanelw -lncursesw -o build/lib.linux-x86_64-3.11/_curses_panel.cpython-311-x86_64-linux-gnu.so *** WARNING: renaming "_curses_panel" since importing it failed: No module named '_curses' The following modules found by detect_modules() in setup.py, have been built by the Makefile instead, as configured by the Setup files: _abc pwd time Failed to build these modules: _curses Following modules built successfully but were removed because they could not be imported: _curses_panel ---------- messages: 400795 nosy: orsenthil priority: normal severity: normal status: open title: Failed to build _curses on CentOS 7 type: compile error versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 23:10:44 2021 From: report at bugs.python.org (xcl-1) Date: Wed, 01 Sep 2021 03:10:44 +0000 Subject: [New-bugs-announce] [issue45068] python 3.9.2 contains libcrypto-1_1.dll and libssl-1_1.dll associates CVE-2021-23840\CVE-2021-3450\CVE-2021-3711\CVE-2021-3712\CVE-2021-23841\CVE-2021-3449 of openssl-1.1.1i Message-ID: <1630465844.21.0.724335860149.issue45068@roundup.psfhosted.org> New submission from xcl-1 <1318683902 at qq.com>: Calls to EVP_CipherUpdate, EVP_EncryptUpdate and EVP_DecryptUpdate may overflow the output length argument in some cases where the input length is close to the maximum permissable length for an integer on the platform. In such cases the return value from the function call will be 1 (indicating success), but the output length value will be negative. This could cause applications to behave incorrectly or crash. OpenSSL versions 1.1.1i and below are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are affected by this issue. However OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). 
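For reference, the OpenSSL build actually loaded by a given Python binary can be checked from the ssl module, which makes it easy to confirm whether an installation still carries the affected 1.1.1i:

```python
import ssl

# Version string of the OpenSSL library loaded by the interpreter,
# e.g. 'OpenSSL 1.1.1i  8 Dec 2020' for the 3.9.2 binaries described above;
# 1.1.1j or later contains the fixes listed in the advisory text.
print(ssl.OPENSSL_VERSION)
print(ssl.OPENSSL_VERSION_INFO)  # the same version as a numeric tuple
```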
---------- components: Build messages: 400798 nosy: xcl123 priority: normal severity: normal status: open title: python 3.9.2 contains libcrypto-1_1.dll and libssl-1_1.dll associates CVE-2021-23840\CVE-2021-3450\CVE-2021-3711\CVE-2021-3712\CVE-2021-23841\CVE-2021-3449 of openssl-1.1.1i type: security _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 23:19:17 2021 From: report at bugs.python.org (xcl-1) Date: Wed, 01 Sep 2021 03:19:17 +0000 Subject: [New-bugs-announce] [issue45069] python 3.9.2 contains libcrypto-1_1.dll and libssl-1_1.dll associates CVE-2021-23840\CVE-2021-3450\CVE-2021-3711\CVE-2021-3712\CVE-2021-23841\CVE-2021-3449 of openssl-1.1.1i Message-ID: <1630466357.81.0.93174296975.issue45069@roundup.psfhosted.org> New submission from xcl-1 <1318683902 at qq.com>: python 3.9.2 contains libcrypto-1_1.dll and libssl-1_1.dll associates CVE-2021-23840\CVE-2021-3450\CVE-2021-3711\CVE-2021-3712\CVE-2021-23841\CVE-2021-3449 of openssl-1.1.1i ---------- messages: 400800 nosy: xcl123 priority: normal severity: normal status: open title: python 3.9.2 contains libcrypto-1_1.dll and libssl-1_1.dll associates CVE-2021-23840\CVE-2021-3450\CVE-2021-3711\CVE-2021-3712\CVE-2021-23841\CVE-2021-3449 of openssl-1.1.1i _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 23:19:31 2021 From: report at bugs.python.org (xcl-1) Date: Wed, 01 Sep 2021 03:19:31 +0000 Subject: [New-bugs-announce] [issue45070] python 3.9.2 contains wininst-10.0-amd64.exe. wininst-10.0.exe.wininst-7.1.exe. wininst-8.0.exe.wininst-9.0.exe.wininst-9.0-amd64.exe.wininst-14.0-amd64.exe and wininst-14.0.exe associates CVE-2016-9843, CVE-2016-9841, CVE-2016-9840 and CVE-2016-9842 of zlib(1.2.8, 1.2.3,1.2.5) Message-ID: <1630466371.81.0.178412542042.issue45070@roundup.psfhosted.org> New submission from xcl-1 <1318683902 at qq.com>: python 3.9.2 contains wininst-10.0-amd64.exe. wininst-10.0.exe.wininst-7.1.exe. wininst-8.0.exe.wininst-9.0.exe.wininst-9.0-amd64.exe.wininst-14.0-amd64.exe and wininst-14.0.exe associates CVE-2016-9843, CVE-2016-9841, CVE-2016-9840 and CVE-2016-9842 of zlib(1.2.8, 1.2.3,1.2.5) ---------- messages: 400801 nosy: xcl123 priority: normal severity: normal status: open title: python 3.9.2 contains wininst-10.0-amd64.exe. wininst-10.0.exe.wininst-7.1.exe. 
wininst-8.0.exe.wininst-9.0.exe.wininst-9.0-amd64.exe.wininst-14.0-amd64.exe and wininst-14.0.exe associates CVE-2016-9843, CVE-2016-9841, CVE-2016-9840 and CVE-2016-9842 of zlib(1.2.8, 1.2.3,1.2.5) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 23:21:03 2021 From: report at bugs.python.org (xcl-1) Date: Wed, 01 Sep 2021 03:21:03 +0000 Subject: [New-bugs-announce] [issue45071] python 3.9.2 contains _bz2.pyd associates CVE-2019-12900 and CVE-2016-3189 of bzip2-1.0.6 Message-ID: <1630466463.38.0.708138715693.issue45071@roundup.psfhosted.org> New submission from xcl-1 <1318683902 at qq.com>: python 3.9.2 contains _bz2.pyd associates CVE-2019-12900 and CVE-2016-3189 of bzip2-1.0.6 ---------- messages: 400802 nosy: xcl123 priority: normal severity: normal status: open title: python 3.9.2 contains _bz2.pyd associates CVE-2019-12900 and CVE-2016-3189 of bzip2-1.0.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 31 23:23:04 2021 From: report at bugs.python.org (xcl-1) Date: Wed, 01 Sep 2021 03:23:04 +0000 Subject: [New-bugs-announce] [issue45072] python 3.9.2 contains ensurepip and pip associates CVE-2021-3572 of pip-20.2.3 Message-ID: <1630466584.74.0.362679690335.issue45072@roundup.psfhosted.org> New submission from xcl-1 <1318683902 at qq.com>: python 3.9.2 contains ensurepip and pip associates CVE-2021-3572 of pip-20.2.3 ---------- messages: 400803 nosy: xcl123 priority: normal severity: normal status: open title: python 3.9.2 contains ensurepip and pip associates CVE-2021-3572 of pip-20.2.3 _______________________________________ Python tracker _______________________________________
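As a quick check for the last report, both the bundled pip wheel and the installed pip can be queried directly; anything at or below 20.2.3 predates the CVE-2021-3572 fix:

```python
import ensurepip
import pip

print(ensurepip.version())  # pip wheel bundled with this interpreter, e.g. '20.2.3'
print(pip.__version__)      # pip that is actually installed in the environment
```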