From report at bugs.python.org Sun Dec 1 05:40:54 2019 From: report at bugs.python.org (Ankur) Date: Sun, 01 Dec 2019 10:40:54 +0000 Subject: [New-bugs-announce] [issue38948] os.path.ismount() returns true in python 3.7.4 and false in 2.7.14 Message-ID: <1575196854.18.0.0543230038393.issue38948@roundup.psfhosted.org> New submission from Ankur : Tested with the following lines of code on Windows 10:

import os.path
print(os.path.ismount("F:"))

The above statement returns True in Python 3.7.4 and False in 2.7.14. Note that the F: drive does not have any mount points. Somehow, Python 3.7.4 returns True for all drive letters except C: ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 357674 nosy: jainankur priority: normal severity: normal status: open title: os.path.ismount() returns true in python 3.7.4 and false in 2.7.14 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 1 09:01:26 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sun, 01 Dec 2019 14:01:26 +0000 Subject: [New-bugs-announce] [issue38949] incorrect prefix, exec_prefix in distutils.command.install Message-ID: <1575208886.3.0.139345727722.issue38949@roundup.psfhosted.org> New submission from Xavier de Gaye : In function finalize_options() of Lib/distutils/command/install.py at https://github.com/python/cpython/blob/575d0b46d122292ca6e0576a91265d7abf7cbc3d/Lib/distutils/command/install.py#L284 (prefix, exec_prefix) is set using get_config_vars(). This may be incorrect when Python has been manually copied to another location from the one where it was installed with 'make install'. We should use sys.prefix and sys.exec_prefix instead; those values are calculated by getpath.c rather than retrieved from the sysconfigdata module.
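The two sources of prefix information being contrasted can be inspected side by side; a minimal sketch (the printed values depend on the installation):

```python
import sys
import sysconfig

# Computed at interpreter startup by getpath.c from the actual install location:
print(sys.prefix, sys.exec_prefix)

# Recorded in the sysconfigdata module at build time; these can go stale
# if the installation tree is later copied somewhere else:
print(sysconfig.get_config_vars('prefix', 'exec_prefix'))
```

For an installation that has not been moved, the two pairs of values normally agree.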
---------- components: Distutils messages: 357678 nosy: dstufft, eric.araujo, xdegaye priority: normal severity: normal stage: needs patch status: open title: incorrect prefix, exec_prefix in distutils.command.install type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 1 09:30:00 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sun, 01 Dec 2019 14:30:00 +0000 Subject: [New-bugs-announce] [issue38950] argparse uses "optional arguments" for "keyword arguments" Message-ID: <1575210600.45.0.569419059905.issue38950@roundup.psfhosted.org> New submission from Géry : The argparse module incorrectly uses the term "optional arguments" for keyword arguments. For instance, this argument parser takes a required keyword argument and an optional positional argument, but classifies the former as an "optional argument" and the latter as a "positional argument":

>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--foo', required=True)
_StoreAction(option_strings=['--foo'], dest='foo', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None)
>>> parser.add_argument('bar', nargs='?')
_StoreAction(option_strings=[], dest='bar', nargs='?', const=None, default=None, type=None, choices=None, help=None, metavar=None)
>>> parser.parse_args(['-h'])
usage: [-h] --foo FOO [bar]

positional arguments:
  bar

optional arguments:
  -h, --help  show this help message and exit
  --foo FOO

Since the actual classification seems to distinguish positional from keyword arguments instead of required from optional arguments, I think that the "optional arguments:" section should be renamed to "keyword arguments:".
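The mismatch is purely about the help-text headings; parsing itself behaves as described. A minimal sketch (argument names are illustrative; later Python versions have since renamed the heading to "options"):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--foo', required=True)  # required, yet listed under "optional arguments"
parser.add_argument('bar', nargs='?')        # optional, yet listed under "positional arguments"

args = parser.parse_args(['--foo', 'spam'])
print(args.foo, args.bar)  # spam None
```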
---------- components: Library (Lib) messages: 357681 nosy: bethard, maggyero, rhettinger priority: normal severity: normal status: open title: argparse uses "optional arguments" for "keyword arguments" type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 1 13:38:45 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Sun, 01 Dec 2019 18:38:45 +0000 Subject: [New-bugs-announce] [issue38951] Use threading.main_thread() check in asyncio Message-ID: <1575225525.0.0.724928399081.issue38951@roundup.psfhosted.org> New submission from Andrew Svetlov : We currently use the private `isinstance(thread, threading._MainThread)` check. The main_thread() function was added in Python 3.4 (by me) precisely to avoid this. Sorry, I forgot to update the asyncio code. The fix is trivial; I would appreciate it if somebody could take care of it. ---------- components: asyncio keywords: easy messages: 357683 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Use threading.main_thread() check in asyncio versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 1 20:07:46 2019 From: report at bugs.python.org (Max Coplan) Date: Mon, 02 Dec 2019 01:07:46 +0000 Subject: [New-bugs-announce] [issue38952] asyncio cannot handle Python3 IPv4Address or IPv6 Address Message-ID: <1575248866.63.0.518271436447.issue38952@roundup.psfhosted.org> New submission from Max Coplan : Trying to use the new Python 3 `IPv4Address` class fails with the following error ``` File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 1270, in _ensure_resolved info = _ipaddr_info(host, port, family, type, proto, *address[2:]) File
"/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 134, in _ipaddr_info if '%' in host: TypeError: argument of type 'IPv4Address' is not iterable ``` ---------- components: asyncio messages: 357697 nosy: Max Coplan, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio cannot handle Python3 IPv4Address or IPv6 Address versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 2 06:06:14 2019 From: report at bugs.python.org (Zac Hatfield-Dodds) Date: Mon, 02 Dec 2019 11:06:14 +0000 Subject: [New-bugs-announce] [issue38953] Untokenize and retokenize does not round-trip Message-ID: <1575284774.35.0.371169137429.issue38953@roundup.psfhosted.org> New submission from Zac Hatfield-Dodds : I've been working on a tool called Hypothesmith - https://github.com/Zac-HD/hypothesmith - to generate arbitrary Python source code, inspired by CSmith's success in finding C compiler bugs. It's based on the grammar but ultimately only generates strings which `compile` accepts; this is the only way I know to answer the question "is the string valid Python"! I should be clear that I don't think the minimal examples are representative of real problems that users may encounter! 
However, fuzzing is very effective at finding important bugs if we can get these apparently-trivial ones out of the way by changing either the code or the test :-)

```python
@example("#")
@example("\n\\\n")
@example("#\n\x0cpass#\n")
@given(source_code=hypothesmith.from_grammar().map(fixup).filter(str.strip))
def test_tokenize_round_trip_string(source_code):
    tokens = list(tokenize.generate_tokens(io.StringIO(source_code).readline))
    outstring = tokenize.untokenize(tokens)  # may have changed whitespace from source
    output = tokenize.generate_tokens(io.StringIO(outstring).readline)
    assert [(t.type, t.string) for t in tokens] == [(t.type, t.string) for t in output]
```

Each of the `@example` cases is accepted by `compile` but fails the test; the `@given` case describes how to generate more such strings. You can read more details in the Hypothesmith repo if interested. I think these are real and probably unimportant bugs, but I'd love to start a conversation about what properties should *always* hold for functions dealing with Python source code - and how best to report research results if I can demonstrate that they don't!
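For inputs without the edge cases above, the round-trip property does hold; a minimal sketch of the same check on a well-behaved string:

```python
import io
import tokenize

source = "x = 1\nprint(x)\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
outstring = tokenize.untokenize(tokens)  # may normalize whitespace
output = tokenize.generate_tokens(io.StringIO(outstring).readline)
# Token types and strings survive the untokenize/retokenize cycle here:
assert [(t.type, t.string) for t in tokens] == [(t.type, t.string) for t in output]
```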
(for example, lib2to3 has many similar failures but I don't want to open a long list of low-value issues) ---------- components: Library (Lib) messages: 357704 nosy: Zac Hatfield-Dodds, meador.inge priority: normal severity: normal status: open title: Untokenize and retokenize does not round-trip type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 2 08:38:37 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 02 Dec 2019 13:38:37 +0000 Subject: [New-bugs-announce] [issue38954] test_ssl fails in all Fedora buildbots Message-ID: <1575293917.48.0.60189521632.issue38954@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : This issue is probably a duplicate of another one, but I decided to open a new one because it affects all the Fedora buildbots. test test_ssl failed test_timeout_connect_ex (test.test_ssl.NetworkedTests) ... ok ====================================================================== FAIL: test_min_max_version (test.test_ssl.ContextTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.8.cstratak-fedora-rawhide-x86_64.lto-pgo/build/Lib/test/test_ssl.py", line 1207, in test_min_max_version self.assertEqual( AssertionError: != ---------------------------------------------------------------------- Ran 161 tests in 2.681s FAILED (failures=1, skipped=11) 1 test failed again: test_ssl Example failure: https://buildbot.python.org/all/#/builders/222 ---------- components: Tests messages: 357708 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_ssl fails in all Fedora buildbots versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 2 13:39:47 2019 From: report at
bugs.python.org (Matthias Bussonnier) Date: Mon, 02 Dec 2019 18:39:47 +0000 Subject: [New-bugs-announce] [issue38955] Non-idempotent behavior of asyncio.get_event_loop and asyncio.run sequence. Message-ID: <1575311987.47.0.389219942485.issue38955@roundup.psfhosted.org> New submission from Matthias Bussonnier : Hi, Not sure if this is a bug or an intended feature; I was surprised by the following behavior.

from asyncio import run, sleep, get_event_loop
print(get_event_loop())  # returns the current event loop
run(sleep(0))
print(get_event_loop())  # raises a RuntimeError

I would expect `get_event_loop` to get back to its initial default value. This comes from the fact that `run()` calls `set_event_loop(None)`, which sets `_set_called` to `True`, but `get_event_loop` seems to assume that if `_set_called` is True, the loop cannot be None. I'm tempted to think that if `set_event_loop()` is called with `None`, it should reset `_set_called` to False. Or am I supposed to call `set_event_loop(new_event_loop())` myself? I'm likely missing something, so any insight would be appreciated; if you believe this is an actual issue I'm happy to send a PR. ---------- components: asyncio messages: 357727 nosy: asvetlov, mbussonn, yselivanov priority: normal severity: normal status: open title: Non-idempotent behavior of asyncio.get_event_loop and asyncio.run sequence.
type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 2 18:03:24 2019 From: report at bugs.python.org (Antony Lee) Date: Mon, 02 Dec 2019 23:03:24 +0000 Subject: [New-bugs-announce] [issue38956] argparse.BooleanOptionalAction should not add the default value to the help string by default Message-ID: <1575327804.72.0.272027136625.issue38956@roundup.psfhosted.org> New submission from Antony Lee : https://bugs.python.org/issue8538 recently added to Py3.9 a much welcome addition to argparse, namely the capability to generate --foo/--no-foo flag pairs. A small issue with the implementation is that it *always* appends the default value to the help string (if any):

if help is not None and default is not None:
    help += f" (default: {default})"

This is inconsistent with other action classes, and results in the defaults being printed twice if using ArgumentDefaultsHelpFormatter (which is the documented way to include the defaults in the help text):

from argparse import *
parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
parser.add_argument("--foo", action=BooleanOptionalAction, help="Whether to foo it", default=True)
parser.add_argument("--quux", help="Set the quux", default=42)
print(parser.parse_args())

yields

usage: foo.py [-h] [--foo | --no-foo] [--quux QUUX]

optional arguments:
  -h, --help       show this help message and exit
  --foo, --no-foo  Whether to foo it (default: True) (default: True)  # <--- HERE
  --quux QUUX      Set the quux (default: 42)

I think the fix is just a matter of not adding the default value to the help string.
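The flag-pair behavior itself works as intended; only the help text is affected. A quick sketch (requires Python 3.9+; argument names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("--foo", action=argparse.BooleanOptionalAction,
                    default=True, help="Whether to foo it")

print(parser.parse_args([]).foo)            # True (falls back to the default)
print(parser.parse_args(["--no-foo"]).foo)  # False
```

On affected versions, `parser.format_help()` shows the "(default: True)" suffix twice for --foo, once from the action and once from the formatter.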
---------- components: Library (Lib) messages: 357733 nosy: Antony.Lee priority: normal severity: normal status: open title: argparse.BooleanOptionalAction should not add the default value to the help string by default versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 2 18:38:33 2019 From: report at bugs.python.org (Ashley Whetter) Date: Mon, 02 Dec 2019 23:38:33 +0000 Subject: [New-bugs-announce] [issue38957] Cannot compile with libffi from source on Windows Message-ID: <1575329913.13.0.856957660536.issue38957@roundup.psfhosted.org> New submission from Ashley Whetter : get_externals.bat downloads and extracts the libffi source to a versioned directory (much like the other external libraries). See https://github.com/python/cpython/blob/v3.8.0/PCbuild/get_externals.bat#L55 However, the binary release is downloaded to an unversioned directory. See https://github.com/python/cpython/blob/v3.8.0/PCbuild/get_externals.bat#L79 The Visual Studio project looks for the unversioned directory (https://github.com/python/cpython/blob/v3.8.0/PCbuild/python.props#L62), and so the binaries are always used. So it is possible to build from source, but you have to move libffi yourself after running `get_externals.bat --libffi-src`. I think the fix here is to make get_externals.bat and Visual Studio always use a versioned directory.
---------- components: Build, Windows messages: 357736 nosy: AWhetter, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Cannot compile with libffi from source on Windows versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 2 19:10:02 2019 From: report at bugs.python.org (Io Mintz) Date: Tue, 03 Dec 2019 00:10:02 +0000 Subject: [New-bugs-announce] [issue38958] asyncio REPL swallows KeyboardInterrupt while editing Message-ID: <1575331802.11.0.760790431063.issue38958@roundup.psfhosted.org> New submission from Io Mintz : "python3 -m asyncio" swallows KeyboardInterrupt while editing a line. Problem steps: ============== - run python -m asyncio - press ^C Expected behavior (normal CPython REPL, as well as python -m code): ================================================================== The current input line is abandoned and "\nKeyboardInterrupt" is printed. Sample for "spam^C" with the normal REPL: ----------------------------------------- Python 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> spam KeyboardInterrupt Python 3.9.0a1+ (heads/master:a62ad4730c, Dec 2 2019, 17:38:37) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> spam KeyboardInterrupt Sample for "spam^C" in the InteractiveConsole REPL, as invoked by `python -m code`: ----------------------------------------------------------------------------------- Python 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> spam KeyboardInterrupt Python 3.9.0a1+ (heads/master:a62ad4730c, Dec 2 2019, 17:38:37) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. 
(InteractiveConsole) >>> spam KeyboardInterrupt Actual behavior: ================ The KeyboardInterrupt is ignored, the current input line remains on screen, and a new line is not started: asyncio REPL 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0] on linux Use "await" directly instead of "asyncio.run()". Type "help", "copyright", "credits" or "license" for more information. >>> import asyncio >>> spam asyncio REPL 3.9.0a1+ (heads/master:a62ad4730c, Dec 2 2019, 17:38:37) [GCC 9.2.0] on linux Use "await" directly instead of "asyncio.run()". Type "help", "copyright", "credits" or "license" for more information. >>> import asyncio >>> spam Workaround ========== If editing a continued block (i.e. a line prefixed by sys.ps2 / "..."), enter any invalid syntax (such as unindented code) to cancel the current block. If editing a single line, press ^U to clear the line. OS Details ========== Arch Linux, python extra/python 3.8.0-1. ---------- components: asyncio messages: 357738 nosy: asvetlov, iomintz, yselivanov priority: normal severity: normal status: open title: asyncio REPL swallows KeyboardInterrupt while editing versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 3 01:36:00 2019 From: report at bugs.python.org (Jiebin) Date: Tue, 03 Dec 2019 06:36:00 +0000 Subject: [New-bugs-announce] [issue38959] Parboil -- OpenMP CUTCP performance regression when upgrading from Python 2.7.15 to 2.7.17 Message-ID: <1575354960.22.0.085059134503.issue38959@roundup.psfhosted.org> New submission from Jiebin : When we replaced the rpm package from 2.7.15 to 2.7.17, the Parboil -- OpenMP CUTCP benchmark showed a regression. Steps to reproduce the regression: you can reproduce the issue with the Phoronix Test Suite or run it from the source code.
Benchmark indicators (unit: Seconds; LIB = lower is better):

  Benchmark:      Parboil -- OpenMP CUTCP
  Legacy version: 2.7.15, value 1.20, RSD 0.5%
  New version:    2.7.17, value 1.37, RSD 0.5%
  Regression:     12%
  OS:             Clear Linux
  HW platform:    CLX server

---------- components: Library (Lib) files: image-2019-12-03-13-55-06-020.png messages: 357744 nosy: jiebinsu priority: normal severity: normal status: open title: Parboil -- OpenMP CUTCP performance regression when upgrading from Python 2.7.15 to 2.7.17 type: performance versions: Python 2.7 Added file: https://bugs.python.org/file48749/image-2019-12-03-13-55-06-020.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 3 11:38:06 2019 From: report at bugs.python.org (David Carlier) Date: Tue, 03 Dec 2019 16:38:06 +0000 Subject: [New-bugs-announce] [issue38960] DTrace FreeBSD build fix Message-ID: <1575391086.22.0.899014183284.issue38960@roundup.psfhosted.org> Change by David Carlier : ---------- components: FreeBSD nosy: David Carlier, koobs priority: normal severity: normal status: open title: DTrace FreeBSD build fix versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 3 11:40:39 2019 From: report at bugs.python.org (John-Mark) Date: Tue, 03 Dec 2019 16:40:39 +0000 Subject: [New-bugs-announce] [issue38961] Flaky detection of compiler vendor Message-ID: <1575391239.52.0.258963452733.issue38961@roundup.psfhosted.org> New submission from John-Mark : The `configure` script for building what appears to be any version of Python (I've manually checked 2.7, 3.6.6, and master) uses simple substring-in-path checks to determine the compiler vendor. This is problematic because it is very easy for it to produce very confusing false positives.
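The failure mode is easy to demonstrate in isolation; a sketch using a hypothetical compiler path:

```python
# The configure-style test is a plain substring match on the compiler path.
cc_path = "/home/riccardo/toolchains/bin/clang"  # hypothetical path

# False positive: "icc" happens to be a substring of "riccardo",
# so this path would be misclassified as the Intel compiler.
print("icc" in cc_path)  # True

# A clean path is classified correctly, which is why the bug is easy to miss:
print("icc" in "/usr/bin/clang")  # False

# A sturdier approach would run the compiler and inspect its
# self-identification (e.g. the output of `$CC --version`), or compile a
# small probe program that checks vendor-specific preprocessor defines.
```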
This appeared for me when compiling with a custom version of `clang` located at `/home/riccardo/some/path`, which caused this line https://github.com/python/cpython/blob/894331838b256412c95d54051ec46a1cb96f52e7/configure#L7546 to mistakenly assume an ICC configuration (because `icc` is a substring of `riccardo`). A quick check through the script reveals that compiler vendor detection in the script doesn't appear to be unified, and the checks are mostly similarly flaky. Other projects compile a small program that checks for defines, or parse the output of `$CC --version` or similar. ---------- components: Build messages: 357755 nosy: jmaargh priority: normal severity: normal status: open title: Flaky detection of compiler vendor type: compile error versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 3 14:55:50 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 03 Dec 2019 19:55:50 +0000 Subject: [New-bugs-announce] [issue38962] Reference leaks in subinterpreters Message-ID: <1575402950.84.0.0958518006452.issue38962@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : All the refleak build bots for master are reporting reference leaks in subinterpreter related tests: https://buildbot.python.org/all/#/builders/126/builds/6/steps/5/logs/stdio https://buildbot.python.org/all/#/builders/144/builds/6/steps/5/logs/stdio https://buildbot.python.org/all/#/builders/206/builds/6 https://buildbot.python.org/all/#/builders/157/builds/6/steps/4/logs/stdio ...... test_atexit leaked [882, 882, 882] references, sum=2646 test_atexit leaked [12, 12, 12] memory blocks, sum=36 5 tests failed again: test__xxsubinterpreters test_atexit test_capi test_httpservers test_threading == Tests result: FAILURE then FAILURE == 401 tests OK.
10 slowest tests: - test_asyncio: 33 min 34 sec - test_concurrent_futures: 17 min 22 sec - test_multiprocessing_spawn: 17 min 6 sec - test_zipfile: 9 min 25 sec - test_multiprocessing_forkserver: 9 min 2 sec - test_multiprocessing_fork: 8 min 43 sec - test_largefile: 7 min 32 sec - test_lib2to3: 7 min 3 sec - test_mailbox: 6 min 27 sec - test_argparse: 5 min 5 sec 5 tests failed: test__xxsubinterpreters test_atexit test_capi test_httpservers test_threading 14 tests skipped: test_devpoll test_gdb test_ioctl test_kqueue test_msilib test_ossaudiodev test_startfile test_tix test_tk test_ttk_guionly test_winconsoleio test_winreg test_winsound test_zipfile64 7 re-run tests: test__xxsubinterpreters test_atexit test_capi test_httpservers test_nntplib test_pty test_threading Bisecting shows the following commit as the culprit: ef5aa9af7c7e493402ac62009e4400aed7c3d54e is the first bad commit commit ef5aa9af7c7e493402ac62009e4400aed7c3d54e Author: Victor Stinner Date: Wed Nov 20 00:38:03 2019 +0100 bpo-38858: Reorganize pycore_init_types() (GH-17265) * Call _PyLong_Init() and _PyExc_Init() earlier * new_interpreter() reuses pycore_init_types() Python/pylifecycle.c | 31 +++++++++++-------------------- 1 file changed, 11 insertions(+), 20 deletions(-) bisect run success Running * test.test_atexit.SubinterpreterTest.test_callbacks_leak is enough for reproducing the problem. 
---------- assignee: vstinner components: Tests messages: 357759 nosy: pablogsal, vstinner priority: high severity: normal stage: needs patch status: open title: Reference leaks in subinterpreters type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 00:34:03 2019 From: report at bugs.python.org (Roman Joost) Date: Wed, 04 Dec 2019 05:34:03 +0000 Subject: [New-bugs-announce] [issue38963] multiprocessing processes seem to "bleed" user information (GID/UID/groups) Message-ID: <1575437643.58.0.600766236009.issue38963@roundup.psfhosted.org> New submission from Roman Joost : When running a process which changes UID/GID, some of the following processes will run as the user I change to per process. In order to reproduce (see the attached reproducer): 1. Change the 'USERNAME' to an unprivileged user on your system. 2. Run the reproducer as a user with elevated privileges (e.g. root or some secondary user you have on your system). Mind you, I don't think the user you run as needs elevated privileges, but that's the user I ran as when I observed this behaviour. 3. The reproducer iterates over a list (It stems from a test function which was checking permissions on log files). Observe the print out, which prints the process' GID, UID and secondary groups before we're changing to the users GID, UID and secondary groups. 4. You should observe that at some point the process prints the user information of the user we want to change to not the one which initially started the script. 
Example output when running locally as root: ('B', (0, 0, [0])) ('A', (0, 0, [0])) ('C', (0, 0, [0])) ('E', (0, 0, [0])) ('D', (0, 0, [0])) ('F', (1002, 1002, [10, 135, 1000, 1002])) ('H', (1002, 1002, [10, 135, 1000, 1002])) ('I', (1002, 1002, [10, 135, 1000, 1002])) ('J', (1002, 1002, [10, 135, 1000, 1002])) ('G', (1002, 1002, [10, 135, 1000, 1002])) ('K', (1002, 1002, [10, 135, 1000, 1002])) ('L', (1002, 1002, [10, 135, 1000, 1002])) ('M', (1002, 1002, [10, 135, 1000, 1002])) ('N', (1002, 1002, [10, 135, 1000, 1002])) I would have expected `0` all the way through. However, if I initialise the Pool with `maxtasksperchild=1` the isolation seems as expected. I don't know whether this is a bug or I'm foolish to invoke multiprocessing like this. I've run out of time to investigate this further. It's certainly strange behaviour to me and I thought I better report it, since reproducing seems fairly deterministic. ---------- assignee: docs at python components: Documentation, Library (Lib) files: reproducer.py messages: 357773 nosy: docs at python, romanofski priority: normal severity: normal status: open title: multiprocessing processes seem to "bleed" user information (GID/UID/groups) type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48753/reproducer.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 03:27:53 2019 From: report at bugs.python.org (Erik Cederstrand) Date: Wed, 04 Dec 2019 08:27:53 +0000 Subject: [New-bugs-announce] [issue38964] Output of syntax error in f-string contains wrong filename Message-ID: <1575448073.72.0.500810200955.issue38964@roundup.psfhosted.org> New submission from Erik Cederstrand : When I have a normal syntax error in a file, Python reports the filename in the exception output: $ cat syntax_error.py 0x=5 $ python3.8 syntax_error.py File "syntax_error.py", line 1 0x=5 ^ SyntaxError: invalid hexadecimal literal But 
if the syntax error is inside an f-string, Python reports 'File "<fstring>"' instead of the actual filename in the exception output. $ cat syntax_error_in_fstring.py f'This is a syntax error: {0x=5}' $ python3.8 syntax_error_in_fstring.py File "<fstring>", line 1 SyntaxError: invalid hexadecimal literal ---------- messages: 357777 nosy: Erik Cederstrand priority: normal severity: normal status: open title: Output of syntax error in f-string contains wrong filename type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 03:33:21 2019 From: report at bugs.python.org (=?utf-8?q?Martin_Li=C5=A1ka?=) Date: Wed, 04 Dec 2019 08:33:21 +0000 Subject: [New-bugs-announce] [issue38965] test_stack_overflow (test.test_faulthandler.FaultHandlerTests) is stuck with GCC10 Message-ID: <1575448401.0.0.574576942819.issue38965@roundup.psfhosted.org> New submission from Martin Liška : The test case is stuck after updating to GCC 10. I've got a patch for it. ---------- messages: 357778 nosy: Martin Liška priority: normal severity: normal status: open title: test_stack_overflow (test.test_faulthandler.FaultHandlerTests) is stuck with GCC10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 03:39:40 2019 From: report at bugs.python.org (Pranav Pandya) Date: Wed, 04 Dec 2019 08:39:40 +0000 Subject: [New-bugs-announce] [issue38966] List similarity relationship Message-ID: <1575448780.55.0.278075875702.issue38966@roundup.psfhosted.org> New submission from Pranav Pandya : When a list is initialized with an assignment such as list1 = list2 = [], then after initialization list1 and list2 refer to the same object, and any change made through list2 also appears in list1, and so on.
Thus, if lists are chained in one assignment during initialization, they are treated as the same object, so changing one changes the other. ---------- assignee: terry.reedy components: IDLE messages: 357779 nosy: PranavSP, terry.reedy priority: normal severity: normal status: open title: List similarity relationship type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 04:06:38 2019 From: report at bugs.python.org (=?utf-8?b?UnViw6luIEplc8O6cyBHYXJjw61hIEhlcm7DoW5kZXo=?=) Date: Wed, 04 Dec 2019 09:06:38 +0000 Subject: [New-bugs-announce] [issue38967] Improve error message in enum.py for python 3.5 Message-ID: <1575450398.98.0.140033297408.issue38967@roundup.psfhosted.org> New submission from Rubén Jesús García Hernández : I changed the '_names_ are reserved for future Enum use' line to be more user-friendly thus: 'Names surrounded by underscore (such as "%s") are reserved for future Enum use' % key The current message can be interpreted as the literal string _names_; and showing the offending key can help users debug.
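The message in question is raised whenever an Enum member uses a sunder name; a minimal reproduction (the exact wording varies across Python versions, which is part of the reporter's point):

```python
import enum

try:
    class Bad(enum.Enum):
        _bad_ = 1  # sunder names are reserved by the Enum machinery
except ValueError as exc:
    print(type(exc).__name__)  # ValueError; message mentions reserved names
```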
---------- assignee: docs at python components: Documentation files: enum.diff.txt messages: 357782 nosy: Rubén Jesús García Hernández, docs at python priority: normal severity: normal status: open title: Improve error message in enum.py for python 3.5 versions: Python 3.5 Added file: https://bugs.python.org/file48754/enum.diff.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 06:29:11 2019 From: report at bugs.python.org (Riccardo La Marca) Date: Wed, 04 Dec 2019 11:29:11 +0000 Subject: [New-bugs-announce] [issue38968] int method works improperly Message-ID: <1575458951.34.0.163465014932.issue38968@roundup.psfhosted.org> Change by Riccardo La Marca : ---------- files: Schermata 2019-12-04 alle 12.09.36.png nosy: Riccardo La Marca priority: normal severity: normal status: open title: int method works improperly versions: Python 3.8 Added file: https://bugs.python.org/file48755/Schermata 2019-12-04 alle 12.09.36.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 08:00:31 2019 From: report at bugs.python.org (Riccardo La Marca) Date: Wed, 04 Dec 2019 13:00:31 +0000 Subject: [New-bugs-announce] [issue38969] The "int" method doesn't work correctly for long numbers with some decimal places. Message-ID: <1575464431.36.0.640599521559.issue38969@roundup.psfhosted.org> New submission from Riccardo La Marca : PyDev console: starting. Python 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)] on darwin >>> int(123456789012345678901234567890) 123456789012345678901234567890 >>> int(123456789012345678901234567890.76) 123456789012345677877719597056 ---------- messages: 357803 nosy: Riccardo La Marca priority: normal severity: normal status: open title: The "int" method doesn't work correctly for long numbers with some decimal places.
type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 10:17:04 2019 From: report at bugs.python.org (castix) Date: Wed, 04 Dec 2019 15:17:04 +0000 Subject: [New-bugs-announce] [issue38970] [PDB] NameError in list comprehension in PDB Message-ID: <1575472624.05.0.465658492617.issue38970@roundup.psfhosted.org> New submission from castix : Related to https://bugs.python.org/issue27316 This code works from the repl: Python 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> import pdb; pdb.set_trace()
--Return--
> <stdin>(1)<module>()->None
(Pdb) z = True
(Pdb) [x for x in [1,2] if z]
[1, 2]
(Pdb)
However in my (turbogears2) wsgi application it raises:
(Pdb) z = True
(Pdb) [x for x in [1,2] if z]
*** NameError: name 'z' is not defined
(Pdb) z
True
(Pdb)
I don't know how to report the issue in a reproducible way. Thanks ---------- components: Library (Lib) messages: 357807 nosy: castix priority: normal severity: normal status: open title: [PDB] NameError in list comprehension in PDB type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 11:51:47 2019 From: report at bugs.python.org (Brock Mendel) Date: Wed, 04 Dec 2019 16:51:47 +0000 Subject: [New-bugs-announce] [issue38971] codecs.open leaks file descriptor when invalid encoding is passed Message-ID: <1575478307.61.0.215522324906.issue38971@roundup.psfhosted.org> New submission from Brock Mendel : xref https://github.com/pandas-dev/pandas/pull/30034 codecs.open does `file = open(...)` before validating the encoding kwarg, leaving the open file behind if that validation raises.
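[Editor's note] The fix pattern the report implies — close the freshly opened file object if codec validation fails — can be sketched like this. It mirrors codecs.open's documented behavior (binary-mode forcing, StreamReaderWriter wrapping) but is a sketch, not the actual patch:

```python
import codecs

def codecs_open_noleak(filename, mode='r', encoding=None,
                       errors='strict', buffering=-1):
    """Sketch of a leak-free codecs.open: close the underlying file
    if codec lookup (or anything else) fails after open()."""
    if encoding is not None and 'b' not in mode:
        mode = mode + 'b'  # codecs.open forces binary mode
    file = open(filename, mode, buffering)
    try:
        if encoding is None:
            return file
        info = codecs.lookup(encoding)  # raises LookupError for bad names
        srw = codecs.StreamReaderWriter(
            file, info.streamreader, info.streamwriter, errors)
        srw.encoding = encoding
        return srw
    except Exception:
        file.close()  # don't leave the descriptor behind
        raise
```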
---------- messages: 357811 nosy: Brock Mendel priority: normal severity: normal status: open title: codecs.open leaks file descriptor when invalid encoding is passed _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 13:24:06 2019 From: report at bugs.python.org (Brett Cannon) Date: Wed, 04 Dec 2019 18:24:06 +0000 Subject: [New-bugs-announce] [issue38972] Link to instructions to change PowerShell execution policy for venv activation Message-ID: <1575483846.28.0.24893284684.issue38972@roundup.psfhosted.org> New submission from Brett Cannon : It would probably be good to add a note in the venv docs about execution policies, why it needs to change for environment activation, and how to do it -- especially now that we sign Activate.ps1 -- so there's less of a chance of people being caught off-guard. ---------- assignee: brett.cannon components: Documentation messages: 357816 nosy: brett.cannon, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Link to instructions to change PowerShell execution policy for venv activation type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 13:29:49 2019 From: report at bugs.python.org (Derek Frombach) Date: Wed, 04 Dec 2019 18:29:49 +0000 Subject: [New-bugs-announce] [issue38973] Shared Memory List Returns 0 Length Message-ID: <1575484189.99.0.873814899855.issue38973@roundup.psfhosted.org> New submission from Derek Frombach : When accessing Shared Memory Lists, occasionally the shared memory list will have a length of zero for only one line of code. Even though the length of the list is constant and greater than zero, when accessing this list, like say sml[0], python returns a ValueError complaining that sml is an empty list.
As well, if you print out sml on the very next line in the exception handler, then you get a full length list, with no access issues whatsoever. This isn't a locking issue, since locks were acquired before writing to the lists, and released after writing. This is a shared memory list runtime access consistency issue. An example of this issue can be seen here: https://github.com/uofrobotics/RPLidarVidStream The issue is in the process_data function, only when smd, sma, smq, or sml are read from. ---------- components: Extension Modules, IO, Interpreter Core, asyncio, ctypes files: 20191203_194951.jpg messages: 357817 nosy: Derek Frombach, asvetlov, yselivanov priority: normal severity: normal status: open title: Shared Memory List Returns 0 Length type: crash versions: Python 3.8 Added file: https://bugs.python.org/file48757/20191203_194951.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 14:44:42 2019 From: report at bugs.python.org (Daniel Preston) Date: Wed, 04 Dec 2019 19:44:42 +0000 Subject: [New-bugs-announce] [issue38974] using filedialog.askopenfilename() freezes python 3.8 Message-ID: <1575488682.09.0.412861821421.issue38974@roundup.psfhosted.org> New submission from Daniel Preston : I am using Tkinter in my program, and at a point I use a button to open a file by running a function with the following code:
def UploadAction(event=None):
    global filename
    filename = filedialog.askopenfilename()
    filename = [filename]
    return filename
However, when I run this function, it causes Python to freeze. Apparently there have been bugs like this before in previous versions, and this wasn't a problem on 3.7, which makes me suspect that this is a bug rather than a fault on my end. Would you be able to release an update to fix this bug? Thanks.
---------- components: Windows messages: 357819 nosy: Daniel Preston, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: using filedialog.askopenfilename() freezes python 3.8 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 4 16:12:31 2019 From: report at bugs.python.org (Baptiste Mispelon) Date: Wed, 04 Dec 2019 21:12:31 +0000 Subject: [New-bugs-announce] [issue38975] Add direct anchors to regex syntax documentation Message-ID: <1575493951.25.0.258282311971.issue38975@roundup.psfhosted.org> New submission from Baptiste Mispelon : While writing documentation about regexps for a project I help maintain, I wanted to link to some specific aspects of Python's implementation (in my case, non-capturing groups) which are described on https://docs.python.org/3/library/re.html. There are no visible ¶ anchors for the items in the "Regular Expression Syntax" section. Inspecting the generated HTML, there does seem to be auto-generated ids (like `#index-16` for example) but I wouldn't like to rely on those as I'm not sure how stable they are. I couldn't find how to make the ¶ symbol show up next to the titles, but I have a PR that adds a bunch of references so that items can be linked directly.
---------- assignee: docs at python components: Documentation messages: 357831 nosy: bmispelon, docs at python priority: normal severity: normal status: open title: Add direct anchors to regex syntax documentation type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 00:13:43 2019 From: report at bugs.python.org (Jacob Taylor) Date: Thu, 05 Dec 2019 05:13:43 +0000 Subject: [New-bugs-announce] [issue38976] Add support for HTTP Only flag in MozillaCookieJar Message-ID: <1575522823.75.0.720870581486.issue38976@roundup.psfhosted.org> New submission from Jacob Taylor : This PR adds support for the HttpOnly flag as encoded in CURL cookiejars. This PR was mainly designed to allow the MozillaCookieJar to parse in the cookies, as previously they were considered comments and ignored. As HttpOnly is considered a non-standard attribute, the nonstandard attribute dict was considered the most appropriate place to persist this information. ---------- components: Library (Lib) messages: 357837 nosy: Jacob Taylor priority: normal severity: normal status: open title: Add support for HTTP Only flag in MozillaCookieJar type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 05:15:30 2019 From: report at bugs.python.org (Simon.yu) Date: Thu, 05 Dec 2019 10:15:30 +0000 Subject: [New-bugs-announce] [issue38977] python3.8 and namedlist1.7 is Incompatible Message-ID: <1575540930.03.0.27673784229.issue38977@roundup.psfhosted.org> New submission from Simon.yu : When I use pytest based on Python 3.8, I met a problem; it seems namedlist 1.7 is incompatible!
See the logs below:
----------------------
File "C:\Program Files\Python38\lib\site-packages\namedlist.py", line 180, in _make_fn
    code = compile(module_node, '', 'exec')
TypeError: required field "posonlyargs" missing from arguments
----------------------
---------- components: Library (Lib) messages: 357842 nosy: ling.yu at outlook.com priority: normal severity: normal status: open title: python3.8 and namedlist1.7 is Incompatible type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 07:53:23 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Thu, 05 Dec 2019 12:53:23 +0000 Subject: [New-bugs-announce] [issue38978] Implement __class_getitem__ for Future, Task, Queue Message-ID: <1575550403.72.0.0854877987028.issue38978@roundup.psfhosted.org> New submission from Andrew Svetlov : Typeshed declares asyncio.Future, asyncio.Task and asyncio.Queue as generic types, which is 100% correct. The problem is that these classes don't support generic instantiation in runtime, e.g. Future[str] raises TypeError. The feature should be implemented by adding __class_getitem__ methods which return self. The patch is trivial but requires a few lines of C code for C Accelerated CTask and CFuture as well as updating Python code. A volunteer is welcome!
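[Editor's note] A minimal sketch of what is being asked for, on the pure-Python side only (the toy class below stands in for asyncio.Future; the C accelerated CTask/CFuture would need the equivalent slot in _asynciomodule.c, which is the non-trivial part):

```python
class Future:
    """Toy stand-in for asyncio.Future, showing only the proposed
    __class_getitem__; the real change would also cover Task and Queue."""

    def __class_getitem__(cls, item):
        # Return the class unchanged, so Future[str] works at runtime
        # the same way typeshed already says it does for type checkers.
        return cls
```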
---------- components: asyncio keywords: easy, easy (C) messages: 357848 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Implement __class_getitem__ for Future, Task, Queue versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 08:04:18 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Thu, 05 Dec 2019 13:04:18 +0000 Subject: [New-bugs-announce] [issue38979] ContextVar[str] should return ContextVar class, not None Message-ID: <1575551058.99.0.903743507965.issue38979@roundup.psfhosted.org> New submission from Andrew Svetlov : The issue is minor; I suspect nobody wants to derive from the ContextVar class. The generic implementation of __class_getitem__ returns the class argument unmodified. Yuri, is there a reason to behave differently in the case of ContextVar? If not, we can mark the issue as easy(C) and wait for a volunteer; the fix seems trivial. ---------- components: Extension Modules messages: 357850 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: ContextVar[str] should return ContextVar class, not None versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 11:00:31 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 05 Dec 2019 16:00:31 +0000 Subject: [New-bugs-announce] [issue38980] Compile libpython with -fno-semantic-interposition Message-ID: <1575561631.45.0.417730594345.issue38980@roundup.psfhosted.org> New submission from STINNER Victor : The Fedora packaging has been modified to compile libpython with the -fno-semantic-interposition flag: it makes Python up to 1.3x faster without having to touch any line of the C code!
See pyperformance results: https://fedoraproject.org/wiki/Changes/PythonNoSemanticInterpositionSpeedup#Benefit_to_Fedora The main drawback is that -fno-semantic-interposition prevents overriding Python symbols using a custom library preloaded by LD_PRELOAD. For example, overriding the PyErr_Occurred() function. We (authors of the Fedora change) failed to find any use case for LD_PRELOAD. To be honest, I found *one* user in the last 10 years who used LD_PRELOAD to track memory allocations in Python 2.7. This use case is no longer relevant in Python 3 with PEP 445, which provides a supported C API to override Python memory allocators or to install hooks on Python memory allocators. Moreover, tracemalloc is a nice way to track memory allocations. Is there anyone aware of any special use of LD_PRELOAD for libpython? To be clear: -fno-semantic-interposition only impacts libpython. All other libraries still respect LD_PRELOAD. For example, it is still possible to override glibc malloc/free. Why does -fno-semantic-interposition make Python faster? There are multiple reasons. First of all, libpython makes a lot of function calls to libpython. Like really a lot, especially in the hot code paths. Without -fno-semantic-interposition, function calls to libpython have to go through "interposition": for example "Procedure Linkage Table" (PLT) indirection on Linux. It prevents function inlining, which has a major impact on performance (missed optimization). In short, even with PGO and LTO, libpython function calls have two performance penalties: * indirect function calls (PLT) * no inlining I'm comparing Python performance of "statically linked Python" (Debian/Ubuntu choice: don't use ./configure --enable-shared, python is not linked to libpython) to "dynamically linked Python" (Fedora choice: use "./configure --enable-shared", python is dynamically linked to libpython). With -fno-semantic-interposition, function calls are direct and can be inlined when appropriate.
You don't have to trust me, look at pyperformance benchmark results ;-) When using ./configure --enable-shared (libpython), the "python" binary is exactly one function call and that's all:
int main(int argc, char **argv)
{
    return Py_BytesMain(argc, argv);
}
So 100% of the time is only spent in libpython. For a longer rationale, see the accepted Fedora change: https://fedoraproject.org/wiki/Changes/PythonNoSemanticInterpositionSpeedup ---------- components: Build messages: 357856 nosy: inada.naoki, pablogsal, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Compile libpython with -fno-semantic-interposition type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 11:56:07 2019 From: report at bugs.python.org (Matthias Bussonnier) Date: Thu, 05 Dec 2019 16:56:07 +0000 Subject: [New-bugs-announce] [issue38981] better name for re.error Exception class. Message-ID: <1575564967.92.0.182367459899.issue38981@roundup.psfhosted.org> New submission from Matthias Bussonnier : Better error/exception name for the re.compile error. Currently the error raised by re.compile when it fails to compile is `error`, defined in sre_constants.py:
```
class error(Exception):
    """Exception raised for invalid regular expressions.
```
This is quite disturbing, as most exceptions start with an uppercase letter and have a tiny bit more descriptive name. Would it be possible to have it renamed as something more explicit like `ReCompileError`, while still keeping the potential `error` alias as deprecated? ---------- components: Regular Expressions messages: 357867 nosy: ezio.melotti, mbussonn, mrabarnett priority: normal severity: normal status: open title: better name for re.error Exception class.
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 12:09:01 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 05 Dec 2019 17:09:01 +0000 Subject: [New-bugs-announce] [issue38982] test_asyncio: SubprocessPidfdWatcherTests..test_close_dont_kill_finished() leaks a file descriptor Message-ID: <1575565741.97.0.071963111218.issue38982@roundup.psfhosted.org> New submission from STINNER Victor : See on AMD64 Fedora Rawhide Refleaks 3.x: https://buildbot.python.org/all/#/builders/82/builds/7 I found the leaking test using test.bisect_cmd: $ ./python -m test test_asyncio -R 3:3 --fail-env-changed -v -m test.test_asyncio.test_subprocess.SubprocessPidfdWatcherTests.test_close_dont_kill_finished ... test_asyncio leaked [1, 1, 1] file descriptors, sum=3 ---------- components: Tests, asyncio messages: 357869 nosy: asvetlov, pablogsal, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: SubprocessPidfdWatcherTests..test_close_dont_kill_finished() leaks a file descriptor versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 13:29:36 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 05 Dec 2019 18:29:36 +0000 Subject: [New-bugs-announce] [issue38983] test_venv: test_overwrite_existing() failed on AMD64 Windows7 SP1 3.x Message-ID: <1575570576.83.0.24142874902.issue38983@roundup.psfhosted.org> New submission from STINNER Victor : Failure on AMD64 Windows7 SP1 3.x: https://buildbot.python.org/all/#builders/81/builds/16 ====================================================================== ERROR: test_overwrite_existing (test.test_venv.BasicTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\test_venv.py", line 225, 
in test_overwrite_existing builder.create(self.env_dir) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\venv\__init__.py", line 65, in create context = self.ensure_directories(env_dir) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\venv\__init__.py", line 108, in ensure_directories self.clear_directory(env_dir) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\venv\__init__.py", line 91, in clear_directory shutil.rmtree(fn) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 731, in rmtree return _rmtree_unsafe(path, onerror) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 613, in _rmtree_unsafe onerror(os.rmdir, path, sys.exc_info()) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\shutil.py", line 611, in _rmtree_unsafe os.rmdir(path) OSError: [WinError 145] The directory is not empty: 'C:\\Users\\Buildbot\\AppData\\Local\\Temp\\tmp9y0m6j7z\\Scripts' ====================================================================== ERROR: test_overwrite_existing (test.test_venv.BasicTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 380, in _force_run return func(*args) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\Buildbot\\AppData\\Local\\Temp\\tmp9y0m6j7z\\Scripts\\activate.bat' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\test_venv.py", line 71, in tearDown rmtree(self.env_dir) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 502, in rmtree _rmtree(path) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 443, in _rmtree _waitfor(_rmtree_inner, path, waitall=True) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 391, 
in _waitfor func(pathname) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 439, in _rmtree_inner _waitfor(_rmtree_inner, fullname, waitall=True) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 391, in _waitfor func(pathname) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 442, in _rmtree_inner _force_run(fullname, os.unlink, fullname) File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\support\__init__.py", line 385, in _force_run os.chmod(path, stat.S_IRWXU) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\Buildbot\\AppData\\Local\\Temp\\tmp9y0m6j7z\\Scripts\\activate.bat' ---------- components: Tests messages: 357881 nosy: vstinner priority: normal severity: normal status: open title: test_venv: test_overwrite_existing() failed on AMD64 Windows7 SP1 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 5 19:22:50 2019 From: report at bugs.python.org (Gabriele) Date: Fri, 06 Dec 2019 00:22:50 +0000 Subject: [New-bugs-announce] [issue38984] Value add to the wrong key in a dictionary Message-ID: <1575591770.15.0.618190193616.issue38984@roundup.psfhosted.org> New submission from Gabriele : Hello, I found this strange issue in my program: I'm reading a file with 5 columns separated by ';'. Columns 3, 4 and 5 can have multiple values and in this case they are separated by ','. Some values in column 3 can be repeated in each line in the same column. Goal: I want to create a dictionary for each line using as a key the value in the first column and another key using the values that are in column 3. If the key in column 3 is already in the dictionary, I want to add the value in column 4 to the dictionary for the key that is repeated.
Example file:
col1;col2;col3;col4;col5
id1;i1;val1;Si1;Da1
id2;i2;val2,val1;Si2,Si1;Da2
id3;i3;val3;Si3;Da3
Expected result:
{'id1' : [ 'id1','i1',['val1'],['Si1'],['Da1'] ] ,
 'id2' : [ 'id2','i2',['val2','val1'] ,[Si2,Si1] ,['Da2'] ] ,
 'val1' : [ 'id1','i1',['val2','val1'],['Si2','Si1'],['Da1','Da2'] ] ,
 'val2' : [ 'id2','i2',['val2','val1'],[Si2,Si1] ,['Da2'] ] ,
 'id3' : [ 'id3','i3','val3',['Si3'],['Da3'] ] ,
 'val3' : [ 'id3','i3','val3',['Si3'],['Da3'] ] }
But what I am obtaining is:
{'id2': ['id2', 'i2', ['val2', 'val1'], ['Si2', 'Si1'], ['Da2'], '5'],
 'id3': ['id3', 'i3', ['val3'], ['Si3'], ['Da3'], '5'],
 'id1': ['id1', 'i1', ['val1'], ['Si1', 'Si2', 'Si1'], ['Da1', 'Da2'], '5'],
 'val3': ['id3', 'i3', ['val3'], ['Si3'], ['Da3'], '5'],
 'val2': ['id2', 'i2', ['val2', 'val1'], ['Si2', 'Si1'], ['Da2'], '5'],
 'val1': ['id1', 'i1', ['val1'], ['Si1', 'Si2', 'Si1'], ['Da1', 'Da2'], '5']}
My bug: key id1 was called just one time in my program, but in my results I can find that the list in position 3 has 3 values (Si1, Si2, Si1) when it is supposed to have just Si1. Am I doing something wrong, or is this a potential bug? I ran the program on different machines and different Python versions, but the results don't change.
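[Editor's note] Without seeing the attached program.py, the symptom described matches list aliasing rather than a dict bug: the same list object stored under two keys is mutated through both. A hypothetical reconstruction:

```python
import copy

# One parsed row; the inner lists hold the multi-value columns.
row = ['id1', 'i1', ['val1'], ['Si1'], ['Da1']]

d = {'id1': row, 'val1': row}   # both keys share ONE list object
d['val1'][3].append('Si2')      # intending to touch only 'val1'...
assert d['id1'][3] == ['Si1', 'Si2']  # ...but 'id1' sees it too

# Storing an independent copy per key avoids the surprise:
d2 = {'id1': copy.deepcopy(row), 'val1': copy.deepcopy(row)}
d2['val1'][3].append('Si3')
assert d2['id1'][3] == ['Si1', 'Si2']  # unchanged
```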
---------- files: program.py messages: 357894 nosy: malbianco priority: normal severity: normal status: open title: Value add to the wrong key in a dictionary type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48759/program.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 6 04:29:05 2019 From: report at bugs.python.org (Koh) Date: Fri, 06 Dec 2019 09:06:05 +0000 Subject: [New-bugs-announce] [issue38985] `compile` returns the first line of file on termination Message-ID: <1575624545.38.0.942954446264.issue38985@roundup.psfhosted.org> New submission from Koh : By specifying a filename in the compile function and then improperly terminating it, we are able to return the first line of any file.
>> compile('yield', '/etc/passwd', 'exec')
  File "/etc/passwd", line 1
    root:x:0:0:root:/root:/bin/bash
    ^
SyntaxError: 'yield' outside function
Is this intended behavior? I have been able to use it to escape sandboxes. ---------- messages: 357906 nosy: iso priority: normal severity: normal status: open title: `compile` returns the first line of file on termination type: security versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 6 08:09:22 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Fri, 06 Dec 2019 13:09:22 +0000 Subject: [New-bugs-announce] [issue38986] Support TaskWakeupMethWrapper.__self__ to conform asyncio _format_handle logic Message-ID: <1575637762.64.0.656469338348.issue38986@roundup.psfhosted.org> New submission from Andrew Svetlov : _format_handle() behaves differently if handle._callback.__self__ is an asyncio.Task instance. To follow this logic, TaskWakeupMethWrapper from _asynciomodule.c should support the corresponding member.
The fix is very desired for analyzing slow callbacks; without it the output doesn't point at the slow coroutine but only mentions the wrapper. See also #38608 ---------- components: asyncio messages: 357913 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Support TaskWakeupMethWrapper.__self__ to conform asyncio _format_handle logic versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 6 12:19:12 2019 From: report at bugs.python.org (Kevin Buchs) Date: Fri, 06 Dec 2019 17:19:12 +0000 Subject: [New-bugs-announce] [issue38987] 3.8.0 on GNU/Linux fails to find shared library Message-ID: <1575652752.94.0.770020872085.issue38987@roundup.psfhosted.org> New submission from Kevin Buchs : I just downloaded Python 3.8.0 and built (from source) on Ubuntu 18.04. I used these options to configure: ./configure --enable-shared --enable-ipv6 --enable-optimizations The shared library gets installed into /usr/local/lib:
find / -type f -name libpython3.8.so.1.0
/usr/local/src/Python-3.8.0/libpython3.8.so.1.0
/usr/local/lib/libpython3.8.so.1.0
/usr/local/lib is defined as a path to search for shared libraries:
# /etc/ld.so.conf loads all /etc/ld.so.conf.d/*.conf
grep /usr/local /etc/ld.so.conf.d/*.conf
/etc/ld.so.conf.d/i386-linux-gnu.conf:/usr/local/lib/i386-linux-gnu
/etc/ld.so.conf.d/i386-linux-gnu.conf:/usr/local/lib/i686-linux-gnu
/etc/ld.so.conf.d/libc.conf:/usr/local/lib
/etc/ld.so.conf.d/x86_64-linux-gnu.conf:/usr/local/lib/x86_64-linux-gnu
But, the python executable is unable to find it:
/usr/local/bin/python3.8
/usr/local/bin/python3.8: error while loading shared libraries: libpython3.8.so.1.0: cannot open shared object file: No such file or directory
---------- components: Installation messages: 357924 nosy: buchs priority: normal severity: normal status: open title: 3.8.0 on GNU/Linux fails to find shared library type: crash
versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 6 12:41:00 2019 From: report at bugs.python.org (dontbugme) Date: Fri, 06 Dec 2019 17:41:00 +0000 Subject: [New-bugs-announce] [issue38988] Killing asyncio subprocesses on timeout? Message-ID: <1575654060.86.0.979877919031.issue38988@roundup.psfhosted.org> New submission from dontbugme : I'm trying to use asyncio.subprocess and am having difficulty killing the subprocesses after timeout. My use case is launching processes that hold on to file handles and other exclusive resources, so subsequent processes can only be launched after the first ones are fully stopped. The documentation on https://docs.python.org/3/library/asyncio-subprocess.html#asyncio.asyncio.subprocess.Process says there is no timeout parameter and suggests using wait_for() instead. I tried this, but it's kind of a footgun because the wait_for() times out but the process still lives on in the background. See Fail(1) and Fail(2) in attached test1(). To solve this I tried to catch the CancelledError and in the exception handler kill the process myself. While this worked, it's also semi dangerous because it takes some time for the process to get killed, and the wait() after kill() runs in the background as some kind of detached task. See Fail(3) in attached test2(). This I can sort of understand, because after TimeoutError something would have to block for wait() to actually finish, and this is impossible. After writing this I feel there is no good solution for Fail#3 because, again, timeouts can't be blocking.
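[Editor's note] For reference, the kill-then-reap workaround discussed above can be written so that nothing is left detached: after the timeout, kill and then await wait() before re-raising. This is a sketch (the attached test1()/test2() are not reproduced in this archive), assuming platform kill semantics:

```python
import asyncio

async def run_with_timeout(args, timeout):
    """Run a subprocess; on timeout, kill it and wait until it is
    fully reaped before propagating TimeoutError."""
    proc = await asyncio.create_subprocess_exec(*args)
    try:
        return await asyncio.wait_for(proc.wait(), timeout)
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()  # blocks until the process is really gone
        raise
```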
Maybe some warning in the documentation would be appropriate for Fail(1) and Fail(2), because the suggestion in the documentation right now is quite misleading: the wait_for() alternative to a timeout parameter does not behave like the timeout parameter in ordinary subprocess.Popen.wait() ---------- components: asyncio files: subprocess_timeout.py messages: 357930 nosy: asvetlov, dontbugme, yselivanov priority: normal severity: normal status: open title: Killing asyncio subprocesses on timeout? type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48761/subprocess_timeout.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 6 13:49:45 2019 From: report at bugs.python.org (Ryan Thornton) Date: Fri, 06 Dec 2019 18:49:45 +0000 Subject: [New-bugs-announce] [issue38989] pip install selects 32 bit wheels for 64 bit python if vcvarsall.bat amd64_x86 in environment Message-ID: <1575658185.88.0.621824601443.issue38989@roundup.psfhosted.org> New submission from Ryan Thornton : ## Expected Behavior pip install should download dependencies matching the architecture of the python executable being used. ## Actual Behavior When calling pip install from a Visual Studio command prompt configured to cross compile from x64 to x86, pip installs wheels matching the architecture of Visual Studio's cross compile target (i.e. `VSCMD_ARG_TGT_ARCH=x86`) and not the architecture of python itself (x64). This results in a broken installation of core libraries. ## Steps to Reproduce System Details: Windows 10 x64 Python 3.8 x64 Visual Studio 2017 15.9.14 Environment Details: vcvarsall.bat amd64_x86 1. "C:\Program Files\Python38\python.exe" -mvenv "test" 2. cd test\Scripts 3.
pip install cffi==1.13.2 Results in the following: > Collecting cffi > Using cached https://files.pythonhosted.org/packages/f8/26/5da5cafef77586e4f7a136b8a24bc81fd2cf1ecb71b6ec3998ffe78ea2cf/cffi-1.13.2-cp38-cp38-win32.whl ## Context I think the regression was introduced here: 62dfd7d6fe11bfa0cd1d7376382c8e7b1275e38c https://github.com/python/cpython/commit/62dfd7d6fe11bfa0cd1d7376382c8e7b1275e38c ---------- components: Distutils messages: 357936 nosy: Ryan Thornton, dstufft, eric.araujo priority: normal severity: normal status: open title: pip install selects 32 bit wheels for 64 bit python if vcvarsall.bat amd64_x86 in environment type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 6 16:42:19 2019 From: report at bugs.python.org (Brittany Reynoso) Date: Fri, 06 Dec 2019 21:42:19 +0000 Subject: [New-bugs-announce] [issue38990] Import genericpath fails with python -S Message-ID: <1575668539.27.0.598357359198.issue38990@roundup.psfhosted.org> New submission from Brittany Reynoso : When running python -S, attempting to run "import genericpath" fails with an attribute error due to a circular dependency between posixpath and genericpath that's triggered when "import os" is called from within genericpath.py. 
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/fbcode/platform007/lib/python3.7/genericpath.py", line 6, in <module>
    import os
  File "/usr/local/fbcode/platform007/lib/python3.7/os.py", line 57, in <module>
    import posixpath as path
  File "/usr/local/fbcode/platform007/lib/python3.7/posixpath.py", line 130, in <module>
    splitext.__doc__ = genericpath._splitext.__doc__
AttributeError: module 'genericpath' has no attribute '_splitext'
---------- components: Library (Lib) messages: 357947 nosy: brittanyrey priority: normal severity: normal status: open title: Import genericpath fails with python -S type: crash versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 6 18:39:15 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 06 Dec 2019 23:39:15 +0000 Subject: [New-bugs-announce] [issue38991] Remove test.support.strip_python_stderr() Message-ID: <1575675555.17.0.954661823697.issue38991@roundup.psfhosted.org> New submission from STINNER Victor : Python 3.3 compiled in debug mode dumps the total number of references at exit into stderr. Something like:
$ python3.3-dbg -X showrefcount -c pass
[18563 refs, 6496 blocks]
In Python 3.4, bpo-17323 disabled this feature by default and added the -X showrefcount command line option:
commit 1f8898a5916b942c86ee8716c37d2db388e7bf2f
Author: Ezio Melotti
Date: Tue Mar 26 01:59:56 2013 +0200
    #17323: The "[X refs, Y blocks]" printed by debug builds has been disabled by default. It can be re-enabled with the `-X showrefcount` option.
test.support still has a strip_python_stderr() function to remove "[18563 refs, 6496 blocks]" from stderr, but it's now useless. Attached PR removes the function. The PR also avoids calling str.strip().
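[Editor's note] For the record, the helper slated for removal does roughly the following (paraphrased from Lib/test/support; the exact code may differ slightly):

```python
import re

def strip_python_stderr(stderr):
    """Strip the '[1234 refs, 56 blocks]' trailer that debug builds
    (with -X showrefcount) append to stderr, then trim whitespace.
    Operates on bytes, as captured subprocess stderr usually is."""
    return re.sub(br"\[\d+ refs, \d+ blocks\]\r?\n?$", b"", stderr).strip()
```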
---------- components: Tests messages: 357955 nosy: vstinner priority: normal severity: normal status: open title: Remove test.support.strip_python_stderr() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 7 06:32:08 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 07 Dec 2019 11:32:08 +0000 Subject: [New-bugs-announce] [issue38992] testFsum failure caused by constant folding of a float expression Message-ID: <1575718328.32.0.686761855241.issue38992@roundup.psfhosted.org> New submission from Xavier de Gaye : Title: testFsum failure caused by constant folding of a float expression Description: ------------ Python (Python 3.9.0a1+ heads/master-dirty:ea9835c5d1) is built on a Linux x86_64. This native interpreter is used to cross-compile Python (using the same source) to Android API 24. Next the installation is done locally to DESTDIR by running 'make install' with the env var DESTDIR set and the standard library modules are compiled by the native interpreter during this process. The content of DESTDIR is then copied to an arm64 android device (Huawei FWIW). The test_math.MathTests.testFsum test fails on the android device with: AssertionError: -4.309103330548428e+214 != -1.0 This occurs when testing '([1.7**(i+1)-1.7**i for i in range(1000)] + [-1.7**1000], -1.0)' in test_values. Next the test_math.py file is touched on the android device to force recompilation of the module and testFsum becomes surprisingly successful. 
Investigation: -------------- The hexadecimal representations of 1.7**n on x86_64 and arm64 are: * different for n in (10, 100, 1000) * equal for n in [0, 9] or 11 on x86_64: >>> 1.7**10 201.59939004489993 >>> (1.7**10).hex() '0x1.9332e34080c95p+7' on arm64: >>> 1.7**10 201.59939004489996 >>> (1.7**10).hex() '0x1.9332e34080c96p+7' The output of the following foo.py module, run on both x86_64 and arm64, is attached to this issue: ####################### import math, dis def test_fsum(): x = [1.7**(i+1)-1.7**i for i in range(10)] + [-1.7**10] return x y = test_fsum() print(y) print(math.fsum(y)) dis.dis(test_fsum) ####################### The only difference between the two disassemblies of test_fsum() is at bytecode offset 16, which loads the folded constant 1.7**10. Conclusion: ----------- The compilation of the expression '[1.7**(i+1)-1.7**i for i in range(1000)] + [-1.7**1000]' on x86_64 folds '1.7**1000' to 2.8113918290273277e+230. When the list comprehension (the first term of the expression) is executed on arm64, 1.7**1000 is evaluated as 2.8113918290273273e+230. On arm64, 1.7**1000 - 2.8113918290273277e+230 = -4.309103330548428e+214, hence the AssertionError above. This is confirmed by changing testFsum to prevent constant folding, replacing 1000 in the testFsum expression with a variable whose value is 1000. In that case the test_math module compiled on x86_64 succeeds on arm64. This could be a fix for this issue, unless it would hide another problem such as .pyc file portability across different platforms; my knowledge of IEEE 754 is too superficial to answer that point. 
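To illustrate why testFsum can assert an exact result at all: math.fsum() computes the correctly rounded sum of the floats it is given, so as long as the folded constant and the runtime value of 1.7**1000 come from the same interpreter on the same machine (i.e., no cross-compiled .pyc), the telescoping sum collapses exactly. A small demonstration, assuming IEEE 754 doubles:

```python
import math

# Naive summation accumulates rounding error; fsum does not.
tenths = [0.1] * 10
print(sum(tenths))        # slightly off from 1.0
print(math.fsum(tenths))  # exactly 1.0

# The testFsum case: each difference 1.7**(i+1) - 1.7**i is computed
# exactly (Sterbenz lemma: the operands are within a factor of 2 of
# each other), so the exact sum telescopes to 1.7**1000 - 1.0, and
# adding -1.7**1000 produced by the *same* interpreter leaves -1.0.
x = [1.7**(i+1) - 1.7**i for i in range(1000)] + [-1.7**1000]
print(math.fsum(x))
```

The bug only appears when the folded -1.7**1000 baked into the .pyc by the build host differs in the last bit from the runtime 1.7**1000 on the target.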
---------- components: Tests files: foo.x86_64 messages: 357969 nosy: tim.peters, vstinner, xdegaye priority: normal severity: normal stage: needs patch status: open title: testFsum failure caused by constant folding of a float expression type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file48762/foo.x86_64 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 7 08:24:12 2019 From: report at bugs.python.org (AVicennA) Date: Sat, 07 Dec 2019 13:24:12 +0000 Subject: [New-bugs-announce] [issue38993] cProfile behaviour issue with decorator and math.factorial() lib. Message-ID: <1575725052.32.0.828064495717.issue38993@roundup.psfhosted.org> Change by AVicennA : ---------- components: Library (Lib) files: cProfiling.txt nosy: AvicennA priority: normal severity: normal status: open title: cProfile behaviour issue with decorator and math.factorial() lib. type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48764/cProfiling.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 7 10:57:34 2019 From: report at bugs.python.org (Batuhan) Date: Sat, 07 Dec 2019 15:57:34 +0000 Subject: [New-bugs-announce] [issue38994] Implement __class_getitem__ for PathLike Message-ID: <1575734254.5.0.717990589072.issue38994@roundup.psfhosted.org> New submission from Batuhan : Typeshed already using __class_getitem__ syntax for PathLike https://github.com/python/typeshed/search?q=PathLike&unscoped_q=PathLike ---------- components: Library (Lib) messages: 357978 nosy: BTaskaya, asvetlov priority: normal severity: normal status: open title: Implement __class_getitem__ for PathLike versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 7 20:04:55 2019 From: report at bugs.python.org (sush) 
Date: Sun, 08 Dec 2019 01:04:55 +0000 Subject: [New-bugs-announce] [issue38995] reverse search (ctrl-r) does not work Message-ID: <1575767095.84.0.787675686179.issue38995@roundup.psfhosted.org> New submission from sush : On my MacOS Mojave 10.14.6 (18G103), after upgrading python to python 3.8, ctrl-r on the python interpreter does not work. Here is the working python 3.7 version: ``` $ python3.7 --version Python 3.7.3 $ python3.7 Python 3.7.3 (default, Mar 27 2019, 09:23:15) [Clang 10.0.1 (clang-1001.0.46.3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> a = 10 (reverse-i-search)`a': a = 10 ``` Note that I pressed 'ctrl-r' on my keyboard to bring up the 'reverse-i-search'. On python3.8, ctrl-r gets no response from the interpreter: ``` $ python3.8 --version Python 3.8.0 $ python3.8 Python 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> a = 10 >>> ``` Interestingly, the ~/.python_history file seems to be updated. Also, when I press the 'UP' key, the older command comes up. It is just ctrl-r that doesn't work. Here is the output showing that the python_history file is being written to: ``` $ rm ~/.python_history $ python3.8 Python 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> a = 33 >>> ^D $ cat ~/.python_history _HiStOrY_V2_ a\040=\04033 ``` ---------- components: Interpreter Core messages: 357991 nosy: sush priority: normal severity: normal status: open title: reverse search (ctrl-r) does not work type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 8 08:32:11 2019 From: report at bugs.python.org (Arno-Can Uestuensoez) Date: Sun, 08 Dec 2019 13:32:11 +0000 Subject: [New-bugs-announce] [issue38996] introduction of default values for collection.namedtuple Message-ID: <1575811931.27.0.501884177047.issue38996@roundup.psfhosted.org> New submission from Arno-Can Uestuensoez : Hello, I had the requirement to make extensive use of named tuples in an extended way. The applications are variable data sets with optional items, typical in protocol data units, or e.g. mixed abstract filesystem types for heterogeneous file system types including URL and UNC. As I see it, the required changes are a couple of lines, which I consider harmless. The implementation is available for Python2.7 and Python3.5+ in the project namedtupledefs, which is the patched code extracted from the *collections* module. The detailed descriptions for both versions are available at: Python3: https://namedtupledefs3.sourceforge.io/ Python3: https://namedtupledefs.sourceforge.io/ Python2: https://namedtupledefs2.sourceforge.io/ Checked in PyPi + Sourceforge + github - the links are in the documents. https://github.com/ArnoCan/namedtupledefs3/ https://github.com/ArnoCan/namedtupledefs2/ https://github.com/ArnoCan/namedtupledefs/ https://pypi.org/project/namedtupledefs[23]/ My proposal is to introduce the changes. It would be great for Python2.7 too, before the EOL. 
WKR ---------- components: Library (Lib) files: namedtupled-uml-patches.jpg messages: 358002 nosy: acue priority: normal severity: normal status: open title: introduction of default values for collection.namedtuple type: enhancement versions: Python 2.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48765/namedtupled-uml-patches.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 8 10:35:12 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 08 Dec 2019 15:35:12 +0000 Subject: [New-bugs-announce] [issue38997] test__xxsubinterpreters test_atexit test_capi test_threading are leaking references Message-ID: <1575819312.04.0.64733813465.issue38997@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Similar to https://bugs.python.org/issue38962, test__xxsubinterpreters test_atexit test_capi test_threading are leaking references. Example: https://buildbot.python.org/all/#/builders/158/builds/10 https://buildbot.python.org/all/#/builders/16/builds/11 https://buildbot.python.org/all/#/builders/157/builds/11 test__xxsubinterpreters test_atexit test_capi test_threading == Tests result: FAILURE then FAILURE == 386 tests OK. 
10 slowest tests: - test_multiprocessing_spawn: 26 min 23 sec - test_mailbox: 23 min 17 sec - test_asyncio: 20 min 25 sec - test_venv: 14 min 54 sec - test_concurrent_futures: 13 min 35 sec - test_zipfile: 11 min 10 sec - test_regrtest: 9 min 34 sec - test_distutils: 9 min 19 sec - test_compileall: 9 min 9 sec - test_lib2to3: 5 min 52 sec 4 tests failed: test__xxsubinterpreters test_atexit test_capi test_threading ---------- assignee: pablogsal components: Tests messages: 358006 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test__xxsubinterpreters test_atexit test_capi test_threading are leaking references type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 8 13:47:22 2019 From: report at bugs.python.org (da-dada) Date: Sun, 08 Dec 2019 18:47:22 +0000 Subject: [New-bugs-announce] [issue38998] dict.setdefault (setdefault of dictionary) Message-ID: <1575830842.98.0.261556946938.issue38998@roundup.psfhosted.org> New submission from da-dada : From the documentation I expected the second call just to return the value, not to run the calculation a second time: there is room for improvement, as Elon Musk would say.. 
class Ddefault: def __init__(self): vars(self).setdefault('default', self.set_default()) vars(self).setdefault('default', self.set_default()) def set_default(self): print(vars(self)) return 'default' if __name__ == "__main__": Ddefault() ---------- messages: 358016 nosy: da-dada priority: normal severity: normal status: open title: dict.setdefault (setdefault of dictionary) type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 8 13:49:03 2019 From: report at bugs.python.org (Alexandros Karypidis) Date: Sun, 08 Dec 2019 18:49:03 +0000 Subject: [New-bugs-announce] [issue38999] Python launcher on Windows does not detect active venv Message-ID: <1575830943.65.0.538772164668.issue38999@roundup.psfhosted.org> New submission from Alexandros Karypidis : When you activate a venv on Windows and use a shebang with a major version qualifier, the python launcher does not properly detect that a venv is active and uses the system installation instead. The incorrect behavior is documented in this SO question, where another user has confirmed it and suggested it is a bug: https://stackoverflow.com/questions/59238326 Steps to reproduce (needs script.py attached below): 1. Install Python 3.7 on Windows 10 (64 bit) 2. Run script.py; you should see: PS C:\pytest> .\script.py EXECUTABLE: C:\Program Files\Python37\python.exe PREFIX: C:\Program Files\Python37 BASE PREFIX: C:\Program Files\Python37 3. Create and activate a virtual environment with: PS C:\pytest> python -m venv .venv PS C:\pytest> . .\.venv\Scripts\Activate.ps1 4. 
Run script.py you should see it ignore the active virtual environment: (.venv) PS C:\pytest> .\script.py EXECUTABLE: C:\Program Files\Python37\python.exe PREFIX: C:\Program Files\Python37 BASE PREFIX: C:\Program Files\Python37 I am using Windows 10 64-bit, update 1903 and Python 3.7.5-64 ---------- components: Windows messages: 358017 nosy: Alexandros Karypidis, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python launcher on Windows does not detect active venv type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 8 15:09:03 2019 From: report at bugs.python.org (Sean Moss) Date: Sun, 08 Dec 2019 20:09:03 +0000 Subject: [New-bugs-announce] [issue39000] Range causing unstable output(Windows64) Message-ID: <1575835743.62.0.012269649572.issue39000@roundup.psfhosted.org> New submission from Sean Moss : I was doing this year's Advent of Code and found that the following program produces unstable output when run using the given file as input: """ from itertools import permutations import gc def runProgram(amp_input, program, counter): while program[counter] != 99: # print('*' * 99) instruction = str(program[counter]) opcode = instruction[-2:] value1 = program[counter + 1] # print('2:{}'.format(counter)) try: if opcode in ['01', '02', '1', '2', '5', '05', '6', '06', '7', '07', '8', '08']: value1 = program[counter + 1] value2 = program[counter + 2] param_modes = instruction[::-1][2:] # print('{} {} {} {}'.format(instruction, value1, value2, value3)) param_modes += '0' * (3 - len(param_modes)) # print(param_modes) if param_modes[0] == '0': value1 = program[value1] if param_modes[1] == '0': value2 = program[value2] # print('{} {} {} {}'.format(instruction, value1, value2, value3)) if opcode in ['01', '02', '1', '2', '7', '07', '8', '08']: value3 = program[counter + 3] if opcode.endswith('1'): program[value3] 
= value1 + value2 elif opcode.endswith('2'): program[value3] = value1 * value2 elif opcode in ['7', '07']: program[value3] = 1 if value1 < value2 else 0 elif opcode in ['8', '08']: program[value3] = 1 if value1 == value2 else 0 counter += 4 elif opcode in ['5', '05']: if value1 != 0: counter = value2 else: counter += 3 elif opcode in ['6', '06']: if value1 == 0: counter = value2 else: counter += 3 elif opcode in ['03', '3']: program[value1] = amp_input.pop(0) counter += 2 elif opcode in ['4', '04']: # print('{} {}'.format(instruction, value1)) if instruction != '104': value1 = program[value1] # print('Output value: {}'.format(value1)) counter += 2 return value1, counter else: print("Something broke at {}".format(counter)) print("program state {}".format(program)) print(instruction) return False except Exception as e: print("Out of bounds at {}".format(counter)) print("program state {}".format(program)) print(instruction) print(e) print(len(program)) return return program, True outputs = [] max_output = 0 # initial_program = list(map(int, open('input7.txt').read().split(','))) amp_ids = ['A', 'B', 'C', 'D', 'E'] permutation = [5, 6, 7, 8, 9] # for permutation in permutations([5, 6, 7, 8, 9]): amp_programs = {amp_id: [list(map(int, open('input7.txt').read().split(',')))[:], 0] for amp_id in ['A', 'B', 'C', 'D', 'E']} loops = 0 prev_output = 0 for x in range(0, 5): gc.collect() new_output, outer_counter = runProgram([permutation[x], prev_output], amp_programs[amp_ids[x]][0], amp_programs[amp_ids[x]][1]) if outer_counter is not True: prev_output = new_output amp_programs[amp_ids[x]][1] = outer_counter # print(new_output) while amp_programs['E'][1] is not True: gc.collect() for amp_id in amp_programs.keys(): amp = amp_programs[amp_id] # print(prev_output) # print('1:{}'.format(amp[1])) new_output, outer_counter = runProgram([prev_output], amp[0], amp[1]) if outer_counter is not True: prev_output = new_output amp[1] = outer_counter # print('{}, {}'.format(amp[1], 
outer_counter)) # outputs.append(prev_output) # print(prev_output) outputs.append(prev_output) # if prev_output > max_output: # max_output = prev_output print(max(outputs)) # print(outputs) """ However when this program is run on the same input it produces stable input: """ from itertools import permutations def runProgram(amp_input, program, counter): while program[counter] != 99: # print('*' * 99) instruction = str(program[counter]) opcode = instruction[-2:] value1 = program[counter + 1] # print('2:{}'.format(counter)) try: if opcode in ['01', '02', '1', '2', '5', '05', '6', '06', '7', '07', '8', '08']: value1 = program[counter + 1] value2 = program[counter + 2] param_modes = instruction[::-1][2:] # print('{} {} {} {}'.format(instruction, value1, value2, value3)) param_modes += '0' * (3 - len(param_modes)) # print(param_modes) if param_modes[0] == '0': value1 = program[value1] if param_modes[1] == '0': value2 = program[value2] # print('{} {} {} {}'.format(instruction, value1, value2, value3)) if opcode in ['01', '02', '1', '2', '7', '07', '8', '08']: value3 = program[counter + 3] if opcode.endswith('1'): program[value3] = value1 + value2 elif opcode.endswith('2'): program[value3] = value1 * value2 elif opcode in ['7', '07']: program[value3] = 1 if value1 < value2 else 0 elif opcode in ['8', '08']: program[value3] = 1 if value1 == value2 else 0 counter += 4 elif opcode in ['5', '05']: if value1 != 0: counter = value2 else: counter += 3 elif opcode in ['6', '06']: if value1 == 0: counter = value2 else: counter += 3 elif opcode in ['03', '3']: program[value1] = amp_input.pop(0) counter += 2 elif opcode in ['4', '04']: # print('{} {}'.format(instruction, value1)) if instruction != '104': value1 = program[value1] # print('Output value: {}'.format(value1)) counter += 2 return value1, counter else: print("Something broke at {}".format(counter)) print("program state {}".format(program)) print(instruction) return False except Exception as e: print("Out of bounds at 
{}".format(counter)) print("program state {}".format(program)) print(instruction) print(e) print(len(program)) return return program, True outputs = [] max_output = 0 # initial_program = list(map(int, open('input7.txt').read().split(','))) amp_ids = ['A', 'B', 'C', 'D', 'E'] permutation = [5, 6, 7, 8, 9] # for permutation in permutations([5, 6, 7, 8, 9]): amp_programs = {amp_id: [list(map(int, open('input7.txt').read().split(',')))[:], 0] for amp_id in ['A', 'B', 'C', 'D', 'E']} loops = 0 prev_output = 0 for amp, perm in zip(amp_programs.values(), permutation): new_output, outer_counter = runProgram([perm, prev_output], amp[0], amp[1]) if outer_counter is not True: prev_output = new_output amp[1] = outer_counter # print(new_output) while amp_programs['E'][1] is not True: for amp_id in amp_programs.keys(): amp = amp_programs[amp_id] # print(prev_output) # print('1:{}'.format(amp[1])) new_output, outer_counter = runProgram([prev_output], amp[0], amp[1]) if outer_counter is not True: prev_output = new_output amp[1] = outer_counter # print('{}, {}'.format(amp[1], outer_counter)) # outputs.append(prev_output) # print(prev_output) outputs.append(prev_output) # if prev_output > max_output: # max_output = prev_output print(max(outputs)) # print(outputs) """ The only difference is that the second program uses the zip function to iterate while the first uses the range function to iterate. Again this is not a case of divergent output, it's that the first program doesn't always have the same output, the second program always has the same output. 
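One observation worth recording: on Python 3.7+ a dict preserves insertion order, so the zip()-based loop visits exactly the same (amplifier, phase) pairs as the range()-based indexing; the two iteration styles are equivalent here, which suggests the instability originates elsewhere (e.g. in the shared program state). A minimal sketch of the equivalence, using toy stand-in data rather than the attached input:

```python
amp_ids = ['A', 'B', 'C', 'D', 'E']
permutation = [5, 6, 7, 8, 9]
# Toy stand-ins for the per-amplifier [program, counter] state.
amp_programs = {amp_id: [[99], 0] for amp_id in amp_ids}

# Style 1: index with range(), as in the unstable version.
pairs_range = [(amp_ids[x], permutation[x]) for x in range(5)]
# Style 2: zip() over the dict (insertion-ordered since Python 3.7).
pairs_zip = list(zip(amp_programs, permutation))

print(pairs_range == pairs_zip)
```

If both styles pair the amplifiers identically, any divergence between the two programs must come from something other than the iteration order itself.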
---------- components: Library (Lib) files: input7.txt messages: 358022 nosy: Sean Moss priority: normal severity: normal status: open title: Range causing unstable output(Windows64) type: behavior versions: Python 3.5 Added file: https://bugs.python.org/file48766/input7.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 8 16:07:58 2019 From: report at bugs.python.org (Dave Lawrence) Date: Sun, 08 Dec 2019 21:07:58 +0000 Subject: [New-bugs-announce] [issue39001] possible problem with 64-bit mingw DECREF Message-ID: <1575839278.99.0.426657382539.issue39001@roundup.psfhosted.org> New submission from Dave Lawrence : I am calling a python method from C using the attached code. The Site.py file is: import os def find_site(): path = os.path.abspath(".") return path Cross compiled to Windows from Linux using mxe.cc and python 2.7.17 On 32-bit this runs as expected: module = 028BC710 result = 0283D6B0 Found Site at \\wsl$\Ubuntu\home\dl result = 0283D6B0 decref module = 028BC710 decref Site = \\wsl$\Ubuntu\home\dl but crashes on 64-bit, failing to DECREF result: module = 0000000002750408 result = 0000000000E62EF0 Found Site at \\wsl$\Ubuntu\home\dl result = 0000000000E62EF0 decref In both cases the libpython was made using the .dll copied from the target Windows machine and pexports and dlltool to create the .a if the return value of the python is return "C:/Test/Path" it works. if you add test2 = test and return test2 it fails. if you say test2 = "".join(c for c in path) and return test2 it fails. 
if you set path2 = "C:/Test/Path" and return test2 it works. Using Py_REFCNT [in the C code] shows a value of 2 for a return "c:/test" but a value of 1 for a return test. ---------- components: Library (Lib), Windows files: py.cc messages: 358033 nosy: Dave Lawrence, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: possible problem with 64-bit mingw DECREF type: crash versions: Python 2.7 Added file: https://bugs.python.org/file48767/py.cc _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 05:04:23 2019 From: report at bugs.python.org (Tim Gates) Date: Mon, 09 Dec 2019 10:04:23 +0000 Subject: [New-bugs-announce] [issue39002] Small typo in Lib/test/test_statistics.py: tranlation -> translation Message-ID: <1575885863.07.0.931006878505.issue39002@roundup.psfhosted.org> New submission from Tim Gates : There is a small typo in Lib/test/test_statistics.py. Should read translation rather than tranlation. ---------- assignee: docs at python components: Documentation messages: 358063 nosy: docs at python, timgates42 priority: normal pull_requests: 16995 severity: normal status: open title: Small typo in Lib/test/test_statistics.py: tranlation -> translation type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 08:20:45 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 09 Dec 2019 13:20:45 +0000 Subject: [New-bugs-announce] [issue39003] test_unparse leaked [35, 5, 6] references: when pass when run again Message-ID: <1575897645.46.0.79915802392.issue39003@roundup.psfhosted.org> New submission from STINNER Victor : Sometimes test_unparse leaks references, sometimes it passes. $ ./python -m test -R 3:3 test_unparse (...) test_unparse leaked [35, 5, 6] references, sum=46 (...) 
Tests result: FAILURE $ ./python -m test -R 3:3 test_unparse (...) Tests result: SUCCESS ---------- components: Tests messages: 358074 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_unparse leaked [35, 5, 6] references: when pass when run again versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 08:24:37 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 09 Dec 2019 13:24:37 +0000 Subject: [New-bugs-announce] [issue39004] test_largefile: test_it() failed on Message-ID: <1575897877.42.0.877682602403.issue39004@roundup.psfhosted.org> New submission from STINNER Victor : 0:22:15 load avg: 15.82 [288/420/3] test_largefile failed (11 min 19 sec) -- running: test_capi (2 min 40 sec), test_compileall (3 min 42 sec), test_shelve (14 min 46 sec), test_sax (10 min 4 sec), test_dbm (3 min 36 sec), test_multiprocessing_spawn (13 min 21 sec), test_posix (3 min 39 sec), test_mailbox (12 min 46 sec), test_asyncio (9 min 23 sec) beginning 6 repetitions 123456 .Warning -- threading._dangling was modified by test_largefile Before: {} After: {, } test test_largefile failed -- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-stable-x86_64.refleak/build/Lib/test/test_largefile.py", line 211, in test_it self.assertEqual(os.path.getsize(TESTFN2), size) AssertionError: 2496925696 != 2500000001 ---------- messages: 358075 nosy: vstinner priority: normal severity: normal status: open title: test_largefile: test_it() failed on _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 08:28:39 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 09 Dec 2019 13:28:39 +0000 Subject: [New-bugs-announce] [issue39005] test_faulthandler: test_dump_traceback_later_file() fails randomly on AMD64 RHEL8 Refleaks 3.x 
Message-ID: <1575898119.77.0.459055301338.issue39005@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 RHEL8 Refleaks 3.x: https://buildbot.python.org/all/#/builders/206/builds/13 The system load was quite high: 12.64. 0:23:25 load avg: 12.64 [283/420/1] test_faulthandler failed (1 min 7 sec) -- running: test_bz2 (8 min 20 sec), test_dbm (12 min 51 sec), test_asyncio (14 min 46 sec), test_mmap (9 min 3 sec), test_zipfile (13 min 9 sec), test_mailbox (8 min 49 sec), test_largefile (8 min 15 sec), test_shelve (19 min 51 sec) beginning 6 repetitions 123456 .test test_faulthandler failed -- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_faulthandler.py", line 610, in test_dump_traceback_later_file self.check_dump_traceback_later(filename=filename) File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-x86_64.refleak/build/Lib/test/test_faulthandler.py", line 594, in check_dump_traceback_later self.assertRegex(trace, regex) AssertionError: Regex didn't match: '^Timeout \\(0:00:00.500000\\)!\\nThread 0x[0-9a-f]+ \\(most recent call first\\):\\n File "", line 17 in func\n File "", line 26 in $' not found in 'Timeout (0:00:00.500000)!\nThread 0x00007f69484b2740 (most recent call first):\n File "", line 18 in func\n File "", line 26 in ' The test passed when re-run in verbose mode. So it may be related to the high system load while tests were run in parallel. 
---------- components: Tests messages: 358077 nosy: vstinner priority: normal severity: normal status: open title: test_faulthandler: test_dump_traceback_later_file() fails randomly on AMD64 RHEL8 Refleaks 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 08:33:26 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 09 Dec 2019 13:33:26 +0000 Subject: [New-bugs-announce] [issue39006] test_ssl: sendfile tests fail on AMD64 Debian root 3.7 Message-ID: <1575898406.57.0.107596401277.issue39006@roundup.psfhosted.org> New submission from STINNER Victor : Pablo wrote on the buildbot-status mailing list: "It seems that this worker has some bad upgrade to SSL as all branches fail at the same time:" AMD64 Debian root 3.7: https://buildbot.python.org/all/#builders/3/builds/8 ====================================================================== ERROR: test_sock_sendfile_exception (test.test_asyncio.test_unix_events.SelectorEventLoopUnixSockSendfileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 613, in test_sock_sendfile_exception sock, proto = self.prepare() File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 491, in prepare self.run_loop(self.loop.sock_connect(sock, (support.HOST, port))) File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 481, in run_loop return self.loop.run_until_complete(coro) File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/asyncio/base_events.py", line 579, in run_until_complete return future.result() File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/asyncio/selector_events.py", line 460, in sock_connect if isinstance(sock, ssl.SSLSocket): 
AttributeError: 'NoneType' object has no attribute 'SSLSocket' ====================================================================== ERROR: test_sock_sendfile_iobuffer (test.test_asyncio.test_unix_events.SelectorEventLoopUnixSockSendfileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 524, in test_sock_sendfile_iobuffer sock, proto = self.prepare() File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 491, in prepare self.run_loop(self.loop.sock_connect(sock, (support.HOST, port))) File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 481, in run_loop return self.loop.run_until_complete(coro) File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/asyncio/base_events.py", line 579, in run_until_complete return future.result() File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/asyncio/selector_events.py", line 460, in sock_connect if isinstance(sock, ssl.SSLSocket): AttributeError: 'NoneType' object has no attribute 'SSLSocket' ====================================================================== ERROR: test_sock_sendfile_not_a_file (test.test_asyncio.test_unix_events.SelectorEventLoopUnixSockSendfileTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 515, in test_sock_sendfile_not_a_file sock, proto = self.prepare() File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 491, in prepare self.run_loop(self.loop.sock_connect(sock, (support.HOST, port))) File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/test/test_asyncio/test_unix_events.py", line 481, in run_loop return 
self.loop.run_until_complete(coro) File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/asyncio/base_events.py", line 579, in run_until_complete return future.result() File "/root/buildarea/3.7.angelico-debian-amd64/build/Lib/asyncio/selector_events.py", line 460, in sock_connect if isinstance(sock, ssl.SSLSocket): AttributeError: 'NoneType' object has no attribute 'SSLSocket' ---------- assignee: christian.heimes components: SSL, Tests messages: 358079 nosy: christian.heimes, pablogsal, vstinner priority: normal severity: normal status: open title: test_ssl: sendfile tests fail on AMD64 Debian root 3.7 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 12:27:29 2019 From: report at bugs.python.org (Steve Dower) Date: Mon, 09 Dec 2019 17:27:29 +0000 Subject: [New-bugs-announce] [issue39007] Add audit hooks to winreg module Message-ID: <1575912449.96.0.963204741941.issue39007@roundup.psfhosted.org> New submission from Steve Dower : The winreg module should have hooks added. 
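For observing such events from Python code once the hooks exist, sys.addaudithook() (PEP 578, Python 3.8+) is the Python-level counterpart of PySys_Audit(). A minimal sketch using the long-standing "compile" event, since the winreg events proposed here are not implemented yet:

```python
import sys

events = []

def hook(event, args):
    # Record the name of every audit event the runtime raises.
    events.append(event)

sys.addaudithook(hook)  # note: hooks cannot be removed once added

compile("1 + 1", "<demo>", "eval")  # raises the "compile" audit event
print("compile" in events)
```

Any new winreg events would show up in the same stream, alongside the existing os and sys events.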
---------- assignee: steve.dower components: Windows messages: 358119 nosy: christian.heimes, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Add audit hooks to winreg module type: security versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 13:34:13 2019 From: report at bugs.python.org (Steve Dower) Date: Mon, 09 Dec 2019 18:34:13 +0000 Subject: [New-bugs-announce] [issue39008] PySys_Audit should require PY_SSIZE_T_CLEAN Message-ID: <1575916453.13.0.923153204552.issue39008@roundup.psfhosted.org> New submission from Steve Dower : Currently, calls to PySys_Audit() that use "#" format strings will raise a deprecation warning because Python/sysmodule.c is not PY_SSIZE_T_CLEAN. Since PySys_Audit is a new API, we should just define it as always requiring Py_ssize_t. (Discovered while implementing issue39007.) ---------- assignee: steve.dower messages: 358125 nosy: steve.dower priority: normal severity: normal stage: needs patch status: open title: PySys_Audit should require PY_SSIZE_T_CLEAN type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 14:27:25 2019 From: report at bugs.python.org (Tim Gates) Date: Mon, 09 Dec 2019 19:27:25 +0000 Subject: [New-bugs-announce] [issue39009] Small typo in Lib/test/test__locale.py: thousauds -> thousands Message-ID: <1575919645.59.0.654742872958.issue39009@roundup.psfhosted.org> New submission from Tim Gates : In "Lib/test/test__locale.py" the text should read thousands rather than thousauds.
---------- messages: 358134 nosy: timgates42 priority: normal severity: normal status: open title: Small typo in Lib/test/test__locale.py: thousauds -> thousands _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 15:47:38 2019 From: report at bugs.python.org (Jonathan Slenders) Date: Mon, 09 Dec 2019 20:47:38 +0000 Subject: [New-bugs-announce] [issue39010] ProactorEventLoop raises unhandled ConnectionResetError Message-ID: <1575924458.67.0.403977169674.issue39010@roundup.psfhosted.org> New submission from Jonathan Slenders : We have a snippet of code that runs perfectly fine using the `SelectorEventLoop`, but crashes *sometimes* using the `ProactorEventLoop`. The traceback is the following. The exception cannot be caught within the asyncio application itself (e.g., it is not attached to any Future or propagated in a coroutine). It probably propagates in `run_until_complete()`.

  File "C:\Python38\lib\asyncio\proactor_events.py", line 768, in _loop_self_reading
    f.result() # may raise
  File "C:\Python38\lib\asyncio\windows_events.py", line 808, in _poll
    value = callback(transferred, key, ov)
  File "C:\Python38\lib\asyncio\windows_events.py", line 457, in finish_recv
    raise ConnectionResetError(*exc.args)

I can see that in `IocpProactor._poll`, `OSError` is caught and attached to the future, but not `ConnectionResetError`. I would expect `ConnectionResetError` to be attached to the future as well. In order to reproduce, run the following snippet on Python 3.8:

  from prompt_toolkit import prompt  # pip install prompt_toolkit

  while 1:
      prompt('>')

Hold down the enter key, and it'll trigger quickly.
See also: https://github.com/prompt-toolkit/python-prompt-toolkit/issues/1023 ---------- components: asyncio messages: 358140 nosy: Jonathan Slenders, asvetlov, yselivanov priority: normal severity: normal status: open title: ProactorEventLoop raises unhandled ConnectionResetError versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 9 18:40:50 2019 From: report at bugs.python.org (mefistotelis) Date: Mon, 09 Dec 2019 23:40:50 +0000 Subject: [New-bugs-announce] [issue39011] ElementTree attributes replace "\r" with "\n" Message-ID: <1575934850.24.0.848331688216.issue39011@roundup.psfhosted.org> New submission from mefistotelis : TLDR: If I place "\r" in an Element attribute, it is handled and escaped to "&#10;" in the XML file. But wait - \r is not really code 10, right? Real description: If I create an ElementTree and read it just after creation, I get what I put there - "\r". But if I save and re-load, it transforms into "\n". The character is incorrectly converted before being escaped, and the saved XML file has an invalid value stored. Quick repro:

  # python3 -i
  Python 3.8.0 (default, Oct 25 2019, 06:23:40) [GCC 9.2.0 64 bit (AMD64)] on win32
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import xml.etree.ElementTree as ET
  >>> elem = ET.Element('TEST')
  >>> elem.set("Attt", "a\x0db")
  >>> tree = ET.ElementTree(elem)
  >>> with open("_test1.xml", "wb") as xml_fh:
  ...     tree.write(xml_fh, encoding='utf-8', xml_declaration=True)
  ...
  >>> tree.getroot().get("Attt")
  'a\rb'
  >>> tree = ET.parse("_test1.xml")
  >>> tree.getroot().get("Attt")
  'a\nb'
  >>>

Related issue: https://bugs.python.org/issue5752 (keeping this one separate as it seems to be a simple bug, easy to fix outside of the discussion there) If there's a good workaround - please let me know.
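One possible workaround sketch for the report above: escape the carriage return yourself before the bytes reach the parser, so XML line-ending normalization has nothing to normalize. On versions where the serializer already escapes \r correctly, the replace() call is simply a no-op:

```python
import io
import xml.etree.ElementTree as ET

elem = ET.Element("TEST")
elem.set("Attt", "a\rb")

buf = io.BytesIO()
ET.ElementTree(elem).write(buf, encoding="utf-8", xml_declaration=True)

# On affected versions the bare CR byte ends up in the output; turn it
# into a character reference that survives re-parsing.
data = buf.getvalue().replace(b"\r", b"&#13;")

root = ET.fromstring(data)
print(repr(root.get("Attt")))  # the attribute round-trips as 'a\rb'
```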
Tested on Windows, v3.8 and v3.6 ---------- components: XML messages: 358154 nosy: mefistotelis priority: normal severity: normal status: open title: ElementTree attributes replace "\r" with "\n" type: behavior versions: Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 00:13:50 2019 From: report at bugs.python.org (Steve Dower) Date: Tue, 10 Dec 2019 05:13:50 +0000 Subject: [New-bugs-announce] [issue39012] nuget package published at 3.8.1-c1 instead of rc1 Message-ID: <1575954830.8.0.575755006659.issue39012@roundup.psfhosted.org> New submission from Steve Dower : Should be rc1 ---------- assignee: steve.dower components: Windows messages: 358163 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: nuget package published at 3.8.1-c1 instead of rc1 type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 04:48:48 2019 From: report at bugs.python.org (Artem Tepanov) Date: Tue, 10 Dec 2019 09:48:48 +0000 Subject: [New-bugs-announce] [issue39013] SyntaxError: 'break' outside loop for legal Expression Message-ID: <1575971328.9.0.440165492985.issue39013@roundup.psfhosted.org> New submission from Artem Tepanov : Why can't I execute this code:

  while False:
      if False:
          break
  print('WTF?')

When I use repl.it or PyCharm at work (Python 3.7) it all works fine. Yes, I know this code looks silly, but it is legal code. About the CPython interpreter:

  C:\WINDOWS\system32>python
  Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
  Type "help", "copyright", "credits" or "license" for more information.
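A hedged note on the report above: the snippet is valid as a script or module, but the interactive prompt compiles input one statement at a time ('single' mode), so a paste that mixes a compound statement with a following top-level statement can be rejected there - one plausible source of the confusing error. The difference is visible with compile():

```python
snippet = (
    "while False:\n"
    "    if False:\n"
    "        break\n"
    "print('done')\n"
)

# As a script or module ('exec' mode) this is perfectly legal:
compile(snippet, "<demo>", "exec")

# The REPL uses 'single' mode, which accepts only one statement at a
# time, so feeding the whole paste at once fails:
try:
    compile(snippet, "<demo>", "single")
    outcome = "accepted"
except SyntaxError as exc:
    outcome = f"SyntaxError: {exc.msg}"
print(outcome)
```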
---------- messages: 358175 nosy: Artem Tepanov priority: normal severity: normal status: open title: SyntaxError: 'break' outside loop for legal Expression type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 07:42:57 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 10 Dec 2019 12:42:57 +0000 Subject: [New-bugs-announce] [issue39014] test_concurrent_futures: test_crash() timed out on AMD64 Fedora Rawhide Refleaks 3.x Message-ID: <1575981777.91.0.383152425566.issue39014@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Fedora Rawhide Refleaks 3.x: https://buildbot.python.org/all/#/builders/82/builds/12 0:15:30 load avg: 12.25 [308/420/1] test_concurrent_futures failed (15 min 26 sec) -- running: test_largefile (11 min 22 sec), test_io (2 min 9 sec), test_mailbox (12 min 37 sec), test_faulthandler (53.4 sec), test_shelve (13 min 39 sec) beginning 6 repetitions 123456 .... 
Traceback: Thread 0x00007fae68fb6700 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 303 in wait File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/multiprocessing/queues.py", line 227 in _feed File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 882 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 944 in _bootstrap_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 902 in _bootstrap Thread 0x00007fae637fe700 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/selectors.py", line 415 in select File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/multiprocessing/connection.py", line 930 in wait File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/concurrent/futures/process.py", line 362 in _queue_management_worker File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 882 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 944 in _bootstrap_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 902 in _bootstrap Current thread 0x00007fae77c14740 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 946 in _fail_on_deadlock File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 1007 in test_crash File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 616 in _callTestMethod File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 659 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 719 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/testresult.py", line 162 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/__init__.py", line 2079 in _run_suite File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/__init__.py", line 2201 in run_unittest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 209 in _test_module File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/refleak.py", line 87 in dash_R File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 232 in _runtest_inner2 File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 153 in _runtest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 193 in runtest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 654 in _main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 634 in main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 712 in main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/regrtest.py", line 43 in _main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/regrtest.py", line 47 in File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/runpy.py", line 86 in _run_code File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/runpy.py", line 193 in _run_module_as_main test test_concurrent_futures failed -- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 1003, in test_crash res.result(timeout=self.TIMEOUT) File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/concurrent/futures/_base.py", line 441, in result raise TimeoutError() concurrent.futures._base.TimeoutError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 1007, in test_crash self._fail_on_deadlock(executor) File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 955, in _fail_on_deadlock self.fail(f"Executor deadlock:\n\n{tb}") AssertionError: Executor deadlock: Thread 0x00007fae68fb6700 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 303 in wait File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/multiprocessing/queues.py", line 227 in _feed File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 882 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 944 in _bootstrap_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 902 in _bootstrap Thread 0x00007fae637fe700 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/selectors.py", line 415 in select File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/multiprocessing/connection.py", line 930 in wait File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/concurrent/futures/process.py", line 362 in _queue_management_worker File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 882 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 944 in _bootstrap_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 902 in _bootstrap Current thread 0x00007fae77c14740 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 946 in _fail_on_deadlock File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 1007 in test_crash File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 616 in _callTestMethod File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 659 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 719 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/testresult.py", line 162 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/__init__.py", line 2079 in _run_suite File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/__init__.py", line 2201 in run_unittest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 209 in _test_module File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/refleak.py", line 87 in dash_R File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 232 in _runtest_inner2 File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 153 in _runtest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 193 in runtest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 654 in _main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 634 in main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 712 in main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/regrtest.py", line 43 in _main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/regrtest.py", line 47 in File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/runpy.py", line 86 in _run_code File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/runpy.py", line 193 in _run_module_as_main (...) 
====================================================================== FAIL: test_crash (test.test_concurrent_futures.ProcessPoolForkserverExecutorDeadlockTest) [crash at task unpickle] ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 1003, in test_crash res.result(timeout=self.TIMEOUT) File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/concurrent/futures/_base.py", line 441, in result raise TimeoutError() concurrent.futures._base.TimeoutError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 1007, in test_crash self._fail_on_deadlock(executor) File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 955, in _fail_on_deadlock self.fail(f"Executor deadlock:\n\n{tb}") AssertionError: Executor deadlock: Thread 0x00007f38c9484700 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 303 in wait File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/multiprocessing/queues.py", line 227 in _feed File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 882 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 944 in _bootstrap_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 902 in _bootstrap Thread 0x00007f38cac87700 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/selectors.py", line 415 in select 
File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/multiprocessing/connection.py", line 930 in wait File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/concurrent/futures/process.py", line 362 in _queue_management_worker File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 882 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 944 in _bootstrap_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/threading.py", line 902 in _bootstrap Current thread 0x00007f38d9b9a740 (most recent call first): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 946 in _fail_on_deadlock File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/test_concurrent_futures.py", line 1007 in test_crash File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 616 in _callTestMethod File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 659 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/case.py", line 719 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 122 in run File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/unittest/runner.py", line 176 in run File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/__init__.py", line 2079 in _run_suite File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/support/__init__.py", line 2201 in run_unittest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 209 in _test_module File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/refleak.py", line 87 in dash_R File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 232 in _runtest_inner2 File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 153 in _runtest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/runtest.py", line 193 in runtest File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 318 in rerun_failed_tests File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 691 in _main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 634 in main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/libregrtest/main.py", line 712 in main File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/test/__main__.py", line 2 in File 
"/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/runpy.py", line 86 in _run_code File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64.refleak/build/Lib/runpy.py", line 193 in _run_module_as_main ---------------------------------------------------------------------- Ran 168 tests in 172.838s FAILED (failures=1, skipped=3) 1 test failed again: test_concurrent_futures ---------- components: Tests messages: 358185 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_concurrent_futures: test_crash() timed out on AMD64 Fedora Rawhide Refleaks 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 10:03:06 2019 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Tue, 10 Dec 2019 15:03:06 +0000 Subject: [New-bugs-announce] [issue39015] DeprecationWarnings of implicitly truncations by __int__ appearing in the standard library Message-ID: <1575990186.74.0.753297592604.issue39015@roundup.psfhosted.org> New submission from Łukasz Langa : The original issue was bpo-36048. Some call sites were not updated and now 3.8.0 and 3.8.1rc1 are emitting a lot of warnings like:

  :219: DeprecationWarning: an integer is required (got type float). Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python.

Adding authors of GH-11952 as nosy.
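For call sites that hit this warning, the forward-compatible pattern is to convert explicitly, or to give the passed object a lossless __index__ so no implicit __int__ truncation is involved. A small sketch (the Duration class is purely illustrative, not from the patch):

```python
class Duration:
    """Illustrative wrapper that is usable wherever an int is expected."""

    def __init__(self, seconds):
        self.seconds = seconds

    def __index__(self):
        # A lossless integer conversion: unlike __int__, __index__ is
        # accepted by int-expecting C APIs without a DeprecationWarning.
        return int(self.seconds)

# chr() expects an integer; __index__ makes this work warning-free.
print(chr(Duration(65)))   # -> A

# At call sites that really do hold a float, convert explicitly instead
# of relying on implicit truncation:
print(chr(int(65.9)))      # -> A
```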
---------- components: Library (Lib) keywords: 3.8regression messages: 358195 nosy: lukasz.langa, serhiy.storchaka, vstinner priority: normal severity: normal stage: needs patch status: open title: DeprecationWarnings of implicitly truncations by __int__ appearing in the standard library type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 10:37:03 2019 From: report at bugs.python.org (Christian Tismer) Date: Tue, 10 Dec 2019 15:37:03 +0000 Subject: [New-bugs-announce] [issue39016] Negative Refcount in Python 3.8 Message-ID: <1575992223.28.0.659364408655.issue39016@roundup.psfhosted.org> New submission from Christian Tismer : With the new Py_TPFLAGS_METHOD_DESCRIPTOR flag, a new code path is activated, and when extension types like PySide create a new class, we observe negative refcounts. The reason is that the function type_mro_modified in typeobject.c calls lookup_maybe_method, which returns a _borrowed_ reference. This happens in the "if (custom) {" branch. Removing all Py_XDECREF calls from the function fixes that.
---------- components: Extension Modules messages: 358198 nosy: Christian.Tismer priority: critical severity: normal status: open title: Negative Refcount in Python 3.8 type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 11:19:56 2019 From: report at bugs.python.org (jvoisin) Date: Tue, 10 Dec 2019 16:19:56 +0000 Subject: [New-bugs-announce] [issue39017] Infinite loop in the tarfile module Message-ID: <1575994796.67.0.46582695163.issue39017@roundup.psfhosted.org> New submission from jvoisin : While playing with fuzzing and Python, I stumbled upon an infinite loop in Python's tarfile module: just open the attached file with `tarfile.open('timeout-a52710a313fdb35fb428c3399277cb640fe2f686')`, and Python will be endlessly stuck in the `_proc_pax` function in tarfile.py, likely due to a missing check that `length` is strictly greater than zero. ---------- files: timeout-a52710a313fdb35fb428c3399277cb640fe2f686 messages: 358200 nosy: ethan.furman, jvoisin priority: normal severity: normal status: open title: Infinite loop in the tarfile module type: security versions: Python 3.7 Added file: https://bugs.python.org/file48768/timeout-a52710a313fdb35fb428c3399277cb640fe2f686 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 11:47:07 2019 From: report at bugs.python.org (jvoisin) Date: Tue, 10 Dec 2019 16:47:07 +0000 Subject: [New-bugs-announce] [issue39018] IndexError exception on corrupted zip file Message-ID: <1575996427.98.0.820358175016.issue39018@roundup.psfhosted.org> New submission from jvoisin : The attached file raises an `IndexError: tuple index out of range` exception when trying to open it with `zipfile.ZipFile('crash-23b7d72644702df94bfcfaab4c25b01ff31c0b38')`, with the following stacktrace:

```
$ cat test_zip.py
import zipfile
import sys

with zipfile.ZipFile(sys.argv[1]) as f:
    pass
$ python3 ./test_zip.py ./crash-23b7d72644702df94bfcfaab4c25b01ff31c0b38
Traceback (most recent call last):
  File "./test_zip.py", line 4, in <module>
    with zipfile.ZipFile(sys.argv[1]) as f:
  File "/usr/lib/python3.7/zipfile.py", line 1225, in __init__
    self._RealGetContents()
  File "/usr/lib/python3.7/zipfile.py", line 1348, in _RealGetContents
    x._decodeExtra()
  File "/usr/lib/python3.7/zipfile.py", line 480, in _decodeExtra
    self.file_size = counts[idx]
IndexError: tuple index out of range
$
```

The zipfile documentation doesn't mention that IndexError is a possible exception for this method. ---------- components: Library (Lib) files: crash-23b7d72644702df94bfcfaab4c25b01ff31c0b38 messages: 358202 nosy: jvoisin priority: normal severity: normal status: open title: IndexError exception on corrupted zip file type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48769/crash-23b7d72644702df94bfcfaab4c25b01ff31c0b38 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 12:55:54 2019 From: report at bugs.python.org (Batuhan) Date: Tue, 10 Dec 2019 17:55:54 +0000 Subject: [New-bugs-announce] [issue39019] Missing class getitems in standard library classes Message-ID: <1576000554.09.0.555887969918.issue39019@roundup.psfhosted.org> New submission from Batuhan : After working on issue 38994 and issue 38978, I decided to write a simple AST analyzer to find class getitem syntax usage in typeshed. It discovered a few classes (I am not sure if there are more). As @brett.cannon suggested in PR 17498, I'll prepare individual pull requests.
typeshed/stdlib/3/subprocess.pyi:868 => Popen
typeshed/stdlib/3/subprocess.pyi:82 => CompletedProcess
typeshed/stdlib/3/tempfile.pyi:98 => SpooledTemporaryFile
typeshed/stdlib/3/os/__init__.pyi:463 => DirEntry
typeshed/stdlib/3/http/cookies.pyi:5 => Morsel

---------- messages: 358209 nosy: BTaskaya, brett.cannon priority: normal severity: normal status: open title: Missing class getitems in standard library classes _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 10 13:41:14 2019 From: report at bugs.python.org (Michael Felt) Date: Tue, 10 Dec 2019 18:41:14 +0000 Subject: [New-bugs-announce] [issue39020] [AIX] module _ctypes fails to build since ESCDELAY has been added Message-ID: <1576003274.61.0.235026296751.issue39020@roundup.psfhosted.org> New submission from Michael Felt : Did not notice this earlier, as the buildbot does not report it: issue38312 introduced a regression with regard to AIX. Not sure how to classify the component (as Build, C API, or Library), so left blank.

Failed to build these modules: _curses

  commit b32cb97bce472dad337c6b2f071883f6234e21d8
  Author: Anthony Sottile
  Date: Thu Oct 31 02:13:48 2019 -0700

      bpo-38312: Add curses.{get,set}_escdelay and curses.{get,set}_tabsize. (GH-16938)

Background: ncurses is not part of AIX; curses is. The ncurses packages provided by other parties are not stable enough to, among other things, allow the buildbot to pass. Prior to this commit AIX passed all tests related to _curses.
----------
messages: 358210
nosy: Michael.Felt
priority: normal
severity: normal
status: open
title: [AIX] module _curses fails to build since ESCDELAY has been added
type: compile error
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Dec 10 18:22:54 2019
From: report at bugs.python.org (Matt)
Date: Tue, 10 Dec 2019 23:22:54 +0000
Subject: [New-bugs-announce] [issue39021] multiprocessing is_alive() between children processes
Message-ID: <1576020174.84.0.795117240122.issue39021@roundup.psfhosted.org>

New submission from Matt :

I'm trying to evaluate a process's state between two "sibling" processes (processes created by the same parent process), using .is_alive() and exitcode to evaluate whether a process has been init'd, started, or finished successfully or unsuccessfully. The reference to one process is passed to the other and I'd like to call .is_alive(). This raises the following assertion error:

Process C-2:
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/timms/.PyCharm2019.2/config/scratches/is_alive_method.py", line 59, in run
    print("brother - ", self.brother.state)
  File "/home/timms/.PyCharm2019.2/config/scratches/is_alive_method.py", line 16, in state
    if self.is_alive():
  File "/usr/lib/python3.7/multiprocessing/process.py", line 151, in is_alive
    assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process

It's obvious that the assertion fails given the family structure of the processes. Overwriting the is_alive() method in my own process class appears to produce my desired output behaviour - with the assertion and the discarding of self removed (see attachment). Is there something fundamental to how processes operate that I should be wary of?
I understand that is_alive also joins itself if possible; is that the sole reason for the assertion? Could a method that mirrors is_alive() without the assertion and discard method work with the desired intention I've described above?

----------
files: is_alive_method.py
messages: 358232
nosy: matttimms
priority: normal
severity: normal
status: open
title: multiprocessing is_alive() between children processes
type: behavior
versions: Python 3.7
Added file: https://bugs.python.org/file48770/is_alive_method.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Dec 10 18:24:23 2019
From: report at bugs.python.org (Jason R. Coombs)
Date: Tue, 10 Dec 2019 23:24:23 +0000
Subject: [New-bugs-announce] [issue39022] Synchronize importlib.metadata with importlib_metadata 1.2
Message-ID: <1576020263.92.0.109449278441.issue39022@roundup.psfhosted.org>

New submission from Jason R. Coombs :

Calling for another refresh of importlib.metadata from the third-party package. History at https://importlib-metadata.readthedocs.io/en/latest/changelog%20(links).html.

----------
messages: 358233
nosy: jaraco
priority: normal
severity: normal
status: open
title: Synchronize importlib.metadata with importlib_metadata 1.2
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Dec 10 19:25:38 2019
From: report at bugs.python.org (Michael Thompson)
Date: Wed, 11 Dec 2019 00:25:38 +0000
Subject: [New-bugs-announce] [issue39023] random.seed with string and version 1 not deterministic in 3.5.2
Message-ID: <1576023938.27.0.564451634943.issue39023@roundup.psfhosted.org>

New submission from Michael Thompson :

In version 3.5.2, the "rand string seed" is not deterministic in the code sample below across multiple invocations of the program. Python 3.6.8 works fine.
#!/usr/bin/env python3
import random

lis = '94'
random.seed(lis, version=1)
w = random.random() * 100
print('rand string seed: %d' % w)

lis = 94
random.seed(lis, version=1)
w = random.random() * 100
print('rand int seed: %d' % w)

Running in a Docker container:

uname -a: Linux formatstring-igrader 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

----------
messages: 358236
nosy: mfthomps
priority: normal
severity: normal
status: open
title: random.seed with string and version 1 not deterministic in 3.5.2
type: behavior
versions: Python 3.5

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 11 01:33:06 2019
From: report at bugs.python.org (John94)
Date: Wed, 11 Dec 2019 06:33:06 +0000
Subject: [New-bugs-announce] [issue39024] Compiling relative paths test fails (install via pyenv)
Message-ID: <1576045986.0.0.10589382757.issue39024@roundup.psfhosted.org>

New submission from John94 :

Installed the below versions using pyenv on macOS 10.15.2; once installed, I ran tests on all versions and they all failed on the "test_py_compile" test.
2.7.17 - https://pastebin.com/iFCA7FZb
3.6.9 - https://pastebin.com/UYfUqK9p
3.7.5 - https://pastebin.com/dzKeYVZD
3.8.0 - https://pastebin.com/sT7DT1WQ

----------
components: Tests
files: 2_7_17.txt
messages: 358250
nosy: John94
priority: normal
severity: normal
status: open
title: Compiling relative paths test fails (install via pyenv)
type: behavior
versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8
Added file: https://bugs.python.org/file48771/2_7_17.txt

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 11 06:30:56 2019
From: report at bugs.python.org (Bluebird)
Date: Wed, 11 Dec 2019 11:30:56 +0000
Subject: [New-bugs-announce] [issue39025] Windows Python Launcher does not update PATH to Scripts directory
Message-ID: <1576063856.43.0.431001018192.issue39025@roundup.psfhosted.org>

New submission from Bluebird :

The py Python launcher is a great improvement over a few years ago, when managing multiple Python installations was tedious. However, it does not solve one annoying problem: the Scripts directory.

If you install tools like mypy, pyqt-tools or pyinstaller, to be able to use them you need to have the Scripts directory in your PATH variable. And if you have multiple Python installations, you are back to square one, where you have to explicitly modify your PATH according to the version of Python where the tools have been installed.

To give a practical example, at work I have Python 3.1 because some of our software is distributed as .pyc for this version of Python, Python 3.5 for the same reason, and Python 3.7 for all the developments which do not incur the previous dependencies. The default environment, with Python and Python\Scripts added to the path, is Python 3.5. However, for all PyQt developments, I use Python 3.7.
I launch my program with py -3.7 myfancygui.py, but when I want to access some of the PyQt tools like pyuic5, I need to explicitly add Python3.7\Scripts to my PATH.

The technical solution is not clear to me, but it would be nice to duplicate the benefits of the py launcher for the Scripts directory. Some random propositions:

1. Create a pyscript launcher, which would work like py but would take as command-line the name of the script to run:

   pyscript -3.7 pyuic5

2. Let py execute a command-line with the proper environment for that specific version of Python:

   > py -env-3.7
   Environment adjusted for Python 3.7
   > pyuic5
   ... works fine
   > exit
   Back to default environment

There are probably other strategic ways to reach the same result.

----------
components: Windows
messages: 358256
nosy: bluebird, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Windows Python Launcher does not update PATH to Scripts directory
type: enhancement
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 11 10:34:04 2019
From: report at bugs.python.org (Gaige Paulsen)
Date: Wed, 11 Dec 2019 15:34:04 +0000
Subject: [New-bugs-announce] [issue39026] pystate.h contains non-relative include of initconfig.h causing macOS Framework include failure
Message-ID: <1576078444.86.0.577960577024.issue39026@roundup.psfhosted.org>

New submission from Gaige Paulsen :

The cpython/pystate.h includes cpython/initconfig.h using the relative path "cpython/initconfig.h", which probably works fine if your include path explicitly contains the top of the python directory; however, when developing with a framework on macOS, the framework's root path cannot be referred to relatively.
Since cpython/pystate.h and cpython/initconfig.h live in the same directory, any C compiler should include them correctly, regardless of include path, if cpython/pystate.h includes "initconfig.h" instead of "cpython/initconfig.h", since I believe the very first path is always relative to the file including the next file. In this case, cpython is the parent of pystate.h, and thus including initconfig.h directly should work fine.

Previous 3.x versions worked fine, but the cpython directory wasn't in Headers for macOS. Although I wasn't able to exhaustively test this on all platforms and with all compilers, changing the include in cpython/pystate.h to "initconfig.h" solved the compilation/include problem.

----------
components: C API, macOS
messages: 358266
nosy: gaige, ned.deily, ronaldoussoren
priority: normal
severity: normal
status: open
title: pystate.h contains non-relative include of initconfig.h causing macOS Framework include failure
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 11 17:04:07 2019
From: report at bugs.python.org (janust)
Date: Wed, 11 Dec 2019 22:04:07 +0000
Subject: [New-bugs-announce] [issue39027] run_coroutine_threadsafe uses wrong TimeoutError
Message-ID: <1576101847.74.0.586846433042.issue39027@roundup.psfhosted.org>

New submission from janust :

https://docs.python.org/3.8/library/asyncio-task.html#asyncio.run_coroutine_threadsafe has a code example that catches an asyncio.TimeoutError from run_coroutine_threadsafe. In Python 3.7, this exception was equal to concurrent.futures.TimeoutError, but since https://github.com/python/cpython/commit/431b540bf79f0982559b1b0e420b1b085f667bb7 that is not the case anymore.
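A minimal sketch of the pattern the docs describe, catching concurrent.futures.TimeoutError, which is what the Future returned by run_coroutine_threadsafe actually raises from result() (the loop, thread, and coroutine below are illustrative, not from the report):

```python
import asyncio
import concurrent.futures
import threading

# A separate thread runs the event loop, as in the documented pattern.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def slow():
    await asyncio.sleep(30)

# run_coroutine_threadsafe returns a concurrent.futures.Future, so its
# result() raises concurrent.futures.TimeoutError on timeout -- which,
# after the commit above, is no longer the same as asyncio.TimeoutError.
future = asyncio.run_coroutine_threadsafe(slow(), loop)
timed_out = False
try:
    future.result(timeout=0.1)
except concurrent.futures.TimeoutError:
    timed_out = True
    future.cancel()

loop.call_soon_threadsafe(loop.stop)
thread.join()
print("timed out:", timed_out)
```

Catching concurrent.futures.TimeoutError here works on every affected version, so it is the safe spelling for the documentation example.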
----------
assignee: docs at python
components: Documentation
messages: 358281
nosy: docs at python, janust
priority: normal
severity: normal
status: open
title: run_coroutine_threadsafe uses wrong TimeoutError
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Dec 11 23:04:55 2019
From: report at bugs.python.org (Sebastian Berg)
Date: Thu, 12 Dec 2019 04:04:55 +0000
Subject: [New-bugs-announce] [issue39028] ENH: Fix performance issue in keyword extraction
Message-ID: <1576123495.69.0.334207747364.issue39028@roundup.psfhosted.org>

New submission from Sebastian Berg :

The keyword argument extraction/finding function seems to have a performance bug/enhancement (unless I am missing something here). It reads:

```
for (i = 0; i < nkwargs; i++) {
    PyObject *kwname = PyTuple_GET_ITEM(kwnames, i);

    /* ptr==ptr should match in most cases since keyword keys
       should be interned strings */
    if (kwname == key) {
        return kwstack[i];
    }
    assert(PyUnicode_Check(kwname));
    if (_PyUnicode_EQ(kwname, key)) {
        return kwstack[i];
    }
}
```

However, it should be split into two separate for loops, using the `_PyUnicode_EQ` check only if it failed for _all_ other arguments. I will open a PR for this (it seemed like a bpo number is wanted for almost everything).
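The proposed split can be illustrated in pure Python (a sketch of the idea, not the actual C patch; the function name and arguments are hypothetical):

```python
def find_keyword(kwnames, kwstack, key):
    # Fast pass: identity only. Keyword names are normally interned,
    # so in the common case this finds the match without ever doing a
    # character-by-character comparison.
    for i, kwname in enumerate(kwnames):
        if kwname is key:
            return kwstack[i]
    # Slow pass: full string equality, reached only when no name was
    # identical -- the "failed for _all_ other arguments" case above.
    for i, kwname in enumerate(kwnames):
        if kwname == key:
            return kwstack[i]
    return None

print(find_keyword(("alpha", "beta"), (1, 2), "beta"))  # -> 2
```

The single-loop version pays for an equality comparison on every non-matching name; the two-pass version defers all equality comparisons until identity has failed everywhere.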
----------
components: C API
messages: 358287
nosy: seberg
priority: normal
severity: normal
status: open
title: ENH: Fix performance issue in keyword extraction
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 12 03:59:34 2019
From: report at bugs.python.org (Karthikeyan Singaravelan)
Date: Thu, 12 Dec 2019 08:59:34 +0000
Subject: [New-bugs-announce] [issue39029] TestMaildir.test_clean fails randomly under parallel tests
Message-ID: <1576141174.61.0.510141920657.issue39029@roundup.psfhosted.org>

New submission from Karthikeyan Singaravelan :

I can frequently reproduce the failure of test_clean on my Mac machine. It checks for removal of foo_path, which is removed by Maildir.clean based on the access time, as below. The test does a similar thing with os.utime(foo_path, (time.time() - 129600 - 2, foo_stat.st_mtime)), but I guess the file is not really deleted in some cases.
https://github.com/python/cpython/blob/7772b1af5ebc9d72d0cfc8332aea6b2143eafa27/Lib/mailbox.py#L482

def clean(self):
    """Delete old files in "tmp"."""
    now = time.time()
    for entry in os.listdir(os.path.join(self._path, 'tmp')):
        path = os.path.join(self._path, 'tmp', entry)
        if now - os.path.getatime(path) > 129600:   # 60 * 60 * 36
            os.remove(path)

$ ./python.exe -Wall -m test -R 3:3 -j 4 test_mailbox -m test_clean
0:00:00 load avg: 2.12 Run tests in parallel using 4 child processes
0:00:00 load avg: 2.12 [1/1/1] test_mailbox failed
beginning 6 repetitions
123456
.....test test_mailbox failed -- Traceback (most recent call last):
  File "/Users/kasingar/stuff/python/cpython/Lib/test/test_mailbox.py", line 737, in test_clean
    self.assertFalse(os.path.exists(foo_path))
AssertionError: True is not false

== Tests result: FAILURE ==
1 test failed: test_mailbox
Total duration: 951 ms
Tests result: FAILURE

----------
components: Tests
messages: 358295
nosy: barry, maxking, r.david.murray, xtreak
priority: normal
severity: normal
status: open
title: TestMaildir.test_clean fails randomly under parallel tests
type: behavior
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 12 06:15:24 2019
From: report at bugs.python.org (dankreso)
Date: Thu, 12 Dec 2019 11:15:24 +0000
Subject: [New-bugs-announce] [issue39030] Ctypes unions with bitfield members that do not share memory
Message-ID: <1576149324.34.0.405331329456.issue39030@roundup.psfhosted.org>

New submission from dankreso :

I've found what looks like a corner case bug.
Specifically, the behaviour that looks suspicious is when a ctypes union has bit field members, where the members have bit widths that are smaller than the size of their types:

class BitFieldUnion(Union):
    _fields_ = [("a", c_uint32, 16), ("b", c_uint32, 16)]

buff = bytearray(4)
bitfield_union = BitFieldUnion.from_buffer(buff)
bitfield_union.a = 1
bitfield_union.b = 2
print("a is {}".format(bitfield_union.a))  # Prints "a is 1"
print("b is {}".format(bitfield_union.b))  # Prints "b is 2"
print("Buffer: {}".format(buff))  # Prints "Buffer: b'\x01\x00\x00\x00'"

(Example of this script can be found at https://rextester.com/XJFGAK37131. I've also tried it on my system, which is Ubuntu 16.04.2 LTS with Python 3.6.)

Here I would expect both 'a' and 'b' to be set to 2, and the buffer to look like '\x02\x00\x00\x00'. Here's the equivalent code in C, which behaves as expected: https://rextester.com/HWDUMB56821. If at least one of the bit widths in the above example is changed from 16 to 32, however, then 'a', 'b', and the buffer look as expected. I've also attached some further examples of weird behaviour with unions with bitfield members - an online version can be found at https://rextester.com/VZRB77320.
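For contrast, a sketch of the full-width case the report mentions at the end (bit width equal to the storage type), where the union members do alias the same memory as a C union would; the class and field names here are illustrative:

```python
import ctypes

class FullWidthUnion(ctypes.Union):
    # Each member occupies its storage type completely (16 of 16 bits),
    # so no bitfield packing is involved and both names refer to the
    # same two bytes.
    _fields_ = [("a", ctypes.c_uint16), ("b", ctypes.c_uint16)]

u = FullWidthUnion()
u.a = 1
u.b = 2
print(u.a, u.b)   # -> 2 2: writing b overwrites a
print(bytes(u))   # b'\x02\x00' on a little-endian machine
```

This is the behaviour the reporter expected from the bitfield variant as well, which is what makes the 16-bit-of-32 case look like a bug.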
----------
components: ctypes
files: bitfield_union.py
messages: 358300
nosy: dankreso
priority: normal
severity: normal
status: open
title: Ctypes unions with bitfield members that do not share memory
type: behavior
versions: Python 3.7
Added file: https://bugs.python.org/file48772/bitfield_union.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 12 15:15:07 2019
From: report at bugs.python.org (Lysandros Nikolaou)
Date: Thu, 12 Dec 2019 20:15:07 +0000
Subject: [New-bugs-announce] [issue39031] Inconsistent lineno and col_offset info when parsing elif
Message-ID: <1576181707.03.0.620704542473.issue39031@roundup.psfhosted.org>

New submission from Lysandros Nikolaou :

While working on pegen, we came across an inconsistency in how line number and column offset info is stored for (el)if nodes. When parsing a very simple if-elif construct like

if a:
    pass
elif b:
    pass

the following parse tree gets generated:

Module(
    body=[
        If(
            test=Name(id="a", ctx=Load(), lineno=1, col_offset=3, end_lineno=1, end_col_offset=4),
            body=[Pass(lineno=2, col_offset=4, end_lineno=2, end_col_offset=8)],
            orelse=[
                If(
                    test=Name(
                        id="b", ctx=Load(), lineno=3, col_offset=5, end_lineno=3, end_col_offset=6
                    ),
                    body=[Pass(lineno=4, col_offset=4, end_lineno=4, end_col_offset=8)],
                    orelse=[],
                    lineno=3,
                    col_offset=5,
                    end_lineno=4,
                    end_col_offset=8,
                )
            ],
            lineno=1,
            col_offset=0,
            end_lineno=4,
            end_col_offset=8,
        )
    ],
    type_ignores=[],
)

There is the inconsistency that the column offset for the if statement is 0, thus the if statement starts with the keyword if, whereas the column offset for elif is 5, which means that the elif keyword is skipped. As Guido suggests over at https://github.com/gvanrossum/pegen/issues/107#issuecomment-565135047 we could very easily change Python/ast.c so that the elif statement starts with the elif keyword as well. I have a PR ready!
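The inconsistency is easy to observe directly with ast.parse; the inner If's col_offset is 5 before the proposed change and becomes 0 (the column of the elif keyword) once it lands:

```python
import ast

source = "if a:\n    pass\nelif b:\n    pass\n"
tree = ast.parse(source)

outer = tree.body[0]
assert isinstance(outer, ast.If)
print(outer.lineno, outer.col_offset)  # 1 0 -- starts at the 'if' keyword

# The elif branch is represented as a nested If inside orelse.
inner = outer.orelse[0]
assert isinstance(inner, ast.If)
print(inner.lineno, inner.col_offset)  # 3, then 5 before the change or 0 after it
```

The exact value printed for the inner node therefore depends on the Python version running the snippet.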
----------
messages: 358304
nosy: lys.nikolaou
priority: normal
severity: normal
status: open
title: Inconsistent lineno and col_offset info when parsing elif
type: behavior
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 12 17:11:47 2019
From: report at bugs.python.org (Chris)
Date: Thu, 12 Dec 2019 22:11:47 +0000
Subject: [New-bugs-announce] [issue39032] wait_for and Condition.wait still not playing nicely
Message-ID: <1576188707.28.0.976402286233.issue39032@roundup.psfhosted.org>

New submission from Chris :

This is related to https://bugs.python.org/issue22970, https://bugs.python.org/issue33638, and https://bugs.python.org/issue32751. I've replicated the issue on Python 3.6.9, 3.7.4, and 3.8.0. Looking at the source, I'm fairly sure the bug is still in master right now.

The problem is yet another case of wait_for returning early, before the child has been fully cancelled and terminated. The issue arises if wait_for itself is cancelled. Take the following minimal example:

cond = asyncio.Condition()

async def coro():
    async with cond:
        await asyncio.wait_for(cond.wait(), timeout=999)

If coro is cancelled a few seconds after being run, wait_for will cancel the cond.wait(), then immediately re-raise the CancelledError inside coro, leading to "RuntimeError: Lock is not acquired."

Relevant source code plucked from the 3.8 branch is as follows:

try:
    # wait until the future completes or the timeout
    try:
        await waiter
    except exceptions.CancelledError:
        fut.remove_done_callback(cb)
        fut.cancel()
        raise

    if fut.done():
        return fut.result()
    else:
        fut.remove_done_callback(cb)
        # We must ensure that the task is not running
        # after wait_for() returns.
        # See https://bugs.python.org/issue32751
        await _cancel_and_wait(fut, loop=loop)
        raise exceptions.TimeoutError()
finally:
    timeout_handle.cancel()

Note how if the timeout occurs, the method waits for the future to complete before raising. If CancelledError is thrown, it doesn't. A simple fix seems to be replacing the "fut.cancel()" with "await _cancel_and_wait(fut, loop=loop)" so the behaviour is the same in both cases; however, I'm only superficially familiar with the code, and am unsure if this would cause other problems.

----------
components: asyncio
messages: 358307
nosy: asvetlov, criches, yselivanov
priority: normal
severity: normal
status: open
title: wait_for and Condition.wait still not playing nicely
type: behavior
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Dec 12 17:20:17 2019
From: report at bugs.python.org (Mihail Georgiev)
Date: Thu, 12 Dec 2019 22:20:17 +0000
Subject: [New-bugs-announce] [issue39033] zipimport raises NameError: name '_boostrap_external' is not defined
Message-ID: <1576189217.79.0.0424351023121.issue39033@roundup.psfhosted.org>

New submission from Mihail Georgiev :

I think there's a "t" missing:

Lib/zipimport.py
609-
610-        try:
611:            _boostrap_external._validate_hash_pyc(
612-                data, source_hash, fullname, exc_details)
613-        except ImportError:
614-            return None
615-        else:
616-            source_mtime, source_size = \
617-                _get_mtime_and_size_of_source(self, fullpath)
618-
619-        if source_mtime:

----------
components: Library (Lib)
messages: 358310
nosy: misho88
priority: normal
severity: normal
status: open
title: zipimport raises NameError: name '_boostrap_external' is not defined
type: crash
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Dec 13 01:49:03 2019
From: report at bugs.python.org (Rustam Agakishiev)
Date: Fri, 13 Dec 2019 06:49:03 +0000
Subject: [New-bugs-announce] [issue39034] Documentation: Coroutines
Message-ID: <1576219743.99.0.330194096424.issue39034@roundup.psfhosted.org>

New submission from Rustam Agakishiev :

Here: https://docs.python.org/3/library/asyncio-task.html it says: "To actually run a coroutine, asyncio provides three main mechanisms:" and a few pages down it gives you a fourth mechanism:

"awaitable asyncio.gather(*aws, loop=None, return_exceptions=False)
Run awaitable objects in the aws sequence concurrently."

And it really runs awaitables:

future = asyncio.gather(*awslist)  # aws are run...
...                                # some other heavy tasks
result = await future              # collect results

Shouldn't it be added to the docs?

----------
assignee: docs at python
components: Documentation
messages: 358320
nosy: agarus, docs at python
priority: normal
severity: normal
status: open
title: Documentation: Coroutines
type: enhancement
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Dec 13 05:28:51 2019
From: report at bugs.python.org (STINNER Victor)
Date: Fri, 13 Dec 2019 10:28:51 +0000
Subject: [New-bugs-announce] [issue39035] Travis CI fail on backports: pyvenv not installed
Message-ID: <1576232931.72.0.607976145841.issue39035@roundup.psfhosted.org>

New submission from STINNER Victor :

Example of failure of a backport from 3.8 to 3.7, PR 17577: https://github.com/python/cpython/pull/17577

"""
$ python --version
Python 3.6.9
$ pip --version
pip 19.3.1 from /home/travis/virtualenv/python3.6.9/lib/python3.6/site-packages/pip (python 3.6)

before_install.1
0.00s$ set -e
$ pyenv global 3.7.1
pyenv: version `3.7.1' not installed
"""

Travis CI logs: https://travis-ci.org/python/cpython/jobs/624160244
Thread on python-dev: https://mail.python.org/archives/list/python-dev at python.org/thread/YCTLWAYIC44YTVGNN4EDLWKMER2LAPDA/

----------
components: Tests
messages: 358325
nosy:
inada.naoki, pablogsal, vstinner
priority: normal
severity: normal
status: open
title: Travis CI fail on backports: pyvenv not installed
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Dec 13 07:42:53 2019
From: report at bugs.python.org (Lovi)
Date: Fri, 13 Dec 2019 12:42:53 +0000
Subject: [New-bugs-announce] [issue39036] Add center_char attribute to str type
Message-ID: <1576240973.63.0.296149851981.issue39036@roundup.psfhosted.org>

New submission from Lovi <1668151593 at qq.com>:

I think Python 3.9 needs to add a center_char attribute, meaning the center character of a string, to the str type, such that the center_char of '12345' is '3' and the center_char of 'abcd' is 'bc'.

----------
components: Windows
messages: 358328
nosy: lovi, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Add center_char attribute to str type
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Dec 13 10:19:37 2019
From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=)
Date: Fri, 13 Dec 2019 15:19:37 +0000
Subject: [New-bugs-announce] [issue39037] Wrong trial order of __exit__ and __enter__ in the with statement
Message-ID: <1576250377.83.0.345840572176.issue39037@roundup.psfhosted.org>

New submission from Géry :

>>> class A: pass
...
>>> with A(): pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: __enter__

I expected `AttributeError: __exit__`, since PEP 343 states (https://www.python.org/dev/peps/pep-0343/#specification-the-with-statement):

> The details of the above translation are intended to prescribe the exact semantics. If either of the relevant methods are not found as expected, the interpreter will raise AttributeError, in the order that they are tried (__exit__, __enter__).
and the language documentation states (https://docs.python.org/3/reference/compound_stmts.html#the-with-statement): > The execution of the with statement with one ?item? proceeds as follows: > 1. The context expression (the expression given in the with_item) is evaluated to obtain a context manager. > 2. The context manager?s __exit__() is loaded for later use. > 3. The context manager?s __enter__() method is invoked. ---------- components: Interpreter Core messages: 358333 nosy: gvanrossum, maggyero, ncoghlan priority: normal severity: normal status: open title: Wrong trial order of __exit__ and __enter__ in the with statement type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 13 10:57:25 2019 From: report at bugs.python.org (jvoisin) Date: Fri, 13 Dec 2019 15:57:25 +0000 Subject: [New-bugs-announce] [issue39038] OverflowError in tarfile.open Message-ID: <1576252645.43.0.77972989062.issue39038@roundup.psfhosted.org> New submission from jvoisin : The attached file produces the following stacktrace when opened via `tarfile.open`, on Python 3.7.5rc1: ``` $ cat test.py import sys import tarfile tarfile.open(sys.argv[1]) $ python3 test.py ./crash-83a6e7d4b810c6a0bd4fd9dfd6a0b36550034ccf Traceback (most recent call last): File "test.py", line 4, in tarfile.open(sys.argv[1]) File "/usr/lib/python3.7/tarfile.py", line 1573, in open return func(name, "r", fileobj, **kwargs) File "/usr/lib/python3.7/tarfile.py", line 1645, in gzopen t = cls.taropen(name, mode, fileobj, **kwargs) File "/usr/lib/python3.7/tarfile.py", line 1621, in taropen return cls(name, mode, fileobj, **kwargs) File "/usr/lib/python3.7/tarfile.py", line 1484, in __init__ self.firstmember = self.next() File "/usr/lib/python3.7/tarfile.py", line 2289, in next tarinfo = self.tarinfo.fromtarfile(self) File "/usr/lib/python3.7/tarfile.py", line 1097, in fromtarfile return 
obj._proc_member(tarfile) File "/usr/lib/python3.7/tarfile.py", line 1119, in _proc_member return self._proc_pax(tarfile) File "/usr/lib/python3.7/tarfile.py", line 1230, in _proc_pax match = regex.match(buf, pos) OverflowError: Python int too large to convert to C ssize ``` ---------- components: Library (Lib) files: crash-83a6e7d4b810c6a0bd4fd9dfd6a0b36550034ccf messages: 358336 nosy: jvoisin priority: normal severity: normal status: open title: OverflowError in tarfile.open type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48773/crash-83a6e7d4b810c6a0bd4fd9dfd6a0b36550034ccf _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 13 11:00:48 2019 From: report at bugs.python.org (jvoisin) Date: Fri, 13 Dec 2019 16:00:48 +0000 Subject: [New-bugs-announce] [issue39039] zlib.error with tarfile.open Message-ID: <1576252848.69.0.760468308355.issue39039@roundup.psfhosted.org> New submission from jvoisin : The attached file produces the following stacktrace when opened via `tarfile.open`, on Python 3.7.5rc1: ``` $ cat test.py import sys import tarfile tarfile.open(sys.argv[1]) $ python3 test.py ./crash-c10c9839d987fa0df6912cb4084f43f3ce08ca82 Traceback (most recent call last): File "test.py", line 4, in tarfile.open(sys.argv[1]) File "/usr/lib/python3.7/tarfile.py", line 1573, in open return func(name, "r", fileobj, **kwargs) File "/usr/lib/python3.7/tarfile.py", line 1645, in gzopen t = cls.taropen(name, mode, fileobj, **kwargs) File "/usr/lib/python3.7/tarfile.py", line 1621, in taropen return cls(name, mode, fileobj, **kwargs) File "/usr/lib/python3.7/tarfile.py", line 1484, in __init__ self.firstmember = self.next() File "/usr/lib/python3.7/tarfile.py", line 2289, in next tarinfo = self.tarinfo.fromtarfile(self) File "/usr/lib/python3.7/tarfile.py", line 1094, in fromtarfile buf = tarfile.fileobj.read(BLOCKSIZE) File "/usr/lib/python3.7/gzip.py", line 276, in read 
return self._buffer.read(size) File "/usr/lib/python3.7/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/usr/lib/python3.7/gzip.py", line 471, in read uncompress = self._decompressor.decompress(buf, size) zlib.error: Error -3 while decompressing data: invalid distances se ``` ---------- components: Library (Lib) files: crash-c10c9839d987fa0df6912cb4084f43f3ce08ca82 messages: 358337 nosy: jvoisin priority: normal severity: normal status: open title: zlib.error with tarfile.open type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48774/crash-c10c9839d987fa0df6912cb4084f43f3ce08ca82 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 13 11:59:51 2019 From: report at bugs.python.org (Manfred Kaiser) Date: Fri, 13 Dec 2019 16:59:51 +0000 Subject: [New-bugs-announce] [issue39040] Wrong filename in when mime header was too long Message-ID: <1576256391.68.0.734335189474.issue39040@roundup.psfhosted.org> New submission from Manfred Kaiser : I'm working on a mailfilter in python and used the method "get_filename" of the "EmailMessage" class. In some cases a wrong filename was returned. The reason was, that the Content-Disposition Header had a line break and the following intention was interpreted as part of the filename. After fixing this bug, I was able to get the right filename. I had to change "linesep_splitter" in "email.policy" to match the intention. 
Old value: linesep_splitter = re.compile(r'\n|\r')
New value: linesep_splitter = re.compile(r'\n\s+|\r\s+')

----------
components: email
messages: 358343
nosy: barry, mkaiser, r.david.murray
priority: normal
severity: normal
status: open
title: Wrong filename when MIME header was too long
type: behavior
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Dec 13 16:56:07 2019
From: report at bugs.python.org (Steve Dower)
Date: Fri, 13 Dec 2019 21:56:07 +0000
Subject: [New-bugs-announce] [issue39041] Support GitHub Actions in CI
Message-ID: <1576274167.03.0.917402415265.issue39041@roundup.psfhosted.org>

New submission from Steve Dower :

Enable support for GitHub Actions CI to do PR build and test runs. Once stable, we can deprecate Azure Pipelines PR builds. The only regression right now is that test results are not collected in a nice view like AP has. But I think that view is not widely used, and searching the logs on GitHub is probably good enough.

----------
assignee: steve.dower
components: Build
messages: 358361
nosy: brett.cannon, steve.dower
priority: normal
pull_requests: 17066
severity: normal
stage: patch review
status: open
title: Support GitHub Actions in CI
versions: Python 3.7, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Dec 13 17:21:22 2019
From: report at bugs.python.org (Eric Snow)
Date: Fri, 13 Dec 2019 22:21:22 +0000
Subject: [New-bugs-announce] [issue39042] Use the runtime's main thread ID in the threading module.
Message-ID: <1576275682.19.0.998306709118.issue39042@roundup.psfhosted.org>

New submission from Eric Snow :

The threading module has a "main_thread()" function that returns a Thread instance for the "main" thread.
The main thread is the one running when the runtime is initialized and has a specific role in various parts of the runtime. Currently the threading module instead uses the ID of the thread where the module is imported for the first time. Usually this isn't a problem. (perhaps only in embedding cases?) Since 3.8 we store the ID of the thread where the runtime was initialized (_PyRuntime.main_thread). By using this in the threading module we can be consistent across the runtime about what the main thread is. This is particularly significant because in 3.8 we also updated the signal module to use _PyRuntime.main_thread (instead of calling PyThread_get_thread_ident() when the module is loaded). See issue38904. We should also consider backporting this change to 3.8, to resolve the difference between the threading and signal modules. ---------- components: Library (Lib) messages: 358362 nosy: eric.snow priority: normal severity: normal stage: test needed status: open title: Use the runtime's main thread ID in the threading module. type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 13 23:43:56 2019 From: report at bugs.python.org (Lovi) Date: Sat, 14 Dec 2019 04:43:56 +0000 Subject: [New-bugs-announce] [issue39043] Add math.fib() generator Message-ID: <1576298636.18.0.950547793455.issue39043@roundup.psfhosted.org> New submission from Lovi <1668151593 at qq.com>: I think it's appropriate to add the generator fib() to the math module. With fib(), some operations will be easier. 
The generator is like this:

    def fib(count=None):
        if count is not None and not isinstance(count, int):
            raise ValueError(f"Parameter count has an unexpected type: {count.__class__.__name__}.")
        a, b = 0, 1
        while True:
            a, b = b, a + b
            if count is not None:
                if not count:
                    return
                count -= 1
            yield a

---------- components: Library (Lib) messages: 358375 nosy: lovi priority: normal severity: normal status: open title: Add math.fib() generator type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 01:34:52 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Sat, 14 Dec 2019 06:34:52 +0000 Subject: [New-bugs-announce] [issue39044] Segfault on build for the master branch Message-ID: <1576305292.12.0.476771129808.issue39044@roundup.psfhosted.org> New submission from Joannah Nanjekye : I just pulled changes from upstream and when I build with:

    ./configure --with-pydebug && make -j

I am getting a Segmentation fault:

    ./python -E -S -m sysconfig --generate-posix-vars ;\
    if test $? -ne 0 ; then \
        echo "generate-posix-vars failed" ; \
        rm -f ./pybuilddir.txt ; \
        exit 1 ; \
    fi
    CC='gcc -pthread' LDSHARED='gcc -pthread -shared ' OPT='-g -Og -Wall' _TCLTK_INCLUDES='' _TCLTK_LIBS='' ./python -E ./setup.py build
    Segmentation fault (core dumped)
    make: *** [Makefile:614: sharedmods] Error 139

I hope someone else can replicate this on:

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 19.04
Release: 19.04
Codename: disco
---------- components: Build messages: 358379 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Segfault on build for the master branch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 02:54:44 2019 From: report at bugs.python.org (Lovi) Date: Sat, 14 Dec 2019 07:54:44 +0000 Subject: [New-bugs-announce] [issue39045] Segmentation of string Message-ID: <1576310084.51.0.108674240792.issue39045@roundup.psfhosted.org> New submission from Lovi <1668151593 at qq.com>: I thought about this for a long time. I think it's necessary to add a segment method to the str type or the string module. This method would split a string into m parts and return all possible splits. For example:

segment('1234', m=3) -> [('1', '2', '34'), ('1', '23', '4'), ('12', '3', '4')]
segment('12345', m=3) -> [('1', '2', '345'), ('1', '23', '45'), ('1', '234', '5'), ('12', '3', '45'), ('12', '34', '5'), ('123', '4', '5')]

I hope this proposal can be adopted.
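The proposed behavior can be sketched in a few lines with itertools (a hypothetical helper, not an existing stdlib function): every split into m parts corresponds to choosing m-1 cut points between characters.

```python
from itertools import combinations

def segment(s, m):
    # choose m-1 cut positions among the len(s)-1 gaps between characters
    n = len(s)
    return [tuple(s[i:j] for i, j in zip((0, *cuts), (*cuts, n)))
            for cuts in combinations(range(1, n), m - 1)]

print(segment('1234', 3))
# [('1', '2', '34'), ('1', '23', '4'), ('12', '3', '4')]
```

The number of results is C(len(s)-1, m-1), so segment('12345', 3) yields the six splits listed above.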
---------- components: Library (Lib) messages: 358383 nosy: lovi priority: normal severity: normal status: open title: Segmentation of string type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 05:04:07 2019 From: report at bugs.python.org (Zac Hatfield-Dodds) Date: Sat, 14 Dec 2019 10:04:07 +0000 Subject: [New-bugs-announce] [issue39046] collections.abc.Reversible should not be a subclass of Hashable Message-ID: <1576317847.63.0.718393349876.issue39046@roundup.psfhosted.org> New submission from Zac Hatfield-Dodds :

>>> from collections.abc import Hashable, Reversible
>>> assert issubclass(Reversible, Hashable)

However, this is trivially wrong - lists are Reversible but not Hashable, and there is no reason to think that reversible objects should all be hashable. The versions of these classes in the typing module have the same problem. ---------- components: Library (Lib) messages: 358386 nosy: Zac Hatfield-Dodds priority: normal severity: normal status: open title: collections.abc.Reversible should not be a subclass of Hashable type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 12:10:08 2019 From: report at bugs.python.org (Attila Jeges) Date: Sat, 14 Dec 2019 17:10:08 +0000 Subject: [New-bugs-announce] [issue39047] TestTemporaryDirectory.test_flags fails on FreeBSD/ZFS Message-ID: <1576343408.28.0.119673970593.issue39047@roundup.psfhosted.org> New submission from Attila Jeges : When I run test_tempfile.py on FreeBSD/ZFS I get the following error: ====================================================================== ERROR: test_flags (__main__.TestTemporaryDirectory) ---------------------------------------------------------------------- Traceback (most recent call last): File
"/usr/home/attilaj/cpython/Lib/test/test_tempfile.py", line 1498, in test_flags os.chflags(os.path.join(root, name), flags) OSError: [Errno 45] Operation not supported: '/tmp/awxj9cgb/dir0/dir0/dir0/test1.txt' ---------------------------------------------------------------------- Ran 90 tests in 1.133s FAILED (errors=1, skipped=1) I think this is similar to Issue #15747. ---------- components: Tests messages: 358398 nosy: attilajeges priority: normal severity: normal status: open title: TestTemporaryDirectory.test_flags fails on FreeBSD/ZFS type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 14:58:37 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sat, 14 Dec 2019 19:58:37 +0000 Subject: [New-bugs-announce] [issue39048] Reorder the __aenter__ and __aexit__ checks for async with Message-ID: <1576353517.6.0.763131039625.issue39048@roundup.psfhosted.org> New submission from G?ry : Following https://bugs.python.org/issue27100 which did it for the with statement, what was left to do was to reorder the __aenter__ and __aexit__ method checks for the async with statement. 
I have opened a PR for this here: https://github.com/python/cpython/pull/17609 ---------- components: Interpreter Core messages: 358403 nosy: brett.cannon, maggyero, rhettinger priority: normal pull_requests: 17080 severity: normal status: open title: Reorder the __aenter__ and __aexit__ checks for async with type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 15:27:23 2019 From: report at bugs.python.org (Andrey) Date: Sat, 14 Dec 2019 20:27:23 +0000 Subject: [New-bugs-announce] [issue39049] Add "elif" to "for_stmt" and "while_stmt" Message-ID: <1576355243.3.0.84895867469.issue39049@roundup.psfhosted.org> New submission from Andrey : Add the ability to use "elif" in the for statement and the while statement, in addition to "else". Example. Now:

```python3
for i in range(j):
    ...
else:
    if i > 5:
        ...
    else:
        ...
```

Shall be:

```python3
for i in range(j):
    ...
elif i > 5:
    ...
else:
    ...
```

---------- components: Build messages: 358406 nosy: moff4 priority: normal severity: normal status: open title: Add "elif" to "for_stmt" and "while_stmt" type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 17:01:41 2019 From: report at bugs.python.org (Zackery Spytz) Date: Sat, 14 Dec 2019 22:01:41 +0000 Subject: [New-bugs-announce] [issue39050] The "Help" button in IDLE's config dialog does not work Message-ID: <1576360901.2.0.273955220755.issue39050@roundup.psfhosted.org> New submission from Zackery Spytz : When I click the button, I see a traceback.
Exception in Tkinter callback Traceback (most recent call last): File "/home/lubuntu2/cpython/Lib/tkinter/__init__.py", line 1885, in __call__ return self.func(*args) File "/home/lubuntu2/cpython/Lib/idlelib/configdialog.py", line 212, in help view_text(self, title='Help for IDLE preferences', TypeError: view_text() got an unexpected keyword argument 'text' It appears that this bug was introduced in bpo-37628 / 3221a63c692. ---------- assignee: terry.reedy components: IDLE messages: 358408 nosy: ZackerySpytz, terry.reedy priority: normal severity: normal status: open title: The "Help" button in IDLE's config dialog does not work type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 14 21:13:14 2019 From: report at bugs.python.org (Rafael Dominiquini) Date: Sun, 15 Dec 2019 02:13:14 +0000 Subject: [New-bugs-announce] [issue39051] Python not working on Windows 10 Message-ID: <1576375994.78.0.555277558727.issue39051@roundup.psfhosted.org> New submission from Rafael Dominiquini : I have had Python installed on my computer for a while now and everything worked fine. But today, I can't run it anymore: Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' Current thread 0x00007e84 (most recent call first): I have already tried to download the installer and use the "Repair" option, but even though the installation indicates that everything is fine, the error continues... Attached is the complete terminal output: https://pastebin.com/fcFZkUSV https://pastebin.com/Nx9J4fPu OS: Windows 10 Python Version: 3.8.0 (64 bits) Thanks.
---------- components: Windows messages: 358411 nosy: paul.moore, rafaeldominiquini, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python not working on Windows 10 type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 15 02:19:59 2019 From: report at bugs.python.org (chengyang) Date: Sun, 15 Dec 2019 07:19:59 +0000 Subject: [New-bugs-announce] [issue39052] import error when in python -m pdb debug mode Message-ID: <1576394399.54.0.25584896953.issue39052@roundup.psfhosted.org> New submission from chengyang : D:\data\mypython\photosort ??? 2019/09/08 15:51 . 2019/09/08 15:51 .. 2019/09/08 15:51 88 myfilesort.py 2019/09/08 15:38 220 myhome.py 2019/12/15 10:44 275 mymain_menu.py 2019/08/22 21:24 39 myphotosort.py 2019/09/08 15:51 siproject 2019/09/08 17:16 __pycache__ 4 ??? 622 ?? 4 ??? 253,973,061,632 ???? D:\data\mypython\photosort>python -m pdb myhome.py > d:\data\mypython\photosort\myhome.py(1)() -> import os (Pdb) s > d:\data\mypython\photosort\myhome.py(2)() -> import re (Pdb) s > d:\data\mypython\photosort\myhome.py(3)() -> import sys (Pdb) s > d:\data\mypython\photosort\myhome.py(4)() -> sys.path.append('d:\data\mypath\photosort') (Pdb) s > d:\data\mypython\photosort\myhome.py(5)() -> import myfilesort (Pdb) s --Call-- > (978)_find_and_load() (Pdb) ---------- components: Windows files: myfilesort.py messages: 358412 nosy: chengyang, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: import error when in python -m pdb debug mode type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48778/myfilesort.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 15 03:50:47 2019 From: report at bugs.python.org (YoSTEALTH) Date: Sun, 15 Dec 2019 08:50:47 +0000 
Subject: [New-bugs-announce] [issue39053] Hide manually raised exception formatting Message-ID: <1576399847.23.0.606573681326.issue39053@roundup.psfhosted.org> New submission from YoSTEALTH :

    class Some_Class:
        def error(self):
            if not getattr(self, 'boo', None):
                raise Exception(f'`class {self.__class__.__name__}:` raised some error!')

    something = Some_Class()
    something.error()

# This is how Error looks
# -----------------------
Traceback (most recent call last):
  File "/test.py", line 9, in <module>
    something.error()
  File "/test.py", line 5, in error
    raise Exception(f'`class {self.__class__.__name__}:` raised some error!')
Exception: `class Some_Class:` raised some error!

# This is how Error should look
# -----------------------------
Traceback (most recent call last):
  File "/test.py", line 9, in <module>
    something.error()
  File "/test.py", line 5, in error
    raise Exception(...)
Exception: `class Some_Class:` raised some error!

When a developer manually raises an error, they want the user/developer debugging it to see the final, nicely formatted error message "Exception: `class Some_Class:` raised some error!", not the ugly formatting of the error message itself, "raise Exception(f'`class {self.__class__.__name__}:` raised some error!')", which can also lead to confusion; thus it should be hidden as "raise Exception(...)". It could also be said that "raise Exception(...)" shouldn't even be shown, but rather what raises this error condition, "if not getattr(self, 'boo', None):", but this seems like more work, so I am keeping it simple by saying let's just hide the ugly formatting part.
---------- components: Interpreter Core messages: 358413 nosy: YoSTEALTH priority: normal severity: normal status: open title: Hide manually raised exception formatting versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 15 07:30:37 2019 From: report at bugs.python.org (Lovi) Date: Sun, 15 Dec 2019 12:30:37 +0000 Subject: [New-bugs-announce] [issue39054] Add a parameter to list.remove() Message-ID: <1576413037.77.0.199193038416.issue39054@roundup.psfhosted.org> New submission from Lovi <1668151593 at qq.com>: I think the list can add a parameter to remove(): remove(value, appear_time=1, /) The parameter appear_time indicates the number of times the value appears in the list. I want this effect:

>>> list1 = [1, 2, 3, 2, 1, 2, 1]
>>> list1.remove(2, 2)
>>> list1
[1, 2, 3, 1, 2, 1]
>>> list1.remove(1, 3)
>>> list1
[1, 2, 3, 1, 2]

---------- messages: 358420 nosy: lovi priority: normal severity: normal status: open title: Add a parameter to list.remove() type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 15 12:21:28 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 15 Dec 2019 17:21:28 +0000 Subject: [New-bugs-announce] [issue39055] base64.b64decode() with validate=True does not raise for a trailing \n Message-ID: <1576430488.57.0.854621145537.issue39055@roundup.psfhosted.org> New submission from Serhiy Storchaka : If validate=True is passed to base64.b64decode(), it should raise a binascii.Error if the input contains any character not from the acceptable alphabet. But it does not raise if the input ends with a single \n. It raises if the input ends with multiple \n or with any other whitespace character. Only a single \n is accepted. This is an implementation artifact.
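The anchoring quirk is easy to reproduce with a toy pattern (not the exact expression the base64 module uses): '$' also matches just before a single trailing newline, while '\Z' anchors at the true end of the string.

```python
import re

pat = r'[A-Za-z0-9+/]*={0,2}'  # toy base64-ish alphabet check

print(bool(re.match(pat + '$', 'QUJD\n')))     # True  -- the quirk: $ ignores one trailing \n
print(bool(re.match(pat + '$', 'QUJD\n\n')))   # False -- but not two
print(bool(re.match(pat + r'\Z', 'QUJD\n')))   # False -- \Z rejects the trailing \n
```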
A regular expression ending with $ is used to validate the input. But $ matches not only the end of the string; it also matches the empty string just before a trailing \n. Similar errors occur in other places as well. I'll open separate issues for the different cases. ---------- components: Library (Lib) messages: 358438 nosy: serhiy.storchaka priority: normal severity: normal status: open title: base64.b64decode() with validate=True does not raise for a trailing \n type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 15 13:43:33 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 15 Dec 2019 18:43:33 +0000 Subject: [New-bugs-announce] [issue39056] Issues with handling the -W option Message-ID: <1576435413.35.0.296920431157.issue39056@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are some issues with handling the -W option: 1. A traceback is printed for some invalid category names.
$ ./python -Wignore::0 'import warnings' failed; traceback: Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/warnings.py", line 542, in _processoptions(sys.warnoptions) File "/home/serhiy/py/cpython/Lib/warnings.py", line 208, in _processoptions _setoption(arg) File "/home/serhiy/py/cpython/Lib/warnings.py", line 224, in _setoption category = _getcategory(category) File "/home/serhiy/py/cpython/Lib/warnings.py", line 271, in _getcategory if not issubclass(cat, Warning): TypeError: issubclass() arg 1 must be a class $ ./python -Wignore::0a 'import warnings' failed; traceback: Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/warnings.py", line 542, in _processoptions(sys.warnoptions) File "/home/serhiy/py/cpython/Lib/warnings.py", line 208, in _processoptions _setoption(arg) File "/home/serhiy/py/cpython/Lib/warnings.py", line 224, in _setoption category = _getcategory(category) File "/home/serhiy/py/cpython/Lib/warnings.py", line 256, in _getcategory cat = eval(category) File "", line 1 0a ^ SyntaxError: unexpected EOF while parsing $ ./python -Wignore::= 'import warnings' failed; traceback: Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/warnings.py", line 542, in _processoptions(sys.warnoptions) File "/home/serhiy/py/cpython/Lib/warnings.py", line 208, in _processoptions _setoption(arg) File "/home/serhiy/py/cpython/Lib/warnings.py", line 224, in _setoption category = _getcategory(category) File "/home/serhiy/py/cpython/Lib/warnings.py", line 264, in _getcategory m = __import__(module, None, None, [klass]) ValueError: Empty module name In normal case Python just complains: $ ./python -Wignore::unknown Invalid -W option ignored: unknown warning category: 'unknown' 2. For non-ascii warning names Python complains about a module and strips the last character: $ ./python -Wignore::W?rning Invalid -W option ignored: invalid module name: 'W?rnin' 3. 
The re module is always imported if the -W option is used, even if it is not needed. ---------- components: Library (Lib) messages: 358439 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Issues with handling the -W option type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 15 14:45:57 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 15 Dec 2019 19:45:57 +0000 Subject: [New-bugs-announce] [issue39057] Issues with urllib.request.proxy_bypass_environment Message-ID: <1576439157.59.0.210686585011.issue39057@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are several issues with urllib.request.proxy_bypass_environment: 1. Leading dots are ignored in the proxy list, but not in the checked hostname. So ".localhost" does not match ".localhost" in the proxy list. 2. A single trailing \n in the checked hostname is ignored, so "localhost\n" passes the check if the proxy list contains "localhost". But "localhost\n\n" and "localhost " do not pass. This is an artifact of using $ in the regular expression. ---------- components: Library (Lib) messages: 358444 nosy: orsenthil, serhiy.storchaka priority: normal severity: normal status: open title: Issues with urllib.request.proxy_bypass_environment type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 15 17:16:04 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 15 Dec 2019 22:16:04 +0000 Subject: [New-bugs-announce] [issue39058] argparse should preserve argument ordering in Namespace Message-ID: <1576448164.39.0.171350779582.issue39058@roundup.psfhosted.org> New submission from Raymond Hettinger : Currently, Namespace() objects sort the attributes in the __repr__.
This is annoying because argument order matters and because everywhere else in the module we preserve order (i.e. users see help in the order that arguments are added). Note, the docs do not promise that Namespace is displayed with a sort. This is likely just an artifact of older dictionaries having arbitrary or randomised ordering.

>>> from argparse import ArgumentParser
>>> parser = ArgumentParser()
>>> _ = parser.add_argument('source')
>>> _ = parser.add_argument('destination')

# Order matters to the user inputting the arguments
# (source must go first and destination must go last)
>>> args = parser.parse_args(['input.txt', 'output.txt'])

# Order is preserved internally
>>> vars(args)
{'source': 'input.txt', 'destination': 'output.txt'}

# Despite this, the Namespace() repr alphabetizes the output
>>> args
Namespace(destination='output.txt', source='input.txt')

# Order is preserved in help()
>>> parser.parse_args(['-h'])
usage: [-h] source destination

positional arguments:
  source
  destination

optional arguments:
  -h, --help  show this help message and exit

---------- components: Library (Lib) messages: 358455 nosy: rhettinger priority: normal severity: normal status: open title: argparse should preserve argument ordering in Namespace type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 03:50:51 2019 From: report at bugs.python.org (AVicennA) Date: Mon, 16 Dec 2019 08:50:51 +0000 Subject: [New-bugs-announce] [issue39059] Getting incorrect results in rounding procedures Message-ID: <1576486251.62.0.0931327693234.issue39059@roundup.psfhosted.org> New submission from AVicennA : This is about the rounding process and getting incorrect results. The documentation states that "This is not a bug: it's a result of the fact that most decimal fractions can't be represented exactly as a float".
- https://docs.python.org/3/library/functions.html?highlight=round#round It is also related with hardware. I wrote some code parts that shows it and used decimal value as in documentation sample: ''' 2.675(4) - (4) or (3) or (2) etc. I have given range 2, and the result is influenced not just by one number after those 2 ranges, but also the another number consistently. ''' >>> round(2.675, 2) 2.67 >>> >>> round(5.765, 2) 5.76 >>> >>> round(2.6754, 2) 2.68 >>> >>> round(5.7652, 2) 5.77 ''' "format" is also not working properly. Gives incorrect results. ''' >>> format(2.675, ".2f") '2.67' >>> >>> format(2.678, ".2f") '2.68' >>> >>> '{:0.2f}'.format(2.675) '2.67' >>> >>> '{:0.2f}'.format(2.678) '2.68' ''' Because, when the decimal string is converted to a binary floating-point number, it's again replaced with a binary approximation: Whose exact value is 5.765 --> 5.76499999999999968025576890795491635799407958984375 && 2.675 --> 2.67499999999999982236431605997495353221893310546875 It means that, the 76(5) --> 5 replaced in a memory as 4.(999999999999) && 67(5) --> 5 replaced in a memory as 4.(999999999999) ''' >>> from decimal import Decimal >>> Decimal(2.675) Decimal('2.67499999999999982236431605997495353221893310546875') >>> >>> Decimal(5.765) Decimal('5.76499999999999968025576890795491635799407958984375') ''' Used float point precision(FPU) with math lib to apply a certain correct form. I propose to use some tricks. But again incorrect result in third sample: ''' >>> import math >>> math.ceil(2.675 * 100) / 100 2.68 >>> >>> print("%.2f" % (math.ceil(2.675 * 100) / 100)) 2.68 >>> math.ceil(2.673 * 100) / 100 2.68 ''' The most correct form is using by round: ''' >>> round(2.675 * 100) / 100 2.68 >>> >>> round(2.673 * 100) / 100 2.67 >>> round(2.674 * 100) / 100 2.67 >>> round(2.676 * 100) / 100 2.68 ''' In this case, whatever the range the full right result is a return. Mostly can be using in fraction side correctness. ''' >>> def my_round(val, n): ... 
return round(val * 10 ** n) / 10 ** n ... >>> my_round(2.675, 2) 2.68 >>> >>> my_round(2.676, 2) 2.68 >>> >>> my_round(2.674, 2) 2.67 >>> >>> my_round(2.673, 2) 2.67 >>> >>> my_round(2.674, 3) 2.674 >>> >>> my_round(55.37678, 3) 55.377 >>> >>> my_round(55.37678, 2) 55.38 >>> >>> my_round(55.37478, 2) 55.37 >>> >>> my_round(224.562563, 2) 224.56 >>> >>> my_round(224.562563, 3) 224.563 >>> >>> my_round(224.562563, 4) 224.5626 >>> >>> my_round(224.562563, 5) 224.56256 >>> >>> my_round(224.562563, 7) 224.562563 >>> >>> my_round(224.562563, 11) 224.562563 ''' my_round - function tested on Windows and Linux platforms(x64). This can be added in Python next releases to solve this problem which related with the IEEE 754 and PEP 754 problems. ''' ---------- assignee: docs at python components: Documentation, FreeBSD, IDLE, Interpreter Core, Library (Lib), Tests, Windows, macOS messages: 358467 nosy: AVicennA, docs at python, koobs, ned.deily, paul.moore, ronaldoussoren, steve.dower, terry.reedy, tim.golden, zach.ware priority: normal severity: normal status: open title: Getting incorrect results in rounding procedures type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 04:51:58 2019 From: report at bugs.python.org (Amit Itzkovitch) Date: Mon, 16 Dec 2019 09:51:58 +0000 Subject: [New-bugs-announce] [issue39060] asyncio.Task.print_stack doesn't print the full stack Message-ID: <1576489918.95.0.196774987509.issue39060@roundup.psfhosted.org> New submission from Amit Itzkovitch : Hi! I think I found some issue in the "print_stack()" function of asyncio.Task. When I try to print the stack of some task, I only see the first few lines of the stack. Attached an example file, that contains a recursive function that after 10 calls prints the stack of the task. 
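The attached example.py is not inline here; a minimal repro along the lines described might look like this (a hypothetical reconstruction, not the actual attachment):

```python
import asyncio

async def recurse(n):
    # after recursing n times, dump the current task's stack;
    # one would expect a recurse frame for every pending call
    if n == 0:
        asyncio.current_task().print_stack()
        return
    await recurse(n - 1)

asyncio.run(recurse(10))
```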
You can see that the stack it prints only shows the first call of the recursive function, although you would expect to see it 10 times. Tested on Python 3.7 and 3.8 on both macOS and CentOS; the result is the same. Your help will be appreciated very much! :) ---------- components: asyncio files: example.py messages: 358468 nosy: amit7itz, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio.Task.print_stack doesn't print the full stack type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48779/example.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 05:01:34 2019 From: report at bugs.python.org (Maxime Istasse) Date: Mon, 16 Dec 2019 10:01:34 +0000 Subject: [New-bugs-announce] [issue39061] Garbage Collection optimizations cause "memory leak" Message-ID: <1576490494.99.0.0417763312986.issue39061@roundup.psfhosted.org> New submission from Maxime Istasse : When working on a self-referencing object in the young generation and the middle-generation collection kicks in, that object is directly moved to the old generation. (if I understood this well: https://github.com/python/cpython/blob/d68b592dd67cb87c4fa862a8d3b3fd0a7d05e113/Modules/gcmodule.c#L1192) Then, it won't be freed until the old generation is collected, which happens much later. (because of this: https://github.com/python/cpython/blob/d68b592dd67cb87c4fa862a8d3b3fd0a7d05e113/Modules/gcmodule.c#L1388) This can cause huge memory leaks if the self-referencing object occupies a lot of RAM, which is to be expected. This is of course the kind of problem that I expect with garbage collection with bad parameters. However, I also expected that playing with threshold0 could have been sufficient to solve it.
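For reference, collection cadence is driven by the generation thresholds, and a full collection is what reclaims a cycle that has been promoted; a small sketch:

```python
import gc

print(gc.get_threshold())  # (700, 10, 10) by default in CPython

class Cycle:
    def __init__(self):
        self.ref = self  # self-referencing: only the cyclic GC can free it

obj = Cycle()
del obj
print(gc.collect() >= 1)  # True -- a full collection finds the unreachable cycle
```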
However, the fact that we move the object to the old generation every time the middle collection kicks in forces the problem to recur once in a while, in the end reaching very high memory consumption. I think the best and simplest solution would be to move the objects one generation at a time. This would keep heavy but short-lived objects from making it to the old generation. ---------- components: Interpreter Core files: late_gc.py messages: 358469 nosy: mistasse priority: normal severity: normal status: open title: Garbage Collection optimizations cause "memory leak" type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file48780/late_gc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 05:43:36 2019 From: report at bugs.python.org (jvoisin) Date: Mon, 16 Dec 2019 10:43:36 +0000 Subject: [New-bugs-announce] [issue39062] ValueError in TarFile.getmembers Message-ID: <1576493016.16.0.2184811666.issue39062@roundup.psfhosted.org> New submission from jvoisin : The attached file produces the following stacktrace when opened via `tarfile.open` and iterated with `TarFile.getmembers`, on Python 3.7.5rc1:

```
$ cat tarrepro.py
import tarfile
import sys

with tarfile.open(sys.argv[1]) as t:
    for member in t.getmembers():
        pass
```

```
$ python3 tarrepro.py crash-7221297307ab37ac87be6ea6dd9b28d4d453c557aa3da8a2138ab98e015cd42a
Traceback (most recent call last):
  File "tarrepro.py", line 5, in
    for member in t.getmembers():
  File "/usr/lib/python3.7/tarfile.py", line 1763, in getmembers
    self._load()  # all members, we first have to
  File "/usr/lib/python3.7/tarfile.py", line 2350, in _load
    tarinfo = self.next()
  File "/usr/lib/python3.7/tarfile.py", line 2281, in next
    self.fileobj.seek(self.offset - 1)
ValueError: cannot fit 'int' into an offset-sized integer
```

This file isn't a valid tar file; it was created by a fuzzer.
---------- components: Library (Lib) files: crash-7221297307ab37ac87be6ea6dd9b28d4d453c557aa3da8a2138ab98e015cd42a messages: 358472 nosy: jvoisin priority: normal severity: normal status: open title: ValueError in TarFile.getmembers type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48781/crash-7221297307ab37ac87be6ea6dd9b28d4d453c557aa3da8a2138ab98e015cd42a _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 07:27:44 2019 From: report at bugs.python.org (Ramon Medeiros) Date: Mon, 16 Dec 2019 12:27:44 +0000 Subject: [New-bugs-announce] [issue39063] Format string does not work with "in" statement Message-ID: <1576499264.15.0.522320667153.issue39063@roundup.psfhosted.org> New submission from Ramon Medeiros : I tried to use the "in" statement to check if a string exists in an array and it failed when using a format string. How to reproduce: ~/ ipython Python 3.7.4 (default, Oct 12 2019, 18:55:28) Type 'copyright', 'credits' or 'license' for more information IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.
In [2]: "a" in ["a", "b"]
Out[2]: True

In [4]: z = "a"

In [5]: f"{z}" in ["a", "b"]
Out[5]: True

In [6]: z = "b"

In [7]: f"{z}" in ["a", "b"]
Out[7]: True

---------- components: 2to3 (2.x to 3.x conversion tool) messages: 358479 nosy: ramon.rnm at gmail.com priority: normal severity: normal status: open title: Format string does not work with "in" statement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 07:58:42 2019 From: report at bugs.python.org (jvoisin) Date: Mon, 16 Dec 2019 12:58:42 +0000 Subject: [New-bugs-announce] [issue39064] ValueError in zipfile.ZipFile Message-ID: <1576501122.43.0.655067405413.issue39064@roundup.psfhosted.org> New submission from jvoisin : The attached file produces the following stacktrace when opened via `zipfile.ZipFile`, on Python 3.7.5rc1:

```
$ cat ziprepro.py
import zipfile
import sys

zipfile.ZipFile(sys.argv[1])
```

```
$ python3 ziprepro.py crash-4da08e9ababa495ac51ecad588fd61081a66b5bb6e7a0e791f44907fa274ec62
Traceback (most recent call last):
  File "ziprepro.py", line 4, in <module>
    zipfile.ZipFile(sys.argv[1])
  File "/usr/lib/python3.7/zipfile.py", line 1225, in __init__
    self._RealGetContents()
  File "/usr/lib/python3.7/zipfile.py", line 1310, in _RealGetContents
    fp.seek(self.start_dir, 0)
ValueError: cannot fit 'int' into an offset-sized integer
```

The ValueError exception isn't documented as a possible exception when using zipfile.ZipFile ( https://docs.python.org/3/library/zipfile.html ).
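A caller that has to survive fuzzed input can fold the undocumented ValueError into the documented BadZipFile for now; a hedged sketch (the helper name is mine, not a stdlib API):

```python
import zipfile

def open_zip_strict(path):
    """Open a zip archive, folding the undocumented ValueError raised
    for some malformed central directories into BadZipFile."""
    try:
        return zipfile.ZipFile(path)
    except ValueError as exc:  # e.g. "cannot fit 'int' into an offset-sized integer"
        raise zipfile.BadZipFile(str(exc)) from exc
```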
---------- components: Library (Lib) files: crash-4da08e9ababa495ac51ecad588fd61081a66b5bb6e7a0e791f44907fa274ec62 messages: 358484 nosy: jvoisin priority: normal severity: normal status: open title: ValueError in zipfile.ZipFile type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48782/crash-4da08e9ababa495ac51ecad588fd61081a66b5bb6e7a0e791f44907fa274ec62 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 08:07:11 2019 From: report at bugs.python.org (jvoisin) Date: Mon, 16 Dec 2019 13:07:11 +0000 Subject: [New-bugs-announce] [issue39065] OSError in TarFile.getmembers() Message-ID: <1576501631.49.0.641796091958.issue39065@roundup.psfhosted.org> New submission from jvoisin : The attached file produces the following stacktrace when opened via `tarfile.open` and iterated with `TarFile.getmembers`, on Python 3.7.5rc1:

```
$ cat tarrepro.py
import tarfile
import sys

with tarfile.open(sys.argv[1]) as t:
    for member in t.getmembers():
        pass
```

```
$ python3 tarrepro.py crash-462a00f845e737bff6df2fe6467fc7cdd4c39cd8e27ef1d3011ec68a9808ca8e
Traceback (most recent call last):
  File "tarrepro.py", line 5, in <module>
    for member in t.getmembers():
  File "/usr/lib/python3.7/tarfile.py", line 1763, in getmembers
    self._load()        # all members, we first have to
  File "/usr/lib/python3.7/tarfile.py", line 2350, in _load
    tarinfo = self.next()
  File "/usr/lib/python3.7/tarfile.py", line 2281, in next
    self.fileobj.seek(self.offset - 1)
  File "/usr/lib/python3.7/gzip.py", line 368, in seek
    return self._buffer.seek(offset, whence)
  File "/usr/lib/python3.7/_compression.py", line 143, in seek
    data = self.read(min(io.DEFAULT_BUFFER_SIZE, offset))
  File "/usr/lib/python3.7/gzip.py", line 454, in read
    self._read_eof()
  File "/usr/lib/python3.7/gzip.py", line 501, in _read_eof
    hex(self._crc)))
OSError: CRC check failed 0x21e25017 != 0x7c839e8b
```

The OSError exception isn't documented as a possible
exception when using TarFile.getmembers ( https://docs.python.org/3/library/tarfile.html ). ---------- components: Library (Lib) files: crash-462a00f845e737bff6df2fe6467fc7cdd4c39cd8e27ef1d3011ec68a9808ca8e messages: 358485 nosy: jvoisin priority: normal severity: normal status: open title: OSError in TarFile.getmembers() type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48783/crash-462a00f845e737bff6df2fe6467fc7cdd4c39cd8e27ef1d3011ec68a9808ca8e _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 08:52:16 2019 From: report at bugs.python.org (Ben Boeckel) Date: Mon, 16 Dec 2019 13:52:16 +0000 Subject: [New-bugs-announce] [issue39066] Expose SOABI setting in the header Message-ID: <1576504336.84.0.358536323475.issue39066@roundup.psfhosted.org> New submission from Ben Boeckel : Currently, the SOABI suffix is only available by running the Python interpreter to ask `sysconfig` about the setting. This complicates cross compilation because the target platform's Python may not be runnable on the build platform. Exposing this in the header would allow for build processes to know what suffix to add to modules without having to run the interpreter. 
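What "asking the interpreter" looks like today, for context; the example values in the comments are illustrative only:

```python
import sysconfig

# The ABI tag and extension-module suffix come from the running
# interpreter's build-time configuration, which is what makes cross
# compilation awkward: the target interpreter may not run on the
# build machine.
soabi = sysconfig.get_config_var("SOABI")            # e.g. 'cpython-37m-x86_64-linux-gnu'
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")  # e.g. '.cpython-37m-x86_64-linux-gnu.so'
print(soabi, ext_suffix)
```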
---------- components: C API messages: 358489 nosy: mathstuf priority: normal severity: normal status: open title: Expose SOABI setting in the header type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 09:21:45 2019 From: report at bugs.python.org (jvoisin) Date: Mon, 16 Dec 2019 14:21:45 +0000 Subject: [New-bugs-announce] [issue39067] EOFError in tarfile.open Message-ID: <1576506105.27.0.112129461214.issue39067@roundup.psfhosted.org> New submission from jvoisin : The attached file produces the following stacktrace when opened via `tarfile.open`, on Python 3.7.5rc1:

```
$ cat tarrepro.py
import tarfile
import sys

with tarfile.open(sys.argv[1], errorlevel=2) as t:
    for member in t.getmembers():
        pass
$
```

```
$ python3 tarrepro.py crash-f4032ed3c7c2ae59a8f4424e0e73ce8b11ad3ef90155b008968f5b1b08499bc4
Traceback (most recent call last):
  File "tarrepro.py", line 4, in <module>
    with tarfile.open(sys.argv[1], errorlevel=2) as t:
  File "/usr/lib/python3.7/tarfile.py", line 1574, in open
    return func(name, "r", fileobj, **kwargs)
  File "/usr/lib/python3.7/tarfile.py", line 1646, in gzopen
    t = cls.taropen(name, mode, fileobj, **kwargs)
  File "/usr/lib/python3.7/tarfile.py", line 1622, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python3.7/tarfile.py", line 1485, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python3.7/tarfile.py", line 2290, in next
    tarinfo = self.tarinfo.fromtarfile(self)
  File "/usr/lib/python3.7/tarfile.py", line 1094, in fromtarfile
    buf = tarfile.fileobj.read(BLOCKSIZE)
  File "/usr/lib/python3.7/gzip.py", line 276, in read
    return self._buffer.read(size)
  File "/usr/lib/python3.7/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
  File "/usr/lib/python3.7/gzip.py", line 463, in read
    if not self._read_gzip_header():
  File "/usr/lib/python3.7/gzip.py", line 421, in _read_gzip_header
    self._read_exact(extra_len)
  File "/usr/lib/python3.7/gzip.py", line 400, in _read_exact
    raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
```

---------- components: Library (Lib) files: crash-f4032ed3c7c2ae59a8f4424e0e73ce8b11ad3ef90155b008968f5b1b08499bc4 messages: 358490 nosy: jvoisin priority: normal severity: normal status: open title: EOFError in tarfile.open type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48784/crash-f4032ed3c7c2ae59a8f4424e0e73ce8b11ad3ef90155b008968f5b1b08499bc4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 12:55:04 2019 From: report at bugs.python.org (Brandon Stansbury) Date: Mon, 16 Dec 2019 17:55:04 +0000 Subject: [New-bugs-announce] [issue39068] Base 85 encoding initialization race condition Message-ID: <1576518904.56.0.627490233403.issue39068@roundup.psfhosted.org> New submission from Brandon Stansbury : Under multi-threading scenarios a race condition may occur where a thread sees an initialized `_b85chars` table but an uninitialized `_b85chars2` table due to the guard only checking the first table.
This causes an exception like:

```
  File "/usr/lib/python3.6/base64.py", line 434, in b85encode
    return _85encode(b, _b85chars, _b85chars2, pad)
  File "/usr/lib/python3.6/base64.py", line 294, in _85encode
    for word in words]
  File "/usr/lib/python3.6/base64.py", line 294, in <listcomp>
    for word in words]
TypeError: 'NoneType' object is not subscriptable
```

---------- components: Library (Lib) messages: 358495 nosy: drmonkeysee priority: normal pull_requests: 17096 severity: normal status: open title: Base 85 encoding initialization race condition type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 13:12:54 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 16 Dec 2019 18:12:54 +0000 Subject: [New-bugs-announce] [issue39069] Move ast.unparse() function to a different module Message-ID: <1576519974.46.0.601705000564.issue39069@roundup.psfhosted.org> New submission from STINNER Victor : Pablo Galindo Salgado recently moved Tools/parser/unparse.py to an ast.unparse() function. Pablo made a change using functools. The additional "import functools" made "import ast" slower and so Pablo reverted his change: * https://github.com/python/cpython/pull/17376 * https://bugs.python.org/issue38870 The question of using contextlib comes back in another ast.unparse change to get @contextlib.contextmanager: * https://github.com/python/cpython/pull/17377#discussion_r350239415 On the same PR, I also proposed to use import enum: * https://github.com/python/cpython/pull/17377#discussion_r357921289 There are different options to not impact "import ast" performance: * Move ast.unparse() code into a new module * Write the code so imports can be done lazily "from ast import unparse" or "ast.unparse()" can be kept using a private _ast_unparse module and adding a __getattr__() function to Lib/ast.py to lazily bind _ast_unparse.unparse() to ast.unparse().
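The module-__getattr__ option (PEP 562) can be sketched outside Lib/ast.py; here json.dumps stands in for the hypothetical _ast_unparse.unparse, since only the lazy-binding mechanism matters:

```python
import sys
import types

lazy = types.ModuleType("lazy_ast_demo")

def _module_getattr(name):
    # Called only when normal attribute lookup on the module fails.
    if name == "unparse":
        from json import dumps            # deferred import, paid only on first access
        setattr(lazy, "unparse", dumps)   # cache so __getattr__ is not hit again
        return dumps
    raise AttributeError("module 'lazy_ast_demo' has no attribute %r" % (name,))

lazy.__getattr__ = _module_getattr
sys.modules["lazy_ast_demo"] = lazy
```

With this shape, "import ast" would pay nothing for functools/contextlib/enum; the cost moves to the first attribute access.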
Other options: * Avoid cool enum and functools modules, and use simpler Python code (that makes me sad, but it's ok) * Accept making "import ast" slower * Remove ast.unparse(): I don't think that anyone wants this, ast.unparse() was approved on python-dev. ---------- components: Library (Lib) messages: 358496 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: Move ast.unparse() function to a different module versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 16 18:42:54 2019 From: report at bugs.python.org (tuijatuulia) Date: Mon, 16 Dec 2019 23:42:54 +0000 Subject: [New-bugs-announce] [issue39070] Uninstalling 3.8.0 fails but it says it succeeds.. Message-ID: <1576539774.23.0.330588412038.issue39070@roundup.psfhosted.org> New submission from tuijatuulia : I installed 3.8.0 on Windows 10 without problems, using a Windows user that has no admin rights - the system asked for admin credentials. Then I wanted to uninstall it because I had installed the wrong version (32-bit when I wanted 64-bit), but uninstalling did nothing: it asked for the admin login and ended on the "successful" screen, although far too quickly, and in fact it removed nothing. Only after I gave this same Windows user admin rights to the computer could I uninstall 3.8.0. ---------- components: Windows messages: 358526 nosy: paul.moore, steve.dower, tim.golden, tuijatuulia priority: normal severity: normal status: open title: Uninstalling 3.8.0 fails but it says it succeeds..
type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 17 03:32:29 2019 From: report at bugs.python.org (Manfred Kaiser) Date: Tue, 17 Dec 2019 08:32:29 +0000 Subject: [New-bugs-announce] [issue39071] email.parser.BytesParser - parse and parsebytes work not equivalent Message-ID: <1576571549.52.0.292971717442.issue39071@roundup.psfhosted.org> New submission from Manfred Kaiser : I used email.parser.BytesParser for parsing mails. In one program I used parse(), because the email was stored in a file. In a second program the email was stored in memory as a bytes object. I created hash values from each part and compared them, to check whether a part is already known to my programs. This works for attachments, but not for HTML and plain-text parts. Documentation for parsebytes: Similar to the parse() method, except it takes a bytes-like object instead of a file-like object. Calling this method on a bytes-like object is equivalent to wrapping bytes in a BytesIO instance first and calling parse(). When I read the documentation, I expected that both methods would produce the same output. The test mail contains 2 MIME parts, one with HTML and one with plain text. The parse method with a file and the parse method with bytes data, wrapped in a BytesIO, produce the same hashes. The parsebytes method produces different hashes. Output of my test program:

MD5 sums with parsebytes with bytes data
3f4ee7303378b62f723a8d958797507a
45c72465b931d32c7e700d2dd96f8383
------------------------
MD5 sums with parse and BytesIO with bytes data
fb0599d92750b72c25923139670e5127
9a54b64425b9003a9e6bf199ab6ba603
------------------------
MD5 sums with parse from file
fb0599d92750b72c25923139670e5127
9a54b64425b9003a9e6bf199ab6ba603

Is this expected behavior or is this an error?
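The comparison described above reduces to a small harness (the message and helper names are mine; this minimal single-part message with "\n" line endings hashes identically through both paths, so it does not by itself reproduce the mismatch):

```python
import hashlib
import io
from email import policy
from email.parser import BytesParser

raw = (b"From: a@example.com\n"
       b"Subject: test\n"
       b"\n"
       b"body line\n")

def part_digests(msg):
    # One MD5 per leaf part, mirroring the reporter's dedup scheme.
    return [hashlib.md5(part.get_payload(decode=True) or b"").hexdigest()
            for part in msg.walk() if not part.is_multipart()]

parser = BytesParser(policy=policy.default)
digests_parsebytes = part_digests(parser.parsebytes(raw))
digests_parse = part_digests(parser.parse(io.BytesIO(raw)))
assert digests_parsebytes == digests_parse
```

A likely suspect for the divergence with real text parts is line-ending handling: parse() reads through a TextIOWrapper, while parsebytes() decodes the bytes directly, so CRLF sequences may be treated differently between the two paths.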
---------- components: email files: test.eml messages: 358533 nosy: barry, mkaiser, r.david.murray priority: normal severity: normal status: open title: email.parser.BytesParser - parse and parsebytes work not equivalent versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48785/test.eml _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 17 03:52:32 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 17 Dec 2019 08:52:32 +0000 Subject: [New-bugs-announce] [issue39072] Azure Pipelines: git clone failed with: OpenSSL SSL_read: Connection was reset Message-ID: <1576572752.23.0.326503724405.issue39072@roundup.psfhosted.org> New submission from STINNER Victor : On https://github.com/python/cpython/pull/17612/ the Windows x86 job failed on git clone: https://github.com/python/cpython/pull/17612/checks?check_run_id=352102371 Logs: 2019-12-17T08:12:13.9180842Z ##[section]Starting: Request a runner to run this job 2019-12-17T08:12:14.0771595Z Requesting a hosted runner in current repository's account/organization with labels: 'windows-latest', require runner match: True 2019-12-17T08:12:14.1721951Z Labels matched hosted runners has been found, waiting for one of them get assigned for this job. 2019-12-17T08:12:14.1872022Z ##[section]Finishing: Request a runner to run this job 2019-12-17T08:12:23.3365552Z Current runner version: '2.162.0' 2019-12-17T08:12:23.3366903Z Prepare workflow directory 2019-12-17T08:12:23.6967390Z Prepare all required actions 2019-12-17T08:12:23.7007224Z Download action repository 'actions/checkout at v1' 2019-12-17T08:12:27.0952634Z ##[group]Run actions/checkout at v1 2019-12-17T08:12:27.0953142Z with: 2019-12-17T08:12:27.0953435Z clean: true 2019-12-17T08:12:27.0953668Z ##[endgroup] 2019-12-17T08:12:28.4859915Z Added matchers: 'checkout-git'. 
Problem matchers scan action output for known warning or error strings and report these inline. 2019-12-17T08:12:28.4860923Z Syncing repository: python/cpython 2019-12-17T08:12:28.4869958Z ##[command]git version 2019-12-17T08:12:28.4952567Z git version 2.24.0.windows.2 2019-12-17T08:12:28.4996331Z ##[command]git lfs version 2019-12-17T08:12:34.2013921Z git-lfs/2.9.0 (GitHub; windows amd64; go 1.12.7; git 8ab05aa7) 2019-12-17T08:12:34.2017699Z ##[command]git init "d:\a\cpython\cpython" 2019-12-17T08:12:34.2020576Z Initialized empty Git repository in d:/a/cpython/cpython/.git/ 2019-12-17T08:12:34.2024465Z ##[command]git remote add origin https://github.com/python/cpython 2019-12-17T08:12:34.2027306Z ##[command]git config gc.auto 0 2019-12-17T08:12:34.2029544Z ##[command]git config --get-all http.https://github.com/python/cpython.extraheader 2019-12-17T08:12:34.2031492Z ##[command]git config --get-all http.proxy 2019-12-17T08:12:34.2038718Z ##[command]git -c http.extraheader="AUTHORIZATION: basic ***" fetch --tags --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/pull/17612/merge:refs/remotes/pull/17612/merge 2019-12-17T08:14:45.6102189Z remote: Enumerating objects: 778568 2019-12-17T08:14:45.6110192Z remote: Enumerating objects: 36, done. 
2019-12-17T08:14:45.6114528Z remote: Counting objects: 2% (1/36) 2019-12-17T08:14:45.6114769Z remote: Counting objects: 5% (2/36) 2019-12-17T08:14:45.6114916Z remote: Counting objects: 8% (3/36) 2019-12-17T08:14:45.6115364Z remote: Counting objects: 11% (4/36) 2019-12-17T08:14:45.6115460Z remote: Counting objects: 13% (5/36) 2019-12-17T08:14:45.6115597Z remote: Counting objects: 16% (6/36) 2019-12-17T08:14:45.6115773Z remote: Counting objects: 19% (7/36) 2019-12-17T08:14:45.6115911Z remote: Counting objects: 22% (8/36) 2019-12-17T08:14:45.6116006Z remote: Counting objects: 25% (9/36) 2019-12-17T08:14:45.6116140Z remote: Counting objects: 27% (10/36) 2019-12-17T08:14:45.6116276Z remote: Counting objects: 30% (11/36) 2019-12-17T08:14:45.6116410Z remote: Counting objects: 33% (12/36) 2019-12-17T08:14:45.6116505Z remote: Counting objects: 36% (13/36) 2019-12-17T08:14:45.6117519Z remote: Counting objects: 38% (14/36) 2019-12-17T08:14:45.6117673Z remote: Counting objects: 41% (15/36) 2019-12-17T08:14:45.6117797Z remote: Counting objects: 44% (16/36) 2019-12-17T08:14:45.6117880Z remote: Counting objects: 47% (17/36) 2019-12-17T08:14:45.6118018Z remote: Counting objects: 50% (18/36) 2019-12-17T08:14:45.6118143Z remote: Counting objects: 52% (19/36) 2019-12-17T08:14:45.6118746Z remote: Counting objects: 55% (20/36) 2019-12-17T08:14:45.6118837Z remote: Counting objects: 58% (21/36) 2019-12-17T08:14:45.6118967Z remote: Counting objects: 61% (22/36) 2019-12-17T08:14:45.6119092Z remote: Counting objects: 63% (23/36) 2019-12-17T08:14:45.6119213Z remote: Counting objects: 66% (24/36) 2019-12-17T08:14:45.6119467Z remote: Counting objects: 69% (25/36) 2019-12-17T08:14:45.6119944Z remote: Counting objects: 72% (26/36) 2019-12-17T08:14:45.6120097Z remote: Counting objects: 75% (27/36) 2019-12-17T08:14:45.6120223Z remote: Counting objects: 77% (28/36) 2019-12-17T08:14:45.6120345Z remote: Counting objects: 80% (29/36) 2019-12-17T08:14:45.6120429Z remote: Counting objects: 83% (30/36) 
2019-12-17T08:14:45.6122262Z remote: Counting objects: 86% (31/36) 2019-12-17T08:14:45.6122487Z remote: Counting objects: 88% (32/36) 2019-12-17T08:14:45.6123370Z remote: Counting objects: 91% (33/36) 2019-12-17T08:14:45.6123564Z remote: Counting objects: 94% (34/36) 2019-12-17T08:14:45.6123649Z remote: Counting objects: 97% (35/36) 2019-12-17T08:14:45.6123775Z remote: Counting objects: 100% (36/36) 2019-12-17T08:14:45.6124092Z remote: Counting objects: 100% (36/36), done. 2019-12-17T08:14:45.6124450Z remote: Compressing objects: 2% (1/36) 2019-12-17T08:14:45.6124550Z remote: Compressing objects: 5% (2/36) 2019-12-17T08:14:45.6124688Z remote: Compressing objects: 8% (3/36) 2019-12-17T08:14:45.6124824Z remote: Compressing objects: 11% (4/36) 2019-12-17T08:14:45.6124959Z remote: Compressing objects: 13% (5/36) 2019-12-17T08:14:45.6125052Z remote: Compressing objects: 16% (6/36) 2019-12-17T08:14:45.6125204Z remote: Compressing objects: 19% (7/36) 2019-12-17T08:14:45.6125342Z remote: Compressing objects: 22% (8/36) 2019-12-17T08:14:45.6125484Z remote: Compressing objects: 25% (9/36) 2019-12-17T08:14:45.6125579Z remote: Compressing objects: 27% (10/36) 2019-12-17T08:14:45.6125718Z remote: Compressing objects: 30% (11/36) 2019-12-17T08:14:45.6125852Z remote: Compressing objects: 33% (12/36) 2019-12-17T08:14:45.6125989Z remote: Compressing objects: 36% (13/36) 2019-12-17T08:14:45.6126083Z remote: Compressing objects: 38% (14/36) 2019-12-17T08:14:45.6126220Z remote: Compressing objects: 41% (15/36) 2019-12-17T08:14:45.6126355Z remote: Compressing objects: 44% (16/36) 2019-12-17T08:14:45.6126489Z remote: Compressing objects: 47% (17/36) 2019-12-17T08:14:45.6126961Z remote: Compressing objects: 50% (18/36) 2019-12-17T08:14:45.6127119Z remote: Compressing objects: 52% (19/36) 2019-12-17T08:14:45.6127262Z remote: Compressing objects: 55% (20/36) 2019-12-17T08:14:45.6127558Z remote: Compressing objects: 58% (21/36) 2019-12-17T08:14:45.6127653Z remote: Compressing objects: 61% 
(22/36) 2019-12-17T08:14:45.6127935Z remote: Compressing objects: 63% (23/36) 2019-12-17T08:14:45.6128056Z remote: Compressing objects: 66% (24/36) 2019-12-17T08:14:45.6128177Z remote: Compressing objects: 69% (25/36) 2019-12-17T08:14:45.6128261Z remote: Compressing objects: 72% (26/36) 2019-12-17T08:14:45.6128388Z remote: Compressing objects: 75% (27/36) 2019-12-17T08:14:45.6128509Z remote: Compressing objects: 77% (28/36) 2019-12-17T08:14:45.6128628Z remote: Compressing objects: 80% (29/36) 2019-12-17T08:14:45.6128712Z remote: Compressing objects: 83% (30/36) 2019-12-17T08:14:45.6128834Z remote: Compressing objects: 86% (31/36) 2019-12-17T08:14:45.6128958Z remote: Compressing objects: 88% (32/36) 2019-12-17T08:14:45.6129085Z remote: Compressing objects: 91% (33/36) 2019-12-17T08:14:45.6129169Z remote: Compressing objects: 94% (34/36) 2019-12-17T08:14:45.6129299Z remote: Compressing objects: 97% (35/36) 2019-12-17T08:14:45.6129422Z remote: Compressing objects: 100% (36/36) 2019-12-17T08:14:45.6129546Z remote: Compressing objects: 100% (36/36), done. 
2019-12-17T08:14:46.6103415Z Receiving objects: 0% (1/778604) 2019-12-17T08:14:47.9351300Z Receiving objects: 0% (66/778604), 28.00 KiB | 33.00 KiB/s 2019-12-17T08:14:54.9869233Z Receiving objects: 0% (111/778604), 60.00 KiB | 18.00 KiB/s 2019-12-17T08:14:56.9150492Z Receiving objects: 0% (168/778604), 92.00 KiB | 8.00 KiB/s 2019-12-17T08:14:57.1452937Z Receiving objects: 0% (239/778604), 132.00 KiB | 10.00 KiB/s 2019-12-17T08:14:58.1193320Z Receiving objects: 0% (249/778604), 132.00 KiB | 10.00 KiB/s 2019-12-17T08:15:01.8268616Z Receiving objects: 0% (293/778604), 148.00 KiB | 11.00 KiB/s 2019-12-17T08:15:02.9750358Z Receiving objects: 0% (304/778604), 172.00 KiB | 9.00 KiB/s 2019-12-17T08:15:04.3532052Z Receiving objects: 0% (332/778604), 188.00 KiB | 10.00 KiB/s 2019-12-17T08:15:05.3109995Z Receiving objects: 0% (347/778604), 196.00 KiB | 9.00 KiB/s 2019-12-17T08:15:06.6874725Z Receiving objects: 0% (409/778604), 228.00 KiB | 10.00 KiB/s 2019-12-17T08:15:07.0736627Z Receiving objects: 0% (425/778604), 236.00 KiB | 9.00 KiB/s 2019-12-17T08:15:08.0524512Z Receiving objects: 0% (644/778604), 236.00 KiB | 9.00 KiB/s 2019-12-17T08:15:09.8870060Z Receiving objects: 0% (712/778604), 364.00 KiB | 15.00 KiB/s 2019-12-17T08:15:11.6491806Z Receiving objects: 0% (751/778604), 404.00 KiB | 20.00 KiB/s 2019-12-17T08:15:14.9081539Z Receiving objects: 0% (765/778604), 412.00 KiB | 19.00 KiB/s 2019-12-17T08:15:28.7720884Z Receiving objects: 0% (779/778604), 420.00 KiB | 18.00 KiB/s 2019-12-17T08:18:54.7618604Z Receiving objects: 0% (837/778604), 452.00 KiB | 10.00 KiB/s 2019-12-17T08:18:54.7729845Z Receiving objects: 0% (849/778604), 460.00 KiB | 1024 bytes/s 2019-12-17T08:18:54.7730362Z Receiving objects: 0% (878/778604), 460.00 KiB | 1024 bytes/s 2019-12-17T08:18:54.7783522Z error: RPC failed; curl 56 OpenSSL SSL_read: No error 2019-12-17T08:18:54.7792069Z ##[error]fatal: the remote end hung up unexpectedly 2019-12-17T08:18:54.7801052Z ##[error]fatal: early EOF 
2019-12-17T08:18:54.7802004Z ##[error]fatal: index-pack failed 2019-12-17T08:18:54.7815117Z ##[warning]Git fetch failed with exit code 128, back off 1.361 seconds before retry. 2019-12-17T08:18:56.1289014Z ##[command]git -c http.extraheader="AUTHORIZATION: basic ***" fetch --tags --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/pull/17612/merge:refs/remotes/pull/17612/merge 2019-12-17T08:22:39.0654492Z ##[error]fatal: unable to access 'https://github.com/python/cpython/': OpenSSL SSL_read: Connection was reset 2019-12-17T08:22:39.0708951Z ##[warning]Git fetch failed with exit code 128, back off 4.414 seconds before retry. 2019-12-17T08:22:43.4941710Z ##[command]git -c http.extraheader="AUTHORIZATION: basic ***" fetch --tags --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/pull/17612/merge:refs/remotes/pull/17612/merge 2019-12-17T08:23:40.2697794Z remote: Enumerating objects: 36, done. 2019-12-17T08:23:40.2698302Z remote: Counting objects: 2% (1/36) 2019-12-17T08:23:40.2699155Z remote: Counting objects: 5% (2/36) 2019-12-17T08:23:40.2699929Z remote: Counting objects: 8% (3/36) 2019-12-17T08:23:40.2700026Z remote: Counting objects: 11% (4/36) 2019-12-17T08:23:40.2700198Z remote: Counting objects: 13% (5/36) 2019-12-17T08:23:40.2700330Z remote: Counting objects: 16% (6/36) 2019-12-17T08:23:40.2700464Z remote: Counting objects: 19% (7/36) 2019-12-17T08:23:40.2700684Z remote: Counting objects: 22% (8/36) 2019-12-17T08:23:40.2700890Z remote: Counting objects: 25% (9/36) 2019-12-17T08:23:40.2701056Z remote: Counting objects: 27% (10/36) 2019-12-17T08:23:40.2701241Z remote: Counting objects: 30% (11/36) 2019-12-17T08:23:40.2701423Z remote: Counting objects: 33% (12/36) 2019-12-17T08:23:40.2708741Z remote: Counting objects: 36% (13/36) 2019-12-17T08:23:40.2709004Z remote: Counting objects: 38% (14/36) 2019-12-17T08:23:40.2709281Z remote: Counting objects: 41% (15/36) 
2019-12-17T08:23:40.2713438Z remote: Counting objects: 44% (16/36) 2019-12-17T08:23:40.2713793Z remote: Counting objects: 47% (17/36) 2019-12-17T08:23:40.2714090Z remote: Counting objects: 50% (18/36) 2019-12-17T08:23:40.2714831Z remote: Counting objects: 52% (19/36) 2019-12-17T08:23:40.2715095Z remote: Counting objects: 55% (20/36) 2019-12-17T08:23:40.2715289Z remote: Counting objects: 58% (21/36) 2019-12-17T08:23:40.2715827Z remote: Counting objects: 61% (22/36) 2019-12-17T08:23:40.2715998Z remote: Counting objects: 63% (23/36) 2019-12-17T08:23:40.2716087Z remote: Counting objects: 66% (24/36) 2019-12-17T08:23:40.2716912Z remote: Counting objects: 69% (25/36) 2019-12-17T08:23:40.2717526Z remote: Counting objects: 72% (26/36) 2019-12-17T08:23:40.2717885Z remote: Counting objects: 75% (27/36) 2019-12-17T08:23:40.2718001Z remote: Counting objects: 77% (28/36) 2019-12-17T08:23:40.2718155Z remote: Counting objects: 80% (29/36) 2019-12-17T08:23:40.2718406Z remote: Counting objects: 83% (30/36) 2019-12-17T08:23:40.2718563Z remote: Counting objects: 86% (31/36) 2019-12-17T08:23:40.2718901Z remote: Counting objects: 88% (32/36) 2019-12-17T08:23:40.2719600Z remote: Counting objects: 91% (33/36) 2019-12-17T08:23:40.2719952Z remote: Counting objects: 94% (34/36) 2019-12-17T08:23:40.2720161Z remote: Counting objects: 97% (35/36) 2019-12-17T08:23:40.2720401Z remote: Counting objects: 100% (36/36) 2019-12-17T08:23:40.2720560Z remote: Counting objects: 100% (36/36), done. 
2019-12-17T08:23:40.2720731Z remote: Compressing objects: 2% (1/36) 2019-12-17T08:23:40.2720942Z remote: Compressing objects: 5% (2/36) 2019-12-17T08:23:40.2721116Z remote: Compressing objects: 8% (3/36) 2019-12-17T08:23:40.2721320Z remote: Compressing objects: 11% (4/36) 2019-12-17T08:23:40.2721478Z remote: Compressing objects: 13% (5/36) 2019-12-17T08:23:40.2721649Z remote: Compressing objects: 16% (6/36) 2019-12-17T08:23:40.2779881Z remote: Compressing objects: 19% (7/36) 2019-12-17T08:23:40.2780891Z remote: Compressing objects: 22% (8/36) 2019-12-17T08:23:40.2781217Z remote: Compressing objects: 25% (9/36) 2019-12-17T08:23:40.2781437Z remote: Compressing objects: 27% (10/36) 2019-12-17T08:23:40.2781583Z remote: Compressing objects: 30% (11/36) 2019-12-17T08:23:40.2781777Z remote: Compressing objects: 33% (12/36) 2019-12-17T08:23:40.2782005Z remote: Compressing objects: 36% (13/36) 2019-12-17T08:23:40.2782183Z remote: Compressing objects: 38% (14/36) 2019-12-17T08:23:40.2782319Z remote: Compressing objects: 41% (15/36) 2019-12-17T08:23:40.2782539Z remote: Compressing objects: 44% (16/36) 2019-12-17T08:23:40.2782668Z remote: Compressing objects: 47% (17/36) 2019-12-17T08:23:40.2783053Z remote: Compressing objects: 50% (18/36) 2019-12-17T08:23:40.2783406Z remote: Compressing objects: 52% (19/36) 2019-12-17T08:23:40.2783589Z remote: Compressing objects: 55% (20/36) 2019-12-17T08:23:40.2783882Z remote: Compressing objects: 58% (21/36) 2019-12-17T08:23:40.2783998Z remote: Compressing objects: 61% (22/36) 2019-12-17T08:23:40.2784133Z remote: Compressing objects: 63% (23/36) 2019-12-17T08:23:40.2784362Z remote: Compressing objects: 66% (24/36) 2019-12-17T08:23:40.2793412Z remote: Compressing objects: 69% (25/36) 2019-12-17T08:23:40.2793703Z remote: Compressing objects: 72% (26/36) 2019-12-17T08:23:40.2793814Z remote: Compressing objects: 75% (27/36) 2019-12-17T08:23:40.2794060Z remote: Compressing objects: 77% (28/36) 2019-12-17T08:23:40.2794195Z remote: Compressing 
objects: 80% (29/36) 2019-12-17T08:23:40.2794322Z remote: Compressing objects: 83% (30/36) 2019-12-17T08:23:40.2794453Z remote: Compressing objects: 86% (31/36) 2019-12-17T08:23:40.2794667Z remote: Compressing objects: 88% (32/36) 2019-12-17T08:23:40.2794837Z remote: Compressing objects: 91% (33/36) 2019-12-17T08:23:40.2795045Z remote: Compressing objects: 94% (34/36) 2019-12-17T08:23:40.2795284Z remote: Compressing objects: 97% (35/36) 2019-12-17T08:23:40.2795494Z remote: Compressing objects: 100% (36/36) 2019-12-17T08:23:40.2795860Z remote: Compressing objects: 100% (36/36), done. 2019-12-17T08:23:41.5636274Z Receiving objects: 0% (1/778604) 2019-12-17T08:23:43.1211697Z Receiving objects: 0% (38/778604), 20.00 KiB | 8.00 KiB/s 2019-12-17T08:23:49.2419812Z Receiving objects: 0% (53/778604), 28.00 KiB | 7.00 KiB/s 2019-12-17T08:23:55.7195787Z Receiving objects: 0% (65/778604), 36.00 KiB | 3.00 KiB/s 2019-12-17T08:23:55.9927602Z Receiving objects: 0% (152/778604), 84.00 KiB | 5.00 KiB/s 2019-12-17T08:23:57.4292004Z Receiving objects: 0% (199/778604), 84.00 KiB | 5.00 KiB/s 2019-12-17T08:23:59.0310311Z Receiving objects: 0% (263/778604), 148.00 KiB | 8.00 KiB/s 2019-12-17T08:24:05.6808838Z Receiving objects: 0% (317/778604), 180.00 KiB | 9.00 KiB/s 2019-12-17T08:24:05.9258262Z Receiving objects: 0% (347/778604), 196.00 KiB | 7.00 KiB/s 2019-12-17T08:24:07.8177821Z Receiving objects: 0% (363/778604), 196.00 KiB | 7.00 KiB/s 2019-12-17T08:24:08.0739166Z Receiving objects: 0% (409/778604), 228.00 KiB | 10.00 KiB/s 2019-12-17T08:24:10.4459540Z Receiving objects: 0% (426/778604), 228.00 KiB | 10.00 KiB/s 2019-12-17T08:24:11.0221152Z Receiving objects: 0% (505/778604), 276.00 KiB | 11.00 KiB/s 2019-12-17T08:24:11.9673927Z Receiving objects: 0% (553/778604), 300.00 KiB | 11.00 KiB/s 2019-12-17T08:24:19.0148937Z Receiving objects: 0% (581/778604), 316.00 KiB | 10.00 KiB/s 2019-12-17T08:24:21.2353599Z Receiving objects: 0% (612/778604), 332.00 KiB | 7.00 KiB/s 
2019-12-17T08:24:21.4127654Z Receiving objects: 0% (643/778604), 348.00 KiB | 10.00 KiB/s 2019-12-17T08:24:22.0971637Z Receiving objects: 0% (673/778604), 364.00 KiB | 10.00 KiB/s 2019-12-17T08:24:24.6868402Z Receiving objects: 0% (686/778604), 372.00 KiB | 10.00 KiB/s 2019-12-17T08:24:25.0468108Z Receiving objects: 0% (711/778604), 380.00 KiB | 7.00 KiB/s 2019-12-17T08:24:26.6093023Z Receiving objects: 0% (752/778604), 380.00 KiB | 7.00 KiB/s 2019-12-17T08:24:29.8735977Z Receiving objects: 0% (765/778604), 412.00 KiB | 8.00 KiB/s 2019-12-17T08:24:30.1876849Z Receiving objects: 0% (779/778604), 420.00 KiB | 7.00 KiB/s 2019-12-17T08:24:31.0328125Z Receiving objects: 0% (878/778604), 420.00 KiB | 7.00 KiB/s 2019-12-17T08:24:32.0533595Z Receiving objects: 0% (970/778604), 500.00 KiB | 10.00 KiB/s 2019-12-17T08:24:33.3650717Z Receiving objects: 0% (1133/778604), 628.00 KiB | 22.00 KiB/s 2019-12-17T08:24:34.0756946Z Receiving objects: 0% (1216/778604), 676.00 KiB | 26.00 KiB/s 2019-12-17T08:24:36.5852563Z Receiving objects: 0% (1246/778604), 692.00 KiB | 26.00 KiB/s 2019-12-17T08:24:37.0530086Z Receiving objects: 0% (1272/778604), 708.00 KiB | 29.00 KiB/s 2019-12-17T08:24:41.2974359Z Receiving objects: 0% (1290/778604), 708.00 KiB | 29.00 KiB/s 2019-12-17T08:24:43.0892725Z Receiving objects: 0% (1333/778604), 740.00 KiB | 23.00 KiB/s 2019-12-17T08:24:50.7378262Z Receiving objects: 0% (1395/778604), 772.00 KiB | 16.00 KiB/s 2019-12-17T08:31:27.4207316Z Receiving objects: 0% (1470/778604), 804.00 KiB | 8.00 KiB/s 2019-12-17T08:31:28.0933462Z Receiving objects: 0% (1491/778604), 812.00 KiB | 0 bytes/s 2019-12-17T08:31:28.4118531Z Receiving objects: 0% (1528/778604), 828.00 KiB | 0 bytes/s 2019-12-17T08:31:30.7460515Z Receiving objects: 0% (1569/778604), 828.00 KiB | 0 bytes/s 2019-12-17T08:31:31.6458381Z Receiving objects: 0% (1623/778604), 876.00 KiB | 0 bytes/s 2019-12-17T08:31:32.1419609Z Receiving objects: 0% (1645/778604), 884.00 KiB | 0 bytes/s 
2019-12-17T08:31:32.1420718Z error: RPC failed; curl 56 OpenSSL SSL_read: No error
2019-12-17T08:31:32.1423571Z ##[error]fatal: the remote end hung up unexpectedly
2019-12-17T08:31:32.1446744Z ##[error]fatal: early EOF
2019-12-17T08:31:32.1464680Z ##[error]fatal: index-pack failed
2019-12-17T08:31:32.1650020Z Removed matchers: 'checkout-git'
2019-12-17T08:31:32.1651061Z ##[error]Git fetch failed with exit code: 128
2019-12-17T08:31:32.1876486Z ##[error]Exit code 1 returned from process: file name 'c:\runners\2.162.0\bin\Runner.PluginHost.exe', arguments 'action "GitHub.Runner.Plugins.Repository.v1_0.CheckoutTask, Runner.Plugins"'.
2019-12-17T08:31:32.1934364Z Cleaning up orphan processes

----------
components: Tests
messages: 358534
nosy: steve.dower, vstinner
priority: normal
severity: normal
status: open
title: Azure Pipelines: git clone failed with: OpenSSL SSL_read: Connection was reset
versions: Python 3.9

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Tue Dec 17 07:46:43 2019
From: report at bugs.python.org (Jasper Spaans)
Date: Tue, 17 Dec 2019 12:46:43 +0000
Subject: [New-bugs-announce] [issue39073] email regression in 3.8: folding
Message-ID: <1576586803.42.0.942657577592.issue39073@roundup.psfhosted.org>

New submission from Jasper Spaans :

big-bob:t spaans$ cat fak.py
import sys
from email.message import EmailMessage
from email.policy import SMTP
from email.headerregistry import Address

msg = EmailMessage(policy=SMTP)
a = Address(display_name='Extra Extra Read All About It This Line Does Not Fit In 80 Characters So Should Be Wrapped \r\nX:', addr_spec='evil at local')
msg['To'] = a
print(sys.version)
print(msg.as_string())
big-bob:t spaans$ python3.5 fak.py
3.5.2 (default, Jul 16 2019, 13:40:43) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
To: "Extra Extra Read All About It This Line Does Not Fit In 80 Characters So Should Be Wrapped X:"
big-bob:t spaans$
python3.8 fak.py
3.8.0 (default, Dec 17 2019, 13:32:18) [Clang 11.0.0 (clang-1100.0.33.16)]
To: Extra Extra Read All About It This Line Does Not Fit In 80 Characters So Should Be Wrapped X:

----------
components: email
messages: 358544
nosy: barry, jap, r.david.murray
priority: normal
severity: normal
status: open
title: email regression in 3.8: folding
type: security
versions: Python 3.8

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Tue Dec 17 10:51:34 2019
From: report at bugs.python.org (Adam)
Date: Tue, 17 Dec 2019 15:51:34 +0000
Subject: [New-bugs-announce] [issue39074] Threading memory leak in _shutdown_locks for non-daemon threads
Message-ID: <1576597894.7.0.119079561961.issue39074@roundup.psfhosted.org>

New submission from Adam :

When running 3.7, we noticed a memory leak in threading._shutdown_locks when non-daemon threads are started but "join()" or "is_alive()" is never called. Here's a test to illustrate the growth:

=========
import gc
import threading
import time
import tracemalloc

def test_leaking_locks():
    tracemalloc.start(10)
    snap1 = tracemalloc.take_snapshot()

    def print_things():
        print('.', end='')

    for x in range(500):
        t = threading.Thread(target=print_things)
        t.start()

    time.sleep(5)
    print('')
    gc.collect()
    snap2 = tracemalloc.take_snapshot()
    filters = []
    for stat in snap2.filter_traces(filters).compare_to(snap1.filter_traces(filters), 'traceback')[:10]:
        print("New Bytes: {}\tTotal Bytes {}\tNew blocks: {}\tTotal blocks: {}: ".format(stat.size_diff, stat.size, stat.count_diff, stat.count))
        for line in stat.traceback.format():
            print(line)
=========

=========
Output in v3.6.8:

New Bytes: 840 Total Bytes 840 New blocks: 1 Total blocks: 1:
  File "/usr/local/lib/python3.6/threading.py", line 884
    self._bootstrap_inner()
New Bytes: 608 Total Bytes 608 New blocks: 4 Total blocks: 4:
  File "/usr/local/lib/python3.6/tracemalloc.py", line 387
    self.traces = _Traces(traces)
  File
"/usr/local/lib/python3.6/tracemalloc.py", line 524 return Snapshot(traces, traceback_limit) File "/gems/tests/integration/endpoint_connection_test.py", line 856 snap1 = tracemalloc.take_snapshot() File "/usr/local/lib/python3.6/site-packages/_pytest/python.py", line 198 testfunction(**testargs) File "/usr/local/lib/python3.6/site-packages/pluggy/callers.py", line 187 res = hook_impl.function(*args) File "/usr/local/lib/python3.6/site-packages/pluggy/manager.py", line 87 firstresult=hook.spec.opts.get("firstresult") if hook.spec else False, File "/usr/local/lib/python3.6/site-packages/pluggy/manager.py", line 93 return self._inner_hookexec(hook, methods, kwargs) File "/usr/local/lib/python3.6/site-packages/pluggy/hooks.py", line 286 return self._hookexec(self, self.get_hookimpls(), kwargs) File "/usr/local/lib/python3.6/site-packages/_pytest/python.py", line 1459 self.ihook.pytest_pyfunc_call(pyfuncitem=self) File "/usr/local/lib/python3.6/site-packages/_pytest/runner.py", line 111 item.runtest() ========== Output in v3.7.4: New Bytes: 36000 Total Bytes 36000 New blocks: 1000 Total blocks: 1000: File "/usr/local/lib/python3.7/threading.py", line 890 self._bootstrap_inner() File "/usr/local/lib/python3.7/threading.py", line 914 self._set_tstate_lock() File "/usr/local/lib/python3.7/threading.py", line 904 self._tstate_lock = _set_sentinel() New Bytes: 32768 Total Bytes 32768 New blocks: 1 Total blocks: 1: File "/usr/local/lib/python3.7/threading.py", line 890 self._bootstrap_inner() File "/usr/local/lib/python3.7/threading.py", line 914 self._set_tstate_lock() File "/usr/local/lib/python3.7/threading.py", line 909 _shutdown_locks.add(self._tstate_lock) ================= It looks like this commit didn't take into account the tstate_lock cleanup that happens in the C code, and it's not removing the _tstate_lock of completed threads from the _shutdown_locks once the thread finishes, unless the code manually calls "join()" or "is_alive()" on the thread: 
https://github.com/python/cpython/commit/468e5fec8a2f534f1685d59da3ca4fad425c38dd Let me know if I can provide more clarity on this! ---------- messages: 358551 nosy: krypticus priority: normal severity: normal status: open title: Threading memory leak in _shutdown_locks for non-daemon threads type: resource usage versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 17 11:21:21 2019 From: report at bugs.python.org (Eric Snow) Date: Tue, 17 Dec 2019 16:21:21 +0000 Subject: [New-bugs-announce] [issue39075] types.SimpleNamespace should preserve attribute ordering (?) Message-ID: <1576599681.13.0.0769751822925.issue39075@roundup.psfhosted.org> New submission from Eric Snow : types.SimpleNamespace was added in 3.3 (for use in sys.implementation; see PEP 421), which predates the change to preserving insertion order in dict. At the time we chose to sort the attributes in the repr, both for ease of reading and for a consistent output. The question is, should SimpleNamespace stay as it is (sorted repr) or should it show the order in which attributes were added? On the one hand, alphabetical order can be useful since it makes it easier for readers to find attributes, especially when there are many. However, for other cases it is helpful for the repr to show the order in which attributes were added. FWIW, I favor changing the ordering in the repr to insertion-order. Either is relatively trivial to get after the fact (whether "sorted(vars(ns))" or "list(vars(ns))"), so I don't think any folks that benefit from alphabetical order will be seriously impacted. ---------- components: Interpreter Core messages: 358553 nosy: eric.snow priority: normal severity: normal status: open title: types.SimpleNamespace should preserve attribute ordering (?) 
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Tue Dec 17 11:35:51 2019
From: report at bugs.python.org (Eric Snow)
Date: Tue, 17 Dec 2019 16:35:51 +0000
Subject: [New-bugs-announce] [issue39076] Use types.SimpleNamespace for argparse.Namespace
Message-ID: <1576600551.73.0.829151608014.issue39076@roundup.psfhosted.org>

New submission from Eric Snow :

types.SimpleNamespace does pretty much exactly the same thing as argparse.Namespace. We should have the latter subclass the former. I expect the only reason that wasn't done before is because SimpleNamespace is newer. The only thing argparse.Namespace does differently is that it supports the "in" operator (via __contains__()). So the subclass would look like this:

class Namespace(types.SimpleNamespace):
    """..."""

    def __contains__(self, key):
        return key in self.__dict__

Alternately, we could add __contains__() to SimpleNamespace and then the subclass would effectively have an empty body.

----------
components: Library (Lib)
messages: 358555
nosy: eric.snow
priority: normal
severity: normal
stage: test needed
status: open
title: Use types.SimpleNamespace for argparse.Namespace
versions: Python 3.9

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Tue Dec 17 12:48:02 2019
From: report at bugs.python.org (Michael Amrhein)
Date: Tue, 17 Dec 2019 17:48:02 +0000
Subject: [New-bugs-announce] [issue39077] Numeric formatting inconsistent between int, float and Decimal
Message-ID: <1576604882.94.0.74204520541.issue39077@roundup.psfhosted.org>

New submission from Michael Amrhein :

The __format__ methods of int, float and Decimal (C and Python implementation) do not interpret the Format Specification Mini-Language in the same way:

>>> import decimal as cdec
... cdec.__file__
...
'/usr/lib64/python3.6/decimal.py'
>>> import _pydecimal as pydec
...
pydec.__file__ ... '/usr/lib64/python3.6/_pydecimal.py' >>> i = -1234567890 ... f = float(i) ... d = cdec.Decimal(i) ... p = pydec.Decimal(i) ... >>> # Case 1: no fill, no align, no zeropad ... fmt = "28," >>> format(i, fmt) ' -1,234,567,890' >>> format(f, fmt) ' -1,234,567,890.0' >>> format(d, fmt) ' -1,234,567,890' >>> format(p, fmt) ' -1,234,567,890' >>> # Case 2: no fill, no align, but zeropad ... fmt = "028," >>> format(i, fmt) '-000,000,000,001,234,567,890' >>> format(f, fmt) '-0,000,000,001,234,567,890.0' >>> format(d, fmt) '-000,000,000,001,234,567,890' >>> format(p, fmt) '-000,000,000,001,234,567,890' >>> # Case 3: no fill, but align '>' + zeropad ... fmt = ">028," >>> format(i, fmt) '00000000000000-1,234,567,890' >>> format(f, fmt) '000000000000-1,234,567,890.0' >>> format(d, fmt) ValueError: invalid format string >>> format(p, fmt) ValueError: Alignment conflicts with '0' in format specifier: >028, >>> # Case 4: no fill, but align '=' + zeropad ... fmt = "=028," >>> format(i, fmt) '-000,000,000,001,234,567,890' >>> format(f, fmt) '-0,000,000,001,234,567,890.0' >>> format(d, fmt) ValueError: invalid format string >>> format(p, fmt) ValueError: Alignment conflicts with '0' in format specifier: =028, >>> # Case 5: fill '0', align '=' + zeropad ... fmt = "0=028," >>> format(i, fmt) '-000,000,000,001,234,567,890' >>> format(f, fmt) '-0,000,000,001,234,567,890.0' >>> format(d, fmt) ValueError: invalid format string >>> format(p, fmt) ValueError: Fill character conflicts with '0' in format specifier: 0=028, >>> # Case 6: fill ' ', align '=' + zeropad ... fmt = " =028," >>> format(i, fmt) '- 1,234,567,890' >>> format(f, fmt) '- 1,234,567,890.0' >>> format(d, fmt) ValueError: invalid format string >>> format(p, fmt) ValueError: Fill character conflicts with '0' in format specifier: =028, >>> # Case 7: fill ' ', align '>' + zeropad ... 
fmt = " >028," >>> format(i, fmt) ' -1,234,567,890' >>> format(f, fmt) ' -1,234,567,890.0' >>> format(d, fmt) ValueError: invalid format string >>> format(p, fmt) ValueError: Fill character conflicts with '0' in format specifier: >028, >>> # Case 8: fill ' ', no align, but zeropad ... fmt = " 028," >>> format(i, fmt) '-000,000,000,001,234,567,890' >>> format(f, fmt) '-0,000,000,001,234,567,890.0' >>> format(d, fmt) '-000,000,000,001,234,567,890' >>> format(p, fmt) '-000,000,000,001,234,567,890' >>> # Case 9: fill '_', no align, but zeropad ... fmt = "_028," >>> format(i, fmt) ValueError: Invalid format specifier >>> format(f, fmt) ValueError: Invalid format specifier >>> format(d, fmt) ValueError: invalid format string >>> format(p, fmt) ValueError: Invalid format specifier: _028, >>> # Case 10: fill '_', no align, no zeropad ... fmt = "_28," >>> format(i, fmt) ValueError: Invalid format specifier >>> format(f, fmt) ValueError: Invalid format specifier >>> format(d, fmt) ValueError: Invalid format string >>> format(p, fmt) ValueError: Invalid format specifier: _28, >>> # Case 11: fill '0', align '>', no zeropad ... fmt = "0>28," >>> format(i, fmt) '00000000000000-1,234,567,890' >>> format(f, fmt) '000000000000-1,234,567,890.0' >>> format(d, fmt) '00000000000000-1,234,567,890' >>> format(p, fmt) '00000000000000-1,234,567,890' >>> # Case 12: fill '0', align '<', no zeropad ... fmt = "0<28," >>> format(i, fmt) '-1,234,567,89000000000000000' >>> format(f, fmt) '-1,234,567,890.0000000000000' >>> format(d, fmt) '-1,234,567,89000000000000000' >>> format(p, fmt) '-1,234,567,89000000000000000' >>> # Case 13: fixed-point notation w/o precision ... fmt = "f" >>> format(f, fmt) '-1234567890.000000' >>> format(d, fmt) '-1234567890' >>> format(p, fmt) '-1234567890' Case 1 & 2: For a format string not giving a type ("None") the spec says: "Similar to 'g', except that fixed-point notation, when used, has at least one digit past the decimal point." 
float does follow this rule, Decimal does not. While this may be regarded as reasonable, it should be noted in the doc.

Cases 3 to 7: Both implementations of Decimal do not allow combining align and zeropad, while int and float do. When a fill character is also given, int and float ignore zeropad; when it is not, they use '0' as fill instead of the default ' '. (For an exception see the following case.) The spec says: "When no explicit alignment is given, preceding the width field by a zero ('0') character enables sign-aware zero-padding for numeric types. This is equivalent to a fill character of '0' with an alignment type of '='." That does not explicitly give a rule for align + zeropad together, but IMHO it suggests using zeropad *only* if no align is given, and that it should *not* overwrite the default fill ' '.

Cases 8 - 10: The syntax given by the spec IMHO says: no fill without align! There is no mention of an exception for a blank as fill.

Case 11 & 12: While all implementations "agree" here, combining '0' as fill with an align other than '=' gives really odd results. See also https://bugs.python.org/issue17247.

Case 13: For fixed-point notation the spec says: "The default precision is 6." float does follow this rule, Decimal does not. While this may be regarded as reasonable, it should be noted in the doc.
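To make Case 3 concrete, the discrepancy can be reproduced with a short snippet (a minimal sketch; the exact ValueError text differs between the C and Python Decimal implementations and across versions, so only the int/float results are shown):

```python
from decimal import Decimal

i = -1234567890

# int and float both accept an explicit '>' align combined with the '0' flag
print(format(i, ">028,"))         # 00000000000000-1,234,567,890
print(format(float(i), ">028,"))  # 000000000000-1,234,567,890.0

# Decimal rejects the same format string with a ValueError
try:
    print(format(Decimal(i), ">028,"))
except ValueError as exc:
    print("Decimal:", exc)
```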
----------
messages: 358561
nosy: mamrhein
priority: normal
severity: normal
status: open
title: Numeric formatting inconsistent between int, float and Decimal
type: behavior
versions: Python 3.6, Python 3.7, Python 3.8

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Tue Dec 17 12:51:54 2019
From: report at bugs.python.org (=?utf-8?q?Veith_R=C3=B6thlingsh=C3=B6fer?=)
Date: Tue, 17 Dec 2019 17:51:54 +0000
Subject: [New-bugs-announce] [issue39078] __function.__defaults__ breaks for __init__ of dataclasses with default factory
Message-ID: <1576605114.21.0.115662739436.issue39078@roundup.psfhosted.org>

New submission from Veith Röthlingshöfer :

When creating a dataclass with a default that is a field with a default factory, the factory is not correctly resolved in cls.__init__.__defaults__. It evaluates to the __repr__ of dataclasses._HAS_DEFAULT_FACTORY_CLASS, which is "". The expected behavior would be to have a value of whatever the default factory produces as a default. This causes issues for example when using inspect.BoundArguments.apply_defaults() on the __init__ of such a dataclass.

Code to reproduce:

```
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass()
class Test:
    a: int
    b: Dict[Any, Any] = field(default_factory=dict)

print(Test.__init__.__defaults__)  #
```

The affected packages are on a high-level dataclasses, on a lower level the issue is in the builtin __function.__defaults__.
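The sentinel can be observed directly, and dataclasses.fields() offers one way to recover the real defaults (a sketch; the placeholder's repr is an implementation detail, so it is printed rather than asserted):

```python
from dataclasses import MISSING, dataclass, field, fields
from typing import Any, Dict

@dataclass
class Test:
    a: int
    b: Dict[Any, Any] = field(default_factory=dict)

# __defaults__ contains an internal placeholder object, not the factory result
print(Test.__init__.__defaults__)

# fields() still exposes the factory itself, so the real defaults can be rebuilt
defaults = {f.name: f.default_factory()
            for f in fields(Test)
            if f.default_factory is not MISSING}
print(defaults)  # {'b': {}}
```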
---------- components: C API messages: 358562 nosy: RunOrVeith priority: normal severity: normal status: open title: __function.__defaults__ breaks for __init__ of dataclasses with default factory type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 17 13:52:27 2019 From: report at bugs.python.org (Alfred Morgan) Date: Tue, 17 Dec 2019 18:52:27 +0000 Subject: [New-bugs-announce] [issue39079] help() modifies the string module Message-ID: <1576608747.4.0.324829457478.issue39079@roundup.psfhosted.org> New submission from Alfred Morgan : import string a = string.letters help(int) b = string.letters a == b # False ---------- components: Library (Lib) messages: 358564 nosy: Zectbumo priority: normal severity: normal status: open title: help() modifies the string module versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 17 16:32:08 2019 From: report at bugs.python.org (Lysandros Nikolaou) Date: Tue, 17 Dec 2019 21:32:08 +0000 Subject: [New-bugs-announce] [issue39080] Inconsistency with Starred Expression line/col info Message-ID: <1576618328.23.0.851602684205.issue39080@roundup.psfhosted.org> New submission from Lysandros Nikolaou : When a starred expression like *[0, 1] is parsed, the following AST gets generated: Module( body=[ Expr( value=Starred( value=List( elts=[ Constant( value=0, kind=None, lineno=1, col_offset=2, end_lineno=1, end_col_offset=3, ), Constant( value=1, kind=None, lineno=1, col_offset=5, end_lineno=1, end_col_offset=6, ), ], ctx=Load(), lineno=1, col_offset=1, end_lineno=1, end_col_offset=7, ), ctx=Load(), lineno=1, col_offset=0, end_lineno=1, end_col_offset=7, ), lineno=1, col_offset=0, end_lineno=1, end_col_offset=7, ) ], type_ignores=[], ) But, when a starred expression is an argument to a function call then the line/col info are wrong (end_col_offset 
is always equal to col_offset + 1): Module( body=[ Expr( value=Call( func=Name( id="f", ctx=Load(), lineno=1, col_offset=0, end_lineno=1, end_col_offset=1 ), args=[ Starred( value=List( elts=[ Constant( value=0, kind=None, lineno=1, col_offset=4, end_lineno=1, end_col_offset=5, ), Constant( value=1, kind=None, lineno=1, col_offset=7, end_lineno=1, end_col_offset=8, ), ], ctx=Load(), lineno=1, col_offset=3, end_lineno=1, end_col_offset=9, ), ctx=Load(), lineno=1, col_offset=2, end_lineno=1, end_col_offset=9, ) ], keywords=[], lineno=1, col_offset=0, end_lineno=1, end_col_offset=10, ), lineno=1, col_offset=0, end_lineno=1, end_col_offset=10, ) ], type_ignores=[], ) ---------- components: Interpreter Core messages: 358584 nosy: lys.nikolaou priority: normal severity: normal status: open title: Inconsistency with Starred Expression line/col info type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 17 18:37:20 2019 From: report at bugs.python.org (Andrew Ni) Date: Tue, 17 Dec 2019 23:37:20 +0000 Subject: [New-bugs-announce] [issue39081] pathlib '/' operator does not resolve Enums with str mixin as expected Message-ID: <1576625840.56.0.0179543914349.issue39081@roundup.psfhosted.org> New submission from Andrew Ni : import os import pathlib import enum class MyEnum(str, enum.Enum): RED = 'red' # this resolves to: '/Users/niandrew/MyEnum.RED' # EXPECTED: '/Users/niandrew/red' str(pathlib.Path.home() / MyEnum.RED) # this resolves to: '/Users/niandrew/red' os.path.join(pathlib.Path.home(), MyEnum.RED) ---------- components: Library (Lib) messages: 358598 nosy: andrewni priority: normal severity: normal status: open title: pathlib '/' operator does not resolve Enums with str mixin as expected type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at 
bugs.python.org  Tue Dec 17 19:36:45 2019
From: report at bugs.python.org (Aniket Panse)
Date: Wed, 18 Dec 2019 00:36:45 +0000
Subject: [New-bugs-announce] [issue39082] AsyncMock is unable to correctly patch static or class methods
Message-ID: <1576629405.73.0.979234563097.issue39082@roundup.psfhosted.org>

New submission from Aniket Panse :

Currently, patch is unable to correctly patch coroutine functions decorated with `@staticmethod` or `@classmethod`. Example:

```
[*] aniketpanse [~/git/cpython] -> ./python ?[master]
Python 3.9.0a1+ (heads/master:50d4f12958, Dec 17 2019, 16:31:30) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class Helper:
...     @classmethod
...     async def async_class_method(cls):
...         pass
...
>>> from unittest.mock import patch
>>> patch("Helper.async_class_method")
```

This should ideally return an `AsyncMock()`.

----------
components: Tests, asyncio
messages: 358601
nosy: asvetlov, czardoz, yselivanov
priority: normal
severity: normal
status: open
title: AsyncMock is unable to correctly patch static or class methods
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Tue Dec 17 20:15:17 2019
From: report at bugs.python.org (Martin Meo)
Date: Wed, 18 Dec 2019 01:15:17 +0000
Subject: [New-bugs-announce] [issue39083] Dictionary get(key, default-expression) not short circuit behavior
Message-ID: <1576631717.88.0.0905545804197.issue39083@roundup.psfhosted.org>

New submission from Martin Meo :

"""
Unexpected behavior report

Dictionary get(key, default-expression) not short circuit behavior

MacOS 10.14.6
sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0)

BACKGROUND
A python dictionary is a data structure that associates a set of keys with a set of values. Accessing a non-existent key produces a KeyError. Dictionaries have a get() method.
get(key[, default-expression])
Return the value for key if key is in the dictionary, else default-expression. If default-expression is not given, it defaults to None, so that this method never raises a KeyError.

EXPECTED BEHAVIOR
get() would only evaluate default-expression if it has to, when key is not found. It would have short-circuit behavior like boolean operators.

ACTUAL BEHAVIOR
The default-expression DOES get evaluated even when the key IS found in the dictionary. And if default-expression is a function call, the function DOES get called.
"""

denominations = {0:'zero', 1:'one', 2:'two', 3:'three', 4:'four'}

def foo(n):
    print('FOO CALLED. n =', n)
    return str(n*10)

words = []
words.append(denominations[1])
words.append(denominations[4])
words.append(denominations.get(1))
words.append(denominations.get(4))
words.append(denominations.get(1, 'ERROR-A'))
words.append(denominations.get(4, 'ERROR-B'))
words.append(denominations.get(22, 'ERROR-1'))
words.append(denominations.get(88, 'ERROR-2'))
words.append(denominations.get(1, foo(1)))
words.append(denominations.get(4, foo(4)))
print(words)

def isItZero(n):
    print('ISITZERO CALLED. n=', n)
    return False

a = (True or isItZero(9))  # (True or x) is always True so x is not evaluated

----------
messages: 358602
nosy: martinmeo
priority: normal
severity: normal
status: open
title: Dictionary get(key, default-expression) not short circuit behavior
type: behavior
versions: Python 3.7

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Tue Dec 17 20:45:28 2019
From: report at bugs.python.org (Manish)
Date: Wed, 18 Dec 2019 01:45:28 +0000
Subject: [New-bugs-announce] [issue39084] string.letters is flipped after setlocale is called
Message-ID: <1576633528.14.0.930532068678.issue39084@roundup.psfhosted.org>

New submission from Manish :

Steps to reproduce:

>>> import string
>>> string.letters
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
>>> help(string)
......
>>> string.letters
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'

The help(string) can also be replaced with locale.setlocale(locale.LC_CTYPE, "en_US.UTF-8"). What's happening here is that any call to setlocale() (which help() calls internally) recomputes string.letters. The recomputation flips the order in the current implementation.
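For comparison, Python 3 dropped the locale-dependent constant: string.ascii_letters is a fixed attribute that setlocale() never recomputes (a small Python 3 sketch; the "C" locale is used because it is always available):

```python
import locale
import string

before = string.ascii_letters
locale.setlocale(locale.LC_CTYPE, "C")  # any setlocale() call
print(string.ascii_letters == before)   # True: the constant is untouched
print(string.ascii_letters[:6])         # abcdef
```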
---------- messages: 358604 nosy: Manishearth priority: normal severity: normal status: open title: string.letters is flipped after setlocale is called versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 17 22:47:41 2019 From: report at bugs.python.org (Kyle Stanley) Date: Wed, 18 Dec 2019 03:47:41 +0000 Subject: [New-bugs-announce] [issue39085] Improve docs for await expression Message-ID: <1576640861.79.0.333385908669.issue39085@roundup.psfhosted.org> New submission from Kyle Stanley : For context, I decided to open this issue after receiving a substantial volume of very similar questions and misconceptions from users of asyncio and trio about what `await` does, mostly within a dedicated "async" topical help chat (in the "Python Discord" community). For the most part, the brief explanation provided in the language reference docs (https://docs.python.org/3/reference/expressions.html#await-expression) did not help to clear up their understanding. Also, speaking from personal experience, I did not have a clear understanding of what `await` actually did until I gained some experience working with asyncio. When I read the language reference definition for the await expression for the first time, it did not make much sense to me either. As a result, I think the documentation for the `await` expression could be made significantly more clear. To users that are already familiar with asynchronous programming it likely makes more sense, but I don't think it's as helpful as it could be for those who are trying to fundamentally understand how `await` works (without having prior experience): "Suspend the execution of coroutine on an awaitable object. Can only be used inside a coroutine function." 
(https://docs.python.org/3/reference/expressions.html#await-expression)

(Also, note that there's a typo in the current version, "of coroutine" should probably be "of a coroutine")

While this explanation is technically accurate, it also looks to be the _shortest_ one out of all of the defined expressions on the page. To me, this doesn't seem right considering that the await expression is not the easiest one to learn or understand. The vast majority of the questions and misunderstandings on `await` that I've seen typically fall under some variation of one of the following:

1) What exactly is being suspended?
2) When is it resumed/unsuspended?
3) How is it useful?

From what I can tell, (1) is unclear to them partly because the awaitable object that is after the `await` can be a coroutine object. It's not at all uncommon to see "await some_coro()". I think this would be much clearer if it were instead something along the lines of one of the following (changes indicated with *):

1) "Suspend the execution of *the current coroutine function* on an awaitable object. Can only be used inside a coroutine function."

Where "the current coroutine function" is the coroutine function that contains the await expression. I think this would help to clear up the first question, "What exactly is being suspended?".

2) "Suspend the execution of *the current coroutine function* on an awaitable object. *The coroutine function is resumed when the awaitable object is completed and returns its result*. Can only be used inside a coroutine function."

This would likely help to clear up "When is it resumed/unsuspended?". Optimally, this definition could also include some form of example code like several of the other expressions have.
It's not particularly easy to use a demonstrable example without using an async library (such as asyncio), but using a specific async library would not make sense to have in this location of the docs because the language reference is supposed to be as implementation agnostic as possible. However, I think a very brief visual example with some explanation could still be useful for explaining the basics of how await works:

3)
```
async def coro():
    # before await
    await some_awaitable
    # after await

When the coroutine function `coro()` is executed, it will behave roughly the same as any subroutine function in the "before await" section. However, upon reaching `await some_awaitable`, the execution of `coro()` will be suspended on `some_awaitable`, preventing the execution of anything in the "after await" section until `some_awaitable` is completed. This process is repeated with successive await expressions. Also, multiple coroutines can be suspended at the same time. Suspension can be used to indicate that other coroutines can be executed in the meantime. This can be used to write asynchronous and concurrent programs without the usage of callbacks.
```

Including the brief example and explanation would likely help to further clear up all three of the questions. The present version has a high degree of technical accuracy, but I don't think it's as helpful as it could be for furthering the understanding of users or providing an introduction to the await expression. I'm sure that there will still be some questions regarding `await` even if any of these changes are made, but it would at least provide a good place to link to for an informative explanation of `await` that's entirely agnostic from any specific implementation. I'm entirely open to any alternative suggestions, or making a change that's some combination or variation of the above three ideas.
Alternatively, if there are determined to be no suitable changes that would be both technically accurate and more helpful to users, I could just apply a fix to the typo. If any of these ideas are approved, I'll likely open a PR.

----------
assignee: aeros
components: Documentation, asyncio
messages: 358610
nosy: aeros, asvetlov, njs, yselivanov
priority: normal
severity: normal
status: open
title: Improve docs for await expression
type: enhancement
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker

_______________________________________

From report at bugs.python.org  Wed Dec 18 06:57:14 2019
From: report at bugs.python.org (Mradul Tiwari)
Date: Wed, 18 Dec 2019 11:57:14 +0000
Subject: [New-bugs-announce] [issue39086] Division "/" error on Long Integers
Message-ID: <1576670234.02.0.553582573696.issue39086@roundup.psfhosted.org>

New submission from Mradul Tiwari :

I'm a competitive programmer and got a Wrong Answer verdict because of this division issue, which I figured out after the contest. Please see the attached screenshot. The "/" operator is supposed to give float division, but for long integers it's giving an integer answer. I've tested it on several values, but the results didn't match the expected answers. I've also googled a lot about this but couldn't find an explanation. Please go to "https://codeforces.com/contest/1266/submission/67106918" and see how the problem arises on several values, in the detail section, in TestCase 3, which has very large integers.
In my code at that link, the error arises inside the function get(x) at line "c=(x-i)/14" ---------- assignee: docs at python components: Documentation files: Tested values.png messages: 358622 nosy: Mradul, docs at python priority: normal severity: normal status: open title: Division "/" error on Long Integers type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48787/Tested values.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 07:10:15 2019 From: report at bugs.python.org (Inada Naoki) Date: Wed, 18 Dec 2019 12:10:15 +0000 Subject: [New-bugs-announce] [issue39087] No efficient API to get UTF-8 string from unicode object. Message-ID: <1576671015.48.0.740860223179.issue39087@roundup.psfhosted.org> New submission from Inada Naoki : Assume you are writing an extension module that reads string. For example, HTML escape or JSON encode. There are two courses: (a) Support three KINDs in the flexible unicode representation. (b) Get UTF-8 data from the unicode. (a) will be the fastest on CPython, but there are few drawbacks: * This is tightly coupled with CPython implementation. It will be slow on PyPy. * CPython may change the internal representation to UTF-8 in the future, like PyPy. * You can not easily reuse algorithms written in C that handle `char*`. So I believe (b) should be the preferred way. But CPython doesn't provide an efficient way to get UTF-8 from the unicode object. * PyUnicode_AsUTF8AndSize(): When the unicode contains non-ASCII character, it will create a UTF-8 cache. The cache will be remained for longer than required. And there is additional malloc + memcpy to create the cache. * PyUnicode_DecodeUTF8(): It creates bytes object even when the unicode object is ASCII-only or there is a UTF-8 cache already. For speed and efficiency, I propose a new API: ``` /* Borrow the UTF-8 C string from the unicode. 
 *
 * Store a pointer to the UTF-8 encoding of the unicode to *utf8* and its size
 * to *size*. The returned object is the owner of *utf8*. You need to
 * Py_DECREF() it after you have finished using *utf8*. The owner may not be
 * the unicode object itself.
 * Returns NULL if an error occurred while decoding the unicode.
 */
PyObject* PyUnicode_BorrowUTF8(PyObject *unicode, const char **utf8, Py_ssize_t *len);
```

When the unicode object is ASCII or has a UTF-8 cache, this API increments the refcount of the unicode and returns it. Otherwise, this API calls `_PyUnicode_AsUTF8String(unicode, NULL)` and returns that. ---------- components: C API messages: 358623 nosy: inada.naoki priority: normal severity: normal status: open title: No efficient API to get UTF-8 string from unicode object. type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 12:37:48 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 18 Dec 2019 17:37:48 +0000 Subject: [New-bugs-announce] [issue39088] test_concurrent_futures crashed with python.core core dump on AMD64 FreeBSD Shared 3.x Message-ID: <1576690668.17.0.971544115619.issue39088@roundup.psfhosted.org> New submission from STINNER Victor : Yesterday and today, I pushed two test_concurrent_futures fixes in bpo-38546: * commit 673c39331f844a80c465efd7cff88ac55c432bfb * commit 9707e8e22d80ca97bf7a9812816701cecde6d226 Maybe they fixed this crash, maybe not. https://buildbot.python.org/all/#/builders/152/builds/70 0:19:28 load avg: 4.96 [242/420/1] test_concurrent_futures failed (env changed) (5 min 4 sec) -- running: test_io (2 min 1 sec), test_largefile (31.3 sec) test_cancel (test.test_concurrent_futures.FutureTests) ... ok test_cancelled (test.test_concurrent_futures.FutureTests) ... ok test_done (test.test_concurrent_futures.FutureTests) ... ok test_done_callback_already_cancelled (test.test_concurrent_futures.FutureTests) ...
ok test_done_callback_already_failed (test.test_concurrent_futures.FutureTests) ... ok test_done_callback_already_successful (test.test_concurrent_futures.FutureTests) ... ok test_done_callback_raises (test.test_concurrent_futures.FutureTests) ... ok test_done_callback_raises_already_succeeded (test.test_concurrent_futures.FutureTests) ... ok test_done_callback_with_cancel (test.test_concurrent_futures.FutureTests) ... ok test_done_callback_with_exception (test.test_concurrent_futures.FutureTests) ... ok test_done_callback_with_result (test.test_concurrent_futures.FutureTests) ... ok test_exception_with_success (test.test_concurrent_futures.FutureTests) ... ok test_exception_with_timeout (test.test_concurrent_futures.FutureTests) ... ok test_multiple_set_exception (test.test_concurrent_futures.FutureTests) ... ok test_multiple_set_result (test.test_concurrent_futures.FutureTests) ... ok test_repr (test.test_concurrent_futures.FutureTests) ... ok test_result_with_cancel (test.test_concurrent_futures.FutureTests) ... ok test_result_with_success (test.test_concurrent_futures.FutureTests) ... ok test_result_with_timeout (test.test_concurrent_futures.FutureTests) ... ok test_running (test.test_concurrent_futures.FutureTests) ... ok test_correct_timeout_exception_msg (test.test_concurrent_futures.ProcessPoolForkAsCompletedTest) ... 0.35s ok test_duplicate_futures (test.test_concurrent_futures.ProcessPoolForkAsCompletedTest) ... 2.40s ok test_free_reference_yielded_future (test.test_concurrent_futures.ProcessPoolForkAsCompletedTest) ... 0.37s ok test_no_timeout (test.test_concurrent_futures.ProcessPoolForkAsCompletedTest) ... 0.24s ok test_zero_timeout (test.test_concurrent_futures.ProcessPoolForkAsCompletedTest) ... 2.40s ok test_crash (test.test_concurrent_futures.ProcessPoolForkExecutorDeadlockTest) ... 1.12s ok test_shutdown_deadlock (test.test_concurrent_futures.ProcessPoolForkExecutorDeadlockTest) ... 
0.57s ok test_initializer (test.test_concurrent_futures.ProcessPoolForkFailingInitializerTest) ... 0.19s ok test_initializer (test.test_concurrent_futures.ProcessPoolForkInitializerTest) ... 0.26s ok test_free_reference (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.50s ok test_killed_child (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.25s ok test_map (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.37s ok test_map_chunksize (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.40s ok test_map_exception (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.58s ok test_map_timeout (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 6.42s ok test_max_workers_negative (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.32s ok test_max_workers_too_large (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... skipped 'Windows-only process limit' test_no_stale_references (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.31s ok test_ressources_gced_in_workers (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.86s ok test_shutdown_race_issue12456 (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.69s ok test_submit (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.51s ok test_submit_keyword (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.16s ok test_traceback (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... 0.18s ok test_context_manager_shutdown (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... 0.06s ok test_del_shutdown (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... 0.07s ok test_hang_issue12364 (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... 
1.22s ok test_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... 2.13s ok test_processes_terminate (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... 0.05s ok test_run_after_shutdown (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... 0.00s ok test_submit_after_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolForkProcessPoolShutdownTest) ... 0.42s ok test_all_completed (test.test_concurrent_futures.ProcessPoolForkWaitTest) ... 0.16s ok test_first_completed (test.test_concurrent_futures.ProcessPoolForkWaitTest) ... 1.70s ok test_first_completed_some_already_completed (test.test_concurrent_futures.ProcessPoolForkWaitTest) ... 1.80s ok test_first_exception (test.test_concurrent_futures.ProcessPoolForkWaitTest) ... 3.18s ok test_first_exception_one_already_failed (test.test_concurrent_futures.ProcessPoolForkWaitTest) ... 2.24s ok test_first_exception_some_already_complete (test.test_concurrent_futures.ProcessPoolForkWaitTest) ... 1.86s ok test_timeout (test.test_concurrent_futures.ProcessPoolForkWaitTest) ... 6.26s ok test_correct_timeout_exception_msg (test.test_concurrent_futures.ProcessPoolForkserverAsCompletedTest) ... 1.98s ok test_duplicate_futures (test.test_concurrent_futures.ProcessPoolForkserverAsCompletedTest) ... 3.44s ok test_free_reference_yielded_future (test.test_concurrent_futures.ProcessPoolForkserverAsCompletedTest) ... 1.76s ok test_no_timeout (test.test_concurrent_futures.ProcessPoolForkserverAsCompletedTest) ... 1.90s ok test_zero_timeout (test.test_concurrent_futures.ProcessPoolForkserverAsCompletedTest) ... 3.66s ok test_crash (test.test_concurrent_futures.ProcessPoolForkserverExecutorDeadlockTest) ... 12.13s ok test_shutdown_deadlock (test.test_concurrent_futures.ProcessPoolForkserverExecutorDeadlockTest) ... 3.03s ok test_initializer (test.test_concurrent_futures.ProcessPoolForkserverFailingInitializerTest) ... 
1.12s ok test_initializer (test.test_concurrent_futures.ProcessPoolForkserverInitializerTest) ... 1.39s ok test_free_reference (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 2.09s ok test_killed_child (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.93s ok test_map (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 2.02s ok test_map_chunksize (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.87s ok test_map_exception (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.90s ok test_map_timeout (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 7.73s ok test_max_workers_negative (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.55s ok test_max_workers_too_large (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... skipped 'Windows-only process limit' test_no_stale_references (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.78s ok test_ressources_gced_in_workers (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 3.01s ok test_shutdown_race_issue12456 (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.70s ok test_submit (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.67s ok test_submit_keyword (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 1.99s ok test_traceback (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolExecutorTest) ... 2.03s ok test_context_manager_shutdown (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... 0.16s ok test_del_shutdown (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... 0.11s ok test_hang_issue12364 (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... 
2.94s ok test_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... 3.31s ok test_processes_terminate (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... 1.53s ok test_run_after_shutdown (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... 0.00s ok test_submit_after_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolForkserverProcessPoolShutdownTest) ... 1.50s ok test_all_completed (test.test_concurrent_futures.ProcessPoolForkserverWaitTest) ... 2.03s ok test_first_completed (test.test_concurrent_futures.ProcessPoolForkserverWaitTest) ... 3.41s ok test_first_completed_some_already_completed (test.test_concurrent_futures.ProcessPoolForkserverWaitTest) ... 3.74s ok test_first_exception (test.test_concurrent_futures.ProcessPoolForkserverWaitTest) ... 5.07s ok test_first_exception_one_already_failed (test.test_concurrent_futures.ProcessPoolForkserverWaitTest) ... 3.91s ok test_first_exception_some_already_complete (test.test_concurrent_futures.ProcessPoolForkserverWaitTest) ... 3.46s ok test_timeout (test.test_concurrent_futures.ProcessPoolForkserverWaitTest) ... 7.47s ok test_correct_timeout_exception_msg (test.test_concurrent_futures.ProcessPoolSpawnAsCompletedTest) ... 2.44s ok test_duplicate_futures (test.test_concurrent_futures.ProcessPoolSpawnAsCompletedTest) ... 4.78s ok test_free_reference_yielded_future (test.test_concurrent_futures.ProcessPoolSpawnAsCompletedTest) ... 2.63s ok test_no_timeout (test.test_concurrent_futures.ProcessPoolSpawnAsCompletedTest) ... 2.74s ok test_zero_timeout (test.test_concurrent_futures.ProcessPoolSpawnAsCompletedTest) ... 4.35s ok test_crash (test.test_concurrent_futures.ProcessPoolSpawnExecutorDeadlockTest) ... 17.00s ok test_shutdown_deadlock (test.test_concurrent_futures.ProcessPoolSpawnExecutorDeadlockTest) ... 3.79s ok test_initializer (test.test_concurrent_futures.ProcessPoolSpawnFailingInitializerTest) ... 
1.78s ok test_initializer (test.test_concurrent_futures.ProcessPoolSpawnInitializerTest) ... 1.57s ok test_free_reference (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.47s ok test_killed_child (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.27s ok test_map (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.24s ok test_map_chunksize (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.27s ok test_map_exception (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 3.02s ok test_map_timeout (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 8.39s ok test_max_workers_negative (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.49s ok test_max_workers_too_large (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... skipped 'Windows-only process limit' test_no_stale_references (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.32s ok test_ressources_gced_in_workers (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 4.27s ok test_shutdown_race_issue12456 (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.21s ok test_submit (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 3.45s ok test_submit_keyword (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.75s ok test_traceback (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolExecutorTest) ... 2.49s ok test_context_manager_shutdown (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 0.07s ok test_del_shutdown (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 0.14s ok test_hang_issue12364 (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 
3.37s ok test_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 3.11s ok test_processes_terminate (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 2.28s ok test_run_after_shutdown (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 0.00s ok test_submit_after_interpreter_shutdown (test.test_concurrent_futures.ProcessPoolSpawnProcessPoolShutdownTest) ... 1.53s ok test_all_completed (test.test_concurrent_futures.ProcessPoolSpawnWaitTest) ... 3.10s ok test_first_completed (test.test_concurrent_futures.ProcessPoolSpawnWaitTest) ... 3.82s ok test_first_completed_some_already_completed (test.test_concurrent_futures.ProcessPoolSpawnWaitTest) ... 3.92s ok test_first_exception (test.test_concurrent_futures.ProcessPoolSpawnWaitTest) ... 4.87s ok test_first_exception_one_already_failed (test.test_concurrent_futures.ProcessPoolSpawnWaitTest) ... 4.14s ok test_first_exception_some_already_complete (test.test_concurrent_futures.ProcessPoolSpawnWaitTest) ... 3.48s ok test_timeout (test.test_concurrent_futures.ProcessPoolSpawnWaitTest) ... 8.20s ok test_correct_timeout_exception_msg (test.test_concurrent_futures.ThreadPoolAsCompletedTest) ... 0.34s ok test_duplicate_futures (test.test_concurrent_futures.ThreadPoolAsCompletedTest) ... 2.38s ok test_free_reference_yielded_future (test.test_concurrent_futures.ThreadPoolAsCompletedTest) ... 0.30s ok test_no_timeout (test.test_concurrent_futures.ThreadPoolAsCompletedTest) ... 0.34s ok test_zero_timeout (test.test_concurrent_futures.ThreadPoolAsCompletedTest) ... 2.29s ok test_default_workers (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.45s ok test_free_reference (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.11s ok test_idle_thread_reuse (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.11s ok test_map (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 
0.11s ok test_map_exception (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.11s ok test_map_submits_without_iteration (test.test_concurrent_futures.ThreadPoolExecutorTest) Tests verifying issue 11777. ... 0.11s ok test_map_timeout (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 6.15s ok test_max_workers_negative (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.11s ok test_no_stale_references (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.11s ok test_saturation (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.32s ok test_shutdown_race_issue12456 (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.15s ok test_submit (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.11s ok test_submit_keyword (test.test_concurrent_futures.ThreadPoolExecutorTest) ... 0.11s ok test_initializer (test.test_concurrent_futures.ThreadPoolFailingInitializerTest) ... 0.11s ok test_initializer (test.test_concurrent_futures.ThreadPoolInitializerTest) ... 0.19s ok test_context_manager_shutdown (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.12s ok test_del_shutdown (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.00s ok test_hang_issue12364 (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 1.25s ok test_interpreter_shutdown (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 1.55s ok test_run_after_shutdown (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.00s ok test_submit_after_interpreter_shutdown (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.16s ok test_thread_names_assigned (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.01s ok test_thread_names_default (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.00s ok test_threads_terminate (test.test_concurrent_futures.ThreadPoolShutdownTest) ... 0.00s ok test_all_completed (test.test_concurrent_futures.ThreadPoolWaitTests) ... 
0.11s ok test_first_completed (test.test_concurrent_futures.ThreadPoolWaitTests) ... 1.71s ok test_first_completed_some_already_completed (test.test_concurrent_futures.ThreadPoolWaitTests) ... 1.62s ok test_first_exception (test.test_concurrent_futures.ThreadPoolWaitTests) ... 3.65s ok test_first_exception_one_already_failed (test.test_concurrent_futures.ThreadPoolWaitTests) ... 2.17s ok test_first_exception_some_already_complete (test.test_concurrent_futures.ThreadPoolWaitTests) ... 1.92s ok test_pending_calls_race (test.test_concurrent_futures.ThreadPoolWaitTests) ... 0.65s ok test_timeout (test.test_concurrent_futures.ThreadPoolWaitTests) ... 6.69s ok ---------------------------------------------------------------------- Ran 168 tests in 303.707s OK (skipped=3) Warning -- files was modified by test_concurrent_futures Before: [] After: ['python.core'] ---------- components: Tests messages: 358634 nosy: vstinner priority: normal severity: normal status: open title: test_concurrent_futures crashed with python.core core dump on AMD64 FreeBSD Shared 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 12:44:49 2019 From: report at bugs.python.org (Tal Einat) Date: Wed, 18 Dec 2019 17:44:49 +0000 Subject: [New-bugs-announce] [issue39089] Update IDLE's credits Message-ID: <1576691089.83.0.499008805758.issue39089@roundup.psfhosted.org> New submission from Tal Einat : The "Credits" document in the "About" dialog could use some updating. It fails to mention Saimadhav Heblikar's important work during GSoC 2014 as well as Terry J. Reedy's tireless work over the past few years which has helped keep IDLE in working order and the codebase in reasonable shape given its age. 
---------- assignee: terry.reedy components: IDLE messages: 358636 nosy: taleinat, terry.reedy priority: normal severity: normal status: open title: Update IDLE's credits versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 13:20:50 2019 From: report at bugs.python.org (Brett Cannon) Date: Wed, 18 Dec 2019 18:20:50 +0000 Subject: [New-bugs-announce] [issue39090] Document various options for getting the absolute path from pathlib.Path objects Message-ID: <1576693250.66.0.453775762211.issue39090@roundup.psfhosted.org> New submission from Brett Cannon : The question on how best to get an absolute path from a pathlib.Path object keeps coming up (see https://bugs.python.org/issue29688, https://discuss.python.org/t/add-absolute-name-to-pathlib-path/2882/, and https://discuss.python.org/t/pathlib-absolute-vs-resolve/2573 as examples). As pointed out across those posts, getting the absolute path is surprisingly subtle and varied depending on your needs. As such we should probably add a section somewhere in the pathlib docs explaining the various ways and why you would choose one over the other. 
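For context, a sketch of the main options being compared in those discussions (the relative path is made up; the printed output depends on the current directory):

```python
from pathlib import Path
import os.path

p = Path("subdir/../file.txt")

# Path.absolute(): prepends the current directory but does NOT normalize
# ".." or resolve symlinks.
print(p.absolute())           # e.g. /cwd/subdir/../file.txt

# Path.resolve(): makes the path absolute AND resolves symlinks and "..";
# with strict=False (the default since 3.6) the target need not exist.
print(p.resolve())            # e.g. /cwd/file.txt

# os.path.abspath(): normalizes ".." purely lexically without resolving
# symlinks, so it can differ from resolve() when symlinks are involved.
print(os.path.abspath(p))     # e.g. /cwd/file.txt
```

Which one is "the" absolute path depends on whether you want symlinks resolved, which is exactly the subtlety the proposed doc section would cover.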
---------- assignee: docs at python components: Documentation messages: 358638 nosy: brett.cannon, docs at python priority: normal severity: normal stage: needs patch status: open title: Document various options for getting the absolute path from pathlib.Path objects type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 13:45:15 2019 From: report at bugs.python.org (Sebastian Krause) Date: Wed, 18 Dec 2019 18:45:15 +0000 Subject: [New-bugs-announce] [issue39091] CPython Segfault in 5 lines of code Message-ID: <1576694715.09.0.414064649134.issue39091@roundup.psfhosted.org> New submission from Sebastian Krause : The following lines trigger a segmentation fault:

class E(BaseException):
    def __new__(cls, *args, **kwargs):
        return cls

def a():
    yield

a().throw(E)

Source with a bit more explanation: https://gist.github.com/coolreader18/6dbe0be2ae2192e90e1a809f1624c694 (I'm not the author of that gist, just reporting it here). ---------- components: Interpreter Core messages: 358639 nosy: skrause priority: normal severity: normal status: open title: CPython Segfault in 5 lines of code type: crash versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 15:50:42 2019 From: report at bugs.python.org (Evan) Date: Wed, 18 Dec 2019 20:50:42 +0000 Subject: [New-bugs-announce] [issue39092] Csv sniffer doesn't attempt to determine and set escape character. Message-ID: <1576702242.94.0.201901935137.issue39092@roundup.psfhosted.org> New submission from Evan : I observed a false positive for the csv sniffer has_header method. (It thought there was a header when there was not.) This is due to the fact that in has_header, it determines the csv dialect by sniffing it, and failed to determine that the file I was using had an escape character of '\'.
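The gap can be sketched with made-up sample data: sniff() fills in the delimiter and quoting parameters but never the escape character, while passing escapechar explicitly to the reader parses the quoted field correctly:

```python
import csv

# Hypothetical sample: an escaped quote (\") appears inside a quoted field.
sample = '"name","comment"\n"Ann","she said \\"hi\\""\n"Bob","fine"\n'

dialect = csv.Sniffer().sniff(sample)
print(dialect.delimiter)    # ','
print(dialect.escapechar)   # None -- sniff() never tries to detect it

# Workaround today: spell out the escape character yourself.
rows = list(csv.reader(sample.splitlines(),
                       quotechar='"', doublequote=False, escapechar='\\'))
print(rows[1])              # ['Ann', 'she said "hi"']
```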
Since it doesn't set the escape character, it then incorrectly broke the first line of the file into columns, since it encountered an escaped quote within a quoted column and treated that as the end of that column. (It correctly determined that the dialect wasn't doublequote, but apparently still needs to have the escape character set to handle an escaped quotechar.) I think one (or both) of these things should be done here to avoid this false positive: 1.) Allow a dialect to be passed to has_header, so that someone could specify the escape character of the dialect if it were known. 2.) Allow the sniff method of the Sniffer class to detect and set the escapechar. ---------- components: Library (Lib) messages: 358645 nosy: evan.whitfield priority: normal severity: normal status: open title: Csv sniffer doesn't attempt to determine and set escape character. type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 21:56:56 2019 From: report at bugs.python.org (obserience) Date: Thu, 19 Dec 2019 02:56:56 +0000 Subject: [New-bugs-announce] [issue39093] tkinter objects garbage collected from non-tkinter thread cause panic and core dump Message-ID: <1576724216.68.0.395572121912.issue39093@roundup.psfhosted.org> New submission from obserience : All tkinter objects have a reference to the Tcl interpreter object "self.tk". The current cleanup code does not remove these when a widget is destroyed. Garbage collection of the Tcl interpreter object occurs only after all GUI objects are garbage collected. This may be triggered from another thread, causing Tcl to panic and dump core. Error message: >Tcl_AsyncDelete: async handler deleted by the wrong thread >Aborted (core dumped) Adding "self.tk = None" at the end of Misc.destroy() (tkinter/__init__.py line:439) should fix this by removing these references when widgets are destroyed.
(Note: destroy is recursive on the widget tree and is called on the root object when a Tkinter GUI exits.) I can't see any problem with removing the interpreter object from a widget when it is destroyed. There doesn't seem to be any way to reassign a widget to a new parent, so this shouldn't affect anything. Doing this makes it safe(r) to use tkinter from a non-main thread, since if the GUI cleans up properly, no "landmines" are left to cause a crash when garbage collection happens in the wrong thread. ---------- components: Tkinter files: error_case.py messages: 358652 nosy: obserience priority: normal severity: normal status: open title: tkinter objects garbage collected from non-tkinter thread cause panic and core dump type: crash Added file: https://bugs.python.org/file48789/error_case.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 22:06:41 2019 From: report at bugs.python.org (Yoni Lavi) Date: Thu, 19 Dec 2019 03:06:41 +0000 Subject: [New-bugs-announce] [issue39094] Add a default to statistics.mean and related functions Message-ID: <1576724801.65.0.624887209777.issue39094@roundup.psfhosted.org> New submission from Yoni Lavi : I would like to put forward an argument in favour of a `default` parameter in the statistics.mean function and the related functions. What motivated me to open this is that my code would more often than not include a check (or try-except) and a default/sentinel value whenever I calculate a mean, and I felt that there should be a better way.
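The check-or-default pattern described can be sketched as follows (`mean_or_default` is a hypothetical helper mirroring the proposed parameter, analogous to the `default` that min()/max() gained):

```python
import statistics

def mean_or_default(data, default=None):
    # Return `default` instead of raising StatisticsError on empty input,
    # the behaviour the proposed `default` parameter would provide.
    try:
        return statistics.mean(data)
    except statistics.StatisticsError:
        return default

print(mean_or_default([1, 2, 3]))   # 2
print(mean_or_default([], 0.0))     # 0.0
```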
Please also note that we have a precedent for this in a similar parameter added to min & max in 3.4 (https://bugs.python.org/issue18111) ---------- components: Library (Lib) messages: 358653 nosy: Yoni Lavi priority: normal severity: normal status: open title: Add a default to statistics.mean and related functions type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 18 23:13:20 2019 From: report at bugs.python.org (NNN) Date: Thu, 19 Dec 2019 04:13:20 +0000 Subject: [New-bugs-announce] [issue39095] Negative Array Index not Yielding "Index Out Of Bounds" Message-ID: <1576728800.29.0.743612590948.issue39095@roundup.psfhosted.org> New submission from NNN : Created a 2D array:

bigFloorLayout = []
bigFloorLayout = [[False for row in range(0, 45)] for col in range(0, 70)]
for y in range(offsetY, storageY + offsetY):
    for x in range(offsetX, storageX + offsetX):
        bigFloorLayout[x][y] = True

Offset is a negative number, and thus the code accesses bigFloorLayout[0][-1], which did not yield "Index out of Bounds" as it should. This is in Blender 2.81, so I have no idea what version of Python it is. ---------- messages: 358655 nosy: NNN priority: normal severity: normal status: open title: Negative Array Index not Yielding "Index Out Of Bounds" type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 19 05:11:20 2019 From: report at bugs.python.org (Michael Amrhein) Date: Thu, 19 Dec 2019 10:11:20 +0000 Subject: [New-bugs-announce] [issue39096] Description of "Format Specification Mini-Language" not accurate for Decimal Message-ID: <1576750280.17.0.548410545235.issue39096@roundup.psfhosted.org> New submission from Michael Amrhein : The description of the "Format Specification Mini-Language" states for float and Decimal regarding presentation type 'f': "The default precision is 6."
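The difference for presentation type 'f' can be checked directly:

```python
from decimal import Decimal

# floats pad to the documented default precision of 6...
print(format(1.5, 'f'))               # '1.500000'

# ...but Decimal keeps exactly the digits implied by its exponent:
print(format(Decimal('1.5'), 'f'))    # '1.5'
print(format(Decimal('1.500'), 'f'))  # '1.500'
```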
Regarding presentation type None it reads: "Similar to 'g', except that fixed-point notation, when used, has at least one digit past the decimal point." While both statements are accurate for float, they don't hold for Decimal. In order to preserve the information about the decimal exponent, in both cases Decimal formatting displays as many fractional digits as dictated by its exponent. ---------- assignee: docs at python components: Documentation messages: 358667 nosy: docs at python, mamrhein priority: normal severity: normal status: open title: Description of "Format Specification Mini-Language" not accurate for Decimal type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 19 06:59:20 2019 From: report at bugs.python.org (songyuc) Date: Thu, 19 Dec 2019 11:59:20 +0000 Subject: [New-bugs-announce] [issue39097] The description of multiprocessing.cpu_count() is not accurate in the documentation Message-ID: <1576756760.59.0.899129841323.issue39097@roundup.psfhosted.org> New submission from songyuc <466309936 at qq.com>: In the documentation of Python 3.7, the description of multiprocessing.cpu_count() is "Return the number of CPUs in the system.", but, in fact, this function returns the number of CPU threads (logical CPUs).
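The distinction is easy to check; on a machine with SMT/Hyper-Threading the value is the logical CPU count, typically twice the physical core count:

```python
import os
import multiprocessing

# Both report the number of logical CPUs (hardware threads), not
# physical cores; multiprocessing.cpu_count() delegates to os.cpu_count().
print(multiprocessing.cpu_count())
print(os.cpu_count())
```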
---------- assignee: docs at python components: Documentation messages: 358675 nosy: docs at python, songyuc priority: normal severity: normal status: open type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 19 08:19:43 2019 From: report at bugs.python.org (Patrick Buxton) Date: Thu, 19 Dec 2019 13:19:43 +0000 Subject: [New-bugs-announce] [issue39098] OSError: handle is closed in ProcessPoolExecutor on shutdown(wait=False) Message-ID: <1576761583.89.0.930155685361.issue39098@roundup.psfhosted.org> New submission from Patrick Buxton : When shutting down a ProcessPoolExecutor with wait=False, an `OSError: handle is closed` is raised. The error can be replicated with a script as simple as:

```
from concurrent.futures import ProcessPoolExecutor

e = ProcessPoolExecutor()
e.submit(id)
e.shutdown(wait=False)
```

---------- components: Library (Lib) messages: 358679 nosy: patbuxton priority: normal severity: normal status: open title: OSError: handle is closed in ProcessPoolExecutor on shutdown(wait=False) type: crash versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 19 13:29:01 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Thu, 19 Dec 2019 18:29:01 +0000 Subject: [New-bugs-announce] [issue39099] scandir.dirfd() method Message-ID: <1576780141.57.0.935904677179.issue39099@roundup.psfhosted.org> New submission from Giampaolo Rodola' : The PR in attachment adds a new dirfd() method to the scandir() object (POSIX only). This can be passed to os.* functions supporting the "dir_fd" parameter, and avoids opening a new fd as in: >>> dirfd = os.open("basename", os.O_RDONLY, dir_fd=topfd) At the moment I am not sure if it's possible to also support Windows.
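The extra-fd pattern that last line alludes to, which the proposed method would make unnecessary, looks roughly like this when walking a directory (a sketch scanning the current directory; POSIX only, since dir_fd is unavailable on Windows):

```python
import os

# Open the top directory once, then open each scanned subdirectory
# relative to it via dir_fd. The second os.open() per subdirectory is
# the cost the proposed scandir.dirfd() method would avoid.
topfd = os.open(".", os.O_RDONLY)
try:
    with os.scandir(".") as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                fd = os.open(entry.name, os.O_RDONLY, dir_fd=topfd)
                try:
                    # fd is usable wherever an fd or dir_fd is accepted
                    print(entry.name, len(os.listdir(fd)))
                finally:
                    os.close(fd)
finally:
    os.close(topfd)
```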
---------- components: Library (Lib) messages: 358686 nosy: giampaolo.rodola priority: normal severity: normal status: open title: scandir.dirfd() method versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 19 14:32:14 2019 From: report at bugs.python.org (Anton Khirnov) Date: Thu, 19 Dec 2019 19:32:14 +0000 Subject: [New-bugs-announce] [issue39100] email.policy.SMTP throws AttributeError on invalid header Message-ID: <1576783934.54.0.946649596481.issue39100@roundup.psfhosted.org> New submission from Anton Khirnov : When parsing a (broken) mail from linux-media at vger.kernel.org (message-id 20190212181908.Horde.pEiGHvV2KHy9EkUy8TA8D1o at webmail.your-server.de, headers attached) with email.policy.SMTP, I get an AttributeError on trying to read the 'to' header:

/usr/lib/python3.7/email/headerregistry.py in (.0)
    345             mb.local_part or '',
    346             mb.domain or '')
--> 347             for mb in addr.all_mailboxes]))
    348         defects = list(address_list.all_defects)
    349     else:

AttributeError: 'Group' object has no attribute 'local_part'

The header in question is:

To: unlisted-recipients:; (no To-header on input)

The problem seems to be that mb is a Group and not an Address, gets token_type of 'invalid-mailbox', but does not have the attributes local_part/domain that are expected in mailboxes. Copying the line "local_part = domain = route = addr_spec = display_name" from InvalidMailbox to Group fixes this, but it is not clear to me whether this is the right solution, so I am not sending a patch.
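A minimal way to exercise the same parse path, without the attached mail (whether the AttributeError actually fires depends on the Python version; the report targets 3.7):

```python
import email
from email import policy

# The offending header from the report, in a stub message.
raw = b"To: unlisted-recipients:; (no To-header on input)\r\n\r\nbody\r\n"
msg = email.message_from_bytes(raw, policy=policy.SMTP)

try:
    print(repr(msg["to"]))
except AttributeError as exc:
    print("reproduced:", exc)
```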
---------- components: email files: mail.eml messages: 358689 nosy: barry, elenril, r.david.murray priority: normal severity: normal status: open title: email.policy.SMTP throws AttributeError on invalid header type: behavior Added file: https://bugs.python.org/file48792/mail.eml _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 19 14:44:59 2019 From: report at bugs.python.org (Fabio Pugliese Ornellas) Date: Thu, 19 Dec 2019 19:44:59 +0000 Subject: [New-bugs-announce] [issue39101] IsolatedAsyncioTestCase freezes when exception is raised Message-ID: <1576784699.03.0.850711294275.issue39101@roundup.psfhosted.org> New submission from Fabio Pugliese Ornellas : IsolatedAsyncioTestCase freezes whenever an exception that inherits from BaseException is raised: import unittest class TestHangsForever(unittest.IsolatedAsyncioTestCase): async def test_hangs_forever(self): raise BaseException("Hangs forever") if __name__ == "__main__": import unittest unittest.main() A kind of similar issue present on 3.7 was fixed on 3.8 here https://github.com/python/cpython/blob/3.8/Lib/asyncio/events.py#L84, where BaseExceptions would not be correctly handled by the event loop, this seems somewhat related. I had a look at IsolatedAsyncioTestCase implementation, did not spot any obvious broken thing there, I could use some light here. Thanks. 
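For contrast with the hang in issue39101: outside of IsolatedAsyncioTestCase, a BaseException raised in a coroutine propagates out of asyncio.run() rather than hanging. A minimal sketch, not the test-case code path from the report:

```python
import asyncio

async def boom():
    # A bare BaseException, as in the report's test case.
    raise BaseException("escapes the loop")

try:
    asyncio.run(boom())
except BaseException as exc:
    caught = "%s: %s" % (type(exc).__name__, exc)
print(caught)  # → BaseException: escapes the loop
```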
---------- components: asyncio messages: 358690 nosy: asvetlov, fornellas, yselivanov priority: normal severity: normal status: open title: IsolatedAsyncioTestCase freezes when exception is raised versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 19 21:48:28 2019 From: report at bugs.python.org (Arseny Boykov) Date: Fri, 20 Dec 2019 02:48:28 +0000 Subject: [New-bugs-announce] [issue39102] Increase Enum performance Message-ID: <1576810108.88.0.911071266859.issue39102@roundup.psfhosted.org> New submission from Arseny Boykov : Enum currently has very poor performance on value and attribute access (especially when it comes to accessing members' name/value attrs) There are two major reasons why attribute access is slow: - All value/name access goes through DynamicClassAttribute (x10 slower than accessing the attr directly) - EnumMeta has __getattr__, which slows down even direct class attribute access (up to x6 slower) However, there is no need to use it, as we could just set value and name on newly created enum members without affecting their class. The main issue with value checks is the slow _missing_ hook handling when it raises an exception, which happens pretty much always if the value is not a valid enum and we are talking about the vanilla Enum class. Also I found the Flag performance issue being fixed already: https://bugs.python.org/issue38045 It's also related, because new Flag creation involves many member.name lookups My proposal: - I think we should completely get rid of __getattr__ on Enum (~6x speed boost) - Rework DynamicClassAttribute so it could work without __getattr__ or perhaps completely get rid of it - Don't use DynamicClassAttribute for member.name and .value (~10x speed boost) - Think of faster handling of the _missing_ hook (~2x speed boost) - Make other improvements to the code The proposed changes don't require changing public API or behaviour.
So far I have been able to implement almost all of the things proposed here and will be happy to make a PR. ---------- components: Library (Lib) files: benchmark_result.txt messages: 358694 nosy: MrMrRobat priority: normal severity: normal status: open title: Increase Enum performance type: performance versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48793/benchmark_result.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 00:32:00 2019 From: report at bugs.python.org (Jason R. Coombs) Date: Fri, 20 Dec 2019 05:32:00 +0000 Subject: [New-bugs-announce] [issue39103] [linux] strftime renders %Y with only 3 characters Message-ID: <1576819920.4.0.445555745721.issue39103@roundup.psfhosted.org> New submission from Jason R. Coombs : On Python 3.8, there's a difference between how datetime.datetime.strftime renders %Y for years < 1000 between Linux and other platforms. # Linux $ docker run -it python python -c 'import datetime; print(datetime.date(900,1,1).strftime("%Y"))' 900 # macOS $ python -c 'import datetime; print(datetime.date(900,1,1).strftime("%Y"))' 0900 According to the docs (https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior), one should expect `'0000'` for year zero and so I'd expect `'0900'` for the year 900, so the macOS behavior looks correct to me.
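The discrepancy in issue39103 comes down to the platform's strftime(3); a small sketch showing the platform-dependent result alongside a portable way to get the zero-padded year regardless of libc:

```python
import datetime

d = datetime.date(900, 1, 1)
print(d.strftime("%Y"))  # '900' on glibc-based Linux, '0900' on macOS
print(f"{d.year:04d}")   # portable zero-padded year: always '0900'
```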
---------- components: Library (Lib) messages: 358695 nosy: jaraco priority: normal severity: normal status: open title: [linux] strftime renders %Y with only 3 characters versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 01:52:22 2019 From: report at bugs.python.org (Thomas Moreau) Date: Fri, 20 Dec 2019 06:52:22 +0000 Subject: [New-bugs-announce] [issue39104] ProcessPoolExecutor hangs on shutdown nowait with pickling failure Message-ID: <1576824742.02.0.740148547005.issue39104@roundup.psfhosted.org> New submission from Thomas Moreau : The attached scripts hangs on python3.7+. This is due to the fact that the main process closes the communication channels directly while the queue_management_thread might still use them. To prevent that, all the closing should be handled by the queue_management_thread. ---------- components: Library (Lib) files: main.py messages: 358697 nosy: tomMoral priority: normal severity: normal status: open title: ProcessPoolExecutor hangs on shutdown nowait with pickling failure versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48794/main.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 02:17:39 2019 From: report at bugs.python.org (Printer offline) Date: Fri, 20 Dec 2019 07:17:39 +0000 Subject: [New-bugs-announce] [issue39105] Printer offline Message-ID: <1576826259.24.0.691283351338.issue39105@roundup.psfhosted.org> New submission from Printer offline : This blog has increased the level of information sharing. I just simply loved this blog. It has helped me a lot. A great and learning blog. I am very satisfied after reading this. 
https://www.printer-offline.com/ ---------- components: asyncio files: Roko-Logo-2-fi18353094x260.png messages: 358698 nosy: Printer offline, asvetlov, yselivanov priority: normal severity: normal status: open title: Printer offline type: resource usage versions: Python 3.8 Added file: https://bugs.python.org/file48795/Roko-Logo-2-fi18353094x260.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 03:56:09 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 20 Dec 2019 08:56:09 +0000 Subject: [New-bugs-announce] [issue39106] Add suggestions to argparse error message output for unrecognized arguments Message-ID: <1576832169.43.0.680374670537.issue39106@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I came across this idea while working on error messages for click at https://github.com/pallets/click/issues/1446. Currently, for unknown arguments which could in some cases be typos, argparse throws an error but doesn't make any suggestions. It could use some heuristic to suggest matches. The unrecognized-arguments error prints all unrecognized arguments, so in that case it would be less useful to mix in suggestions; it can be helpful for single-argument usages. argparse is performance sensitive since it's used in CLI environments, so I feel a simple match to produce suggestions is a worthwhile tradeoff for a good user experience.
# ssl_helper.py import argparse parser = argparse.ArgumentParser() parser.add_argument('--include-ssl', action='store_true') namespace = parser.parse_args() No suggestions are included currently $ python3.8 ssl_helper.py --include-ssll usage: ssl_helper.py [-h] [--include-ssl] ssl_helper.py: error: unrecognized arguments: --include-ssll Include suggestions based when one of the option starts with the argument supplied similar to click $ ./python.exe ssl_helper.py --include-ssll usage: ssl_helper.py [-h] [--include-ssl] ssl_helper.py: error: unrecognized argument: --include-ssll . Did you mean --include-ssl? difflib.get_close_matches could also provide better suggestions in some cases but comes at import cost and could be imported only during error messages as proposed in the click issue ./python.exe ssl_helper.py --exclude-ssl usage: ssl_helper.py [-h] [--include-ssl] ssl_helper.py: error: unrecognized argument: --exclude-ssl . Did you mean --include-ssl? Attached is a simple patch of the implementation with startswith which is more simple and difflib.get_close_matches diff --git Lib/argparse.py Lib/argparse.py index 5d3ce2ad70..e10a4f0c9b 100644 --- Lib/argparse.py +++ Lib/argparse.py @@ -1818,8 +1818,29 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer): def parse_args(self, args=None, namespace=None): args, argv = self.parse_known_args(args, namespace) if argv: - msg = _('unrecognized arguments: %s') - self.error(msg % ' '.join(argv)) + suggestion = None + if len(argv) == 1: + argument = argv[0] + + # simple startswith + for option in self._option_string_actions: + if argument.startswith(option): + suggestion = option + break + + # difflib impl + import difflib + try: + suggestion = difflib.get_close_matches(argv[0], self._option_string_actions, n=1)[0] + except IndexError: + pass + + if suggestion: + msg = _('unrecognized argument: %s . 
Did you mean %s?') + self.error(msg % (' '.join(argv), suggestion)) + else: + msg = _('unrecognized arguments: %s') + self.error(msg % ' '.join(argv)) return args def parse_known_args(self, args=None, namespace=None): ---------- components: Library (Lib) messages: 358699 nosy: paul.j3, rhettinger, xtreak priority: normal severity: normal status: open title: Add suggestions to argparse error message output for unrecognized arguments type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 05:42:20 2019 From: report at bugs.python.org (Aivar Annamaa) Date: Fri, 20 Dec 2019 10:42:20 +0000 Subject: [New-bugs-announce] [issue39107] Consider upgrading Tkinter to Tk 8.6.10 Message-ID: <1576838540.23.0.689551930127.issue39107@roundup.psfhosted.org> New submission from Aivar Annamaa : It includes several Mac-related enhancements https://sourceforge.net/projects/tcl/files/Tcl/8.6.10/tcltk-release-notes-8.6.10.txt/view ---------- components: Tkinter messages: 358702 nosy: Aivar.Annamaa priority: normal severity: normal status: open title: Consider upgrading Tkinter to Tk 8.6.10 type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 06:45:13 2019 From: report at bugs.python.org (Charles Newey) Date: Fri, 20 Dec 2019 11:45:13 +0000 Subject: [New-bugs-announce] [issue39108] Documentation for "random.gauss" vs "random.normalvariate" is lacking Message-ID: <1576842313.59.0.109867222427.issue39108@roundup.psfhosted.org> New submission from Charles Newey : The Python 3 documentation for the "random" module mentions two possible ways to generate a random variate drawn from a normal distribution - "random.gauss" and "random.normalvariate" (see: https://docs.python.org/3/library/random.html#random.gauss). 
It's not clear what the distinction is other than apparently the "random.gauss" function is faster. Digging through the source code, it eventually becomes apparent that "random.gauss" is NOT thread safe... but this isn't mentioned in the documentation anywhere. Further, the documentation doesn't make explicit reference to the particular method used for generating these Gaussian variates. Basically what I'm getting at is that it's difficult to tell which function ("gauss" or "normalvariate") I should be using. I feel that the documentation could be clarified here. I'm happy to do this in a PR at some point if required. ---------- assignee: docs at python components: Documentation messages: 358703 nosy: cnewey, docs at python priority: normal severity: normal status: open title: Documentation for "random.gauss" vs "random.normalvariate" is lacking type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 07:33:15 2019 From: report at bugs.python.org (Yannick) Date: Fri, 20 Dec 2019 12:33:15 +0000 Subject: [New-bugs-announce] [issue39109] [C-API] PyUnicode_FromString Message-ID: <1576845195.58.0.163622932603.issue39109@roundup.psfhosted.org> New submission from Yannick : Python version: 3.5 Tested with Visual Studio 2017 in a C-API extension. When you have a UTF-8 encoded char buffer which is filled with a "0" or empty, and you use the PyUnicode_FromString() method on this buffer, you will get a PyObject*. The content looks good, but the reference counter looks strange. In case of a "0" char in the buffer, the ob_refcnt field is set to 100 and in case of an empty buffer, the ob_refcnt field is set to something around 9xx.
Example Code: string s1 = u8""; string s2 = u8"0"; PyObject *o1 = PyUnicode_FromString(s1.c_str()); //o1->ob_refcnt = 9xx PyObject *o2 = PyUnicode_FromString(s2.c_str()); //o2->ob_refcnt = 100 I think the ob_refcnt Field should be 1 in both cases. Or why is the refcnt here so high? ---------- components: C API messages: 358706 nosy: YannickSchmetz priority: normal severity: normal status: open title: [C-API] PyUnicode_FromString type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 10:00:48 2019 From: report at bugs.python.org (ctarn) Date: Fri, 20 Dec 2019 15:00:48 +0000 Subject: [New-bugs-announce] [issue39110] It Message-ID: <1576854048.12.0.922057499656.issue39110@roundup.psfhosted.org> Change by ctarn : ---------- files: bug.py nosy: ctarn priority: normal severity: normal status: open title: It type: behavior Added file: https://bugs.python.org/file48796/bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 11:42:56 2019 From: report at bugs.python.org (Murali Ganapathy) Date: Fri, 20 Dec 2019 16:42:56 +0000 Subject: [New-bugs-announce] [issue39111] Misleading documentation Message-ID: <1576860176.18.0.619926767128.issue39111@roundup.psfhosted.org> New submission from Murali Ganapathy : The documentation at https://docs.python.org/3.6/library/constants.html#NotImplemented states If all attempts return NotImplemented, the interpreter will raise an appropriate exception. However this is not true for __eq__. 
=== class Foo: def __eq__(self, other): return NotImplemented Foo() == Foo() # returns False, does not throw an exception ==== ---------- assignee: docs at python components: Documentation messages: 358719 nosy: docs at python, murali priority: normal severity: normal status: open title: Misleading documentation versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 20:28:19 2019 From: report at bugs.python.org (Simon Berens) Date: Sat, 21 Dec 2019 01:28:19 +0000 Subject: [New-bugs-announce] [issue39112] Misleading documentation for tuple Message-ID: <1576891699.24.0.656089355811.issue39112@roundup.psfhosted.org> New submission from Simon Berens : Sorry if this is a silly question (my first bug report), but it seems that https://docs.python.org/3/library/functions.html#func-tuple should say "class tuple" instead of just "tuple", as list, dict, and set do. 
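The behaviour reported in issue39111 above is explained by a final fallback that == and != have but the ordering operators lack: when both operands return NotImplemented, equality falls back to an identity comparison instead of raising. A sketch:

```python
class Foo:
    def __eq__(self, other):
        return NotImplemented

a, b = Foo(), Foo()
print(a == b)  # False: both sides return NotImplemented, == falls back to 'is'
print(a != b)  # True: the same fallback, negated
try:
    a < b      # ordering has no fallback
except TypeError:
    print("TypeError")  # raised as the docs describe
```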
---------- assignee: docs at python components: Documentation messages: 358753 nosy: docs at python, sberens priority: normal severity: normal status: open title: Misleading documentation for tuple type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 20 22:32:54 2019 From: report at bugs.python.org (william.ayd) Date: Sat, 21 Dec 2019 03:32:54 +0000 Subject: [New-bugs-announce] [issue39113] PyUnicode_AsUTF8AndSize Sometimes Segfaults With Incomplete Surrogate Pair Message-ID: <1576899174.72.0.883256961456.issue39113@roundup.psfhosted.org> New submission from william.ayd : With the attached extension module, if I run the following in the REPL: >>> import libtest >>> >>> libtest.error_if_not_utf8("foo") 'foo' >>> libtest.error_if_not_utf8("\ud83d") Traceback (most recent call last): File "", line 1, in UnicodeEncodeError: 'utf-8' codec can't encode character '\ud83d' in position 0: surrogates not allowed >>> libtest.error_if_not_utf8("foo") 'foo' Things seem OK. But the next invocation of >>> libtest.error_if_not_utf8("\ud83d") Then causes a segfault. 
Note that the order of the input seems important; simply repeating the call with the invalid surrogate doesn't cause the segfault ---------- files: testmodule.c messages: 358755 nosy: william.ayd priority: normal severity: normal status: open title: PyUnicode_AsUTF8AndSize Sometimes Segfaults With Incomplete Surrogate Pair Added file: https://bugs.python.org/file48798/testmodule.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 21 08:40:45 2019 From: report at bugs.python.org (Ned Batchelder) Date: Sat, 21 Dec 2019 13:40:45 +0000 Subject: [New-bugs-announce] [issue39114] Python 3.9.0a2 changed how finally/return is traced Message-ID: <1576935645.82.0.331239443144.issue39114@roundup.psfhosted.org> New submission from Ned Batchelder : The way trace function reports return-finally has changed in Python 3.9.0a2. I don't know if this change is intentional or not. (BTW: I want to put a 3.9regression keyword on this, but it doesn't exist.) Consider this code: --- 8< ---------------------------------------------------- import linecache, sys def trace(frame, event, arg): lineno = frame.f_lineno print("{} {}: {}".format(event[:4], lineno, linecache.getline(__file__, lineno).rstrip())) return trace print(sys.version) sys.settrace(trace) a = [] def finally_return(): try: return 14 finally: a.append(16) assert finally_return() == 14 assert a == [16] --- 8< ---------------------------------------------------- (My habit is to use line numbers in the lines themselves to help keep things straight.) In Python 3.7 (and before), the last traces are line 14, line 16, return 16. In Python 3.8, the last traces are line 14, line 16, line 14, return 14. In Python 3.9a1, the traces are the same as 3.8. In Python 3.9a2, the traces are now line 14, line 16, line 14, line 16, return 16. This doesn't make sense to me: why does it bounce back and forth? 
Full output from different versions of Python: % /usr/local/pythonz/pythons/CPython-3.7.1/bin/python3.7 bpo.py 3.7.1 (default, Oct 20 2018, 18:25:32) [Clang 10.0.0 (clang-1000.11.45.2)] call 12: def finally_return(): line 13: try: line 14: return 14 line 16: a.append(16) retu 16: a.append(16) % /usr/local/pythonz/pythons/CPython-3.8.1/bin/python3.8 bpo.py 3.8.1 (default, Dec 19 2019, 08:38:38) [Clang 10.0.0 (clang-1000.10.44.4)] call 12: def finally_return(): line 13: try: line 14: return 14 line 16: a.append(16) line 14: return 14 retu 14: return 14 % /usr/local/pythonz/pythons/CPython-3.9.0a1/bin/python3.9 bpo.py 3.9.0a1 (default, Nov 20 2019, 18:52:14) [Clang 10.0.0 (clang-1000.10.44.4)] call 12: def finally_return(): line 13: try: line 14: return 14 line 16: a.append(16) line 14: return 14 retu 14: return 14 % /usr/local/pythonz/pythons/CPython-3.9.0a2/bin/python3.9 bpo.py 3.9.0a2 (default, Dec 19 2019, 08:42:29) [Clang 10.0.0 (clang-1000.10.44.4)] call 12: def finally_return(): line 13: try: line 14: return 14 line 16: a.append(16) line 14: return 14 line 16: a.append(16) retu 16: a.append(16) ---------- messages: 358771 nosy: nedbat priority: normal severity: normal status: open title: Python 3.9.0a2 changed how finally/return is traced type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 21 09:14:39 2019 From: report at bugs.python.org (Josh de Kock) Date: Sat, 21 Dec 2019 14:14:39 +0000 Subject: [New-bugs-announce] [issue39115] Clarify Python MIME type Message-ID: <1576937679.64.0.394621240723.issue39115@roundup.psfhosted.org> New submission from Josh de Kock : I'd like to add Python's MIME types to a database so that they can be properly used by other services. However, apart from the source code[1][2], I can't find any documentation from Python which highlights this. 
Is the source code correct here, and can it be taken as the 'official' MIME types from the project? text/x-python py application/x-python-code pyc pyo [1]: https://github.com/python/cpython/blob/19a3d873005e5730eeabdc394c961e93f2ec02f0/Lib/mimetypes.py#L457 [2]: https://github.com/python/cpython/blob/19a3d873005e5730eeabdc394c961e93f2ec02f0/Lib/mimetypes.py#L528 ---------- assignee: docs at python components: Documentation messages: 358772 nosy: docs at python, jdek priority: normal severity: normal status: open title: Clarify Python MIME type type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 21 11:04:36 2019 From: report at bugs.python.org (twisteroid ambassador) Date: Sat, 21 Dec 2019 16:04:36 +0000 Subject: [New-bugs-announce] [issue39116] StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop Message-ID: <1576944276.46.0.229840779815.issue39116@roundup.psfhosted.org> New submission from twisteroid ambassador : I have been getting these strange exceptions since Python 3.8 on my Windows 10 machine. The external symptoms are many errors like "RuntimeError: aclose(): asynchronous generator is already running" and "Task was destroyed but it is pending!". By adding try..except..logging around my code, I found that my StreamReaders would raise GeneratorExit on readexactly(). Digging deeper, it seems like the following line in StreamReader._wait_for_data(): await self._waiter would raise a GeneratorExit. There are only two other methods on StreamReader that actually do anything to _waiter, set_exception() and _wakeup_waiter(), but neither of these methods were called before GeneratorExit was raised. In fact, both these methods set self._waiter to None, so normally after _wait_for_data() does "await self._waiter", self._waiter is None. However, after GeneratorExit is raised, I can see that self._waiter is not None.
So it seems the GeneratorExit came from nowhere. I have not been able to reproduce this behavior in other code. This is with Python 3.8.1 on latest Windows 10 1909, using ProactorEventLoop. I don't remember seeing this ever on Python 3.7. ---------- components: asyncio messages: 358774 nosy: asvetlov, twisteroid ambassador, yselivanov priority: normal severity: normal status: open title: StreamReader.readexactly() raises GeneratorExit on ProactorEventLoop type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 21 15:13:01 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 21 Dec 2019 20:13:01 +0000 Subject: [New-bugs-announce] [issue39117] Performance regression for making bound methods Message-ID: <1576959181.43.0.150480407259.issue39117@roundup.psfhosted.org> New submission from Raymond Hettinger : $ python3.9 -m timeit -r 11 -s 'class A: pass' -s 'A.m = lambda s: None' -s 'a = A()' 'a.m; a.m; a.m; a.m; a.m' 1000000 loops, best of 11: 230 nsec per loop $ python3.8 -m timeit -r 11 -s 'class A: pass' -s 'A.m = lambda s: None' -s 'a = A()' 'a.m; a.m; a.m; a.m; a.m' 2000000 loops, best of 11: 149 nsec per loop $ python3.7 -m timeit -r 11 -s 'class A: pass' -s 'A.m = lambda s: None' -s 'a = A()' 'a.m; a.m; a.m; a.m; a.m' 2000000 loops, best of 11: 159 nsec per loop $ python3.6 -m timeit -r 11 -s 'class A: pass' -s 'A.m = lambda s: None' -s 'a = A()' 'a.m; a.m; a.m; a.m; a.m' 10000000 loops, best of 11: 0.159 usec per loop Timings made using the recent released python.org macOS 64-bit builds. 
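The benchmark from issue39117 can be re-run on any interpreter with a sketch like this (smaller loop counts than the report's `-m timeit` invocation so it finishes quickly; absolute numbers vary by machine and build):

```python
import timeit

# Same setup and statement as the report: five bound-method lookups.
setup = "class A: pass\nA.m = lambda s: None\na = A()"
stmt = "a.m; a.m; a.m; a.m; a.m"
best = min(timeit.repeat(stmt, setup=setup, repeat=3, number=100_000))
print(f"{best / 100_000 * 1e9:.0f} nsec per loop")
```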
---------- components: Interpreter Core messages: 358781 nosy: rhettinger priority: normal severity: normal status: open title: Performance regression for making bound methods type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 22 06:45:39 2019 From: report at bugs.python.org (Leonardo) Date: Sun, 22 Dec 2019 11:45:39 +0000 Subject: [New-bugs-announce] [issue39118] Variables changing values on their own Message-ID: <1577015139.29.0.995275157746.issue39118@roundup.psfhosted.org> New submission from Leonardo : in this code the variable o changes on its own: x=[[-1, 7, 3], [12, 2, -13], [14, 18, -8], [17, 4, -4]] x1=[[-8, -10, 0], [5, 5, 10], [2, -7, 3], [9, -8, -3]] y=[[0,0,0],[0,0,0],[0,0,0],[0,0,0]] k=True f=0 z=[] d=[] while k: print(k) o=x print(o) for i in range(len(x)): for n in range(len(x)): if i!=n: for g in range(3): if x[i][g]>x[n][g]: y[i][g]-=1 if x[i][g] _______________________________________ From report at bugs.python.org Sun Dec 22 11:35:18 2019 From: report at bugs.python.org (Drew DeVault) Date: Sun, 22 Dec 2019 16:35:18 +0000 Subject: [New-bugs-announce] [issue39119] email/_header_value_parser.py:parse_message_id: UnboundLocalError Message-ID: <1577032518.62.0.64044225834.issue39119@roundup.psfhosted.org> New submission from Drew DeVault : File "/usr/lib/python3.8/site-packages/emailthreads/threads.py", line 14, in get_message_by_id if msg["message-id"] == msg_id: File "/usr/lib/python3.8/email/message.py", line 391, in __getitem__ return self.get(name) File "/usr/lib/python3.8/email/message.py", line 471, in get return self.policy.header_fetch_parse(k, v) File "/usr/lib/python3.8/email/policy.py", line 163, in header_fetch_parse return self.header_factory(name, value) File "/usr/lib/python3.8/email/headerregistry.py", line
197, in __new__ cls.parse(value, kwds) File "/usr/lib/python3.8/email/headerregistry.py", line 530, in parse kwds['parse_tree'] = parse_tree = cls.value_parser(value) File "/usr/lib/python3.8/email/_header_value_parser.py", line 2116, in parse_message_id message_id.append(token) ---------- components: Library (Lib) messages: 358794 nosy: ddevault priority: normal severity: normal status: open title: email/_header_value_parser.py:parse_message_id: UnboundLocalError versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 22 12:31:20 2019 From: report at bugs.python.org (Neil Faulkner) Date: Sun, 22 Dec 2019 17:31:20 +0000 Subject: [New-bugs-announce] [issue39120] pyodbc dll load failed Message-ID: <1577035880.74.0.752421559917.issue39120@roundup.psfhosted.org> New submission from Neil Faulkner : Please can someone advise as to the root cause of this error? ---------- components: Library (Lib) files: DLL.PNG messages: 358796 nosy: nf00038 priority: normal severity: normal status: open title: pyodbc dll load failed type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file48800/DLL.PNG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 22 19:39:47 2019 From: report at bugs.python.org (Rob Man) Date: Mon, 23 Dec 2019 00:39:47 +0000 Subject: [New-bugs-announce] [issue39121] gzip header write OS field Message-ID: <1577061587.89.0.561530879196.issue39121@roundup.psfhosted.org> New submission from Rob Man : Files written with the gzip module carry a value of 255 (unknown) at the 10th position in the header, which records which OS was used when the gzip file was written. Files written with the Linux gzip command correctly set that field to the value of 3 (Unix). This enhancement would do the same.
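The OS byte that issue39121 refers to is the 10th byte (offset 9) of the gzip header, per RFC 1952; a sketch showing the value the gzip module currently writes:

```python
import gzip
import io

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(b"hello")
header = buf.getvalue()[:10]
# Offsets 0-1: magic, 2: method, 3: flags, 4-7: mtime, 8: XFL, 9: OS.
print(header[9])  # → 255, i.e. "unknown" rather than 3 ("Unix")
```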
---------- components: Library (Lib) messages: 358801 nosy: wungad priority: normal severity: normal status: open title: gzip header write OS field type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 23 06:10:22 2019 From: report at bugs.python.org (=?utf-8?q?Sara_Mart=C3=ADnez_Giner?=) Date: Mon, 23 Dec 2019 11:10:22 +0000 Subject: [New-bugs-announce] [issue39122] Environment variable PYTHONUSERBASE is not set during customized Python Installation Message-ID: <1577099422.36.0.0774544638319.issue39122@roundup.psfhosted.org> New submission from Sara Martínez Giner : Environment variable PYTHONUSERBASE is not set during customized Python Installation. Python installer 3.7.6(x64) / Windows 10 Check 1: Customize the installation to install Python in C:\Python37 for all users. Result: Access Denied using pip Check 2: Customize the installation to install Python in C:\Program Files\Python37 for all users (default). Result: pip works, but error appears trying to install anything with pip. For example: >>pip install virtualenv WARNING: The script virtualenv.exe is installed in 'C:\Users\XXX\AppData\Roaming\Python\Python37\Scripts' which is not on path By default APPDATA matches with C:\Users\XXX\AppData\Roaming and PYTHONUSERBASE is empty ----------------------------------------------------------------- I've found the path constructor in \Python37\Lib\site.py (_getuserbase) So I tried the following steps: - Create folder with full control in C:\ (C:\Python) - Set environment variable PYTHONUSERBASE=C:\Python - Install Python for all users in C:\Python\Python37 That works for me.
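The user base that the pip warning in issue39122 points at can be inspected and overridden programmatically; a sketch using a hypothetical path, run in a subprocess so the override is picked up at interpreter startup:

```python
import os
import subprocess
import sys

# PYTHONUSERBASE, when set, is returned verbatim by site.getuserbase().
env = dict(os.environ, PYTHONUSERBASE="/tmp/custom-base")  # hypothetical path
out = subprocess.run(
    [sys.executable, "-m", "site", "--user-base"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # the overridden user base
```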
---------- components: Windows messages: 358808 nosy: paul.moore, sarmar11, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Environment variable PYTHONUSERBASE is not set during customized Python Installation type: security versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 23 09:23:44 2019 From: report at bugs.python.org (Vadim Zeitlin) Date: Mon, 23 Dec 2019 14:23:44 +0000 Subject: [New-bugs-announce] [issue39123] PyThread_xxx() not available when using limited API Message-ID: <1577111024.93.0.368390768924.issue39123@roundup.psfhosted.org> New submission from Vadim Zeitlin : These functions (e.g. PyThread_allocate_lock() etc) are not declared inside #if !defined(Py_LIMITED_API) in pythread.h, yet they're not exported from python3.lib. IMHO, ideal would be to just provide these functions in the library, as they exist since basically always, but if the intention is to not make them part of the limited API, a guard around their declarations in the header should be added so that using them at least results in link-time errors instead of compile-time ones when using limited API. 
---------- components: C API messages: 358813 nosy: VZ priority: normal severity: normal status: open title: PyThread_xxx() not available when using limited API type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 23 09:34:58 2019 From: report at bugs.python.org (Adelson Luiz de Lima) Date: Mon, 23 Dec 2019 14:34:58 +0000 Subject: [New-bugs-announce] [issue39124] round Decimal error Message-ID: <1577111698.59.0.695595778809.issue39124@roundup.psfhosted.org> New submission from Adelson Luiz de Lima : When I round this: round(Decimal('9.925'), 2), in Python 3.7.5 the result is Decimal('9.92'), but in Python 2.7.17 it is 9.93 ---------- messages: 358814 nosy: adelsonllima priority: normal severity: normal status: open title: round Decimal error type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 23 10:40:00 2019 From: report at bugs.python.org (=?utf-8?q?Nguy=E1=BB=85n_Gia_Phong?=) Date: Mon, 23 Dec 2019 15:40:00 +0000 Subject: [New-bugs-announce] [issue39125] Type signature of @property not shown in help() Message-ID: <1577115600.68.0.0436692507171.issue39125@roundup.psfhosted.org> New submission from Nguyễn Gia Phong : Dear Maintainer, I want to request a feature regarding generated documentation for type hints. As of December 2019, I believe there is no support for generating such information in help().
For demonstration, I have this tiny piece of code:

    class Foo:
        @property
        def bar(self) -> int:
            return 42

        @bar.setter
        def bar(self, value: int) -> None:
            pass

        def baz(self, arg: float) -> str:
            pass

whose documentation on CPython 3.7.5 (on Debian testing amd64, if that matters) is generated as:

    class Foo(builtins.object)
     |  Methods defined here:
     |
     |  baz(self, arg: float) -> str
     |
     |  ----------------------------------------------------------------------
     |  Data descriptors defined here:
     |
     |  __dict__
     |      dictionary for instance variables (if defined)
     |
     |  __weakref__
     |      list of weak references to the object (if defined)
     |
     |  bar

I expect the documentation for bar to be as informative as baz's, i.e. something similar to ``bar: int''. As pointed out by ChrisWarrick on freenode#python, the annotations are already present, yet help() is not making use of them:

    >>> Foo.bar.fget.__annotations__
    {'return': <class 'int'>}
    >>> Foo.bar.fset.__annotations__
    {'value': <class 'int'>, 'return': None}

Have a Merry Christmas or other holiday of your choice, Nguyễn Gia Phong ---------- assignee: docs at python components: Documentation messages: 358823 nosy: McSinyx, docs at python priority: normal severity: normal status: open title: Type signature of @property not shown in help() type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 23 16:11:54 2019 From: report at bugs.python.org (dmaxime) Date: Mon, 23 Dec 2019 21:11:54 +0000 Subject: [New-bugs-announce] [issue39126] Some characters confuse the editor Message-ID: <1577135514.42.0.144036776783.issue39126@roundup.psfhosted.org> New submission from dmaxime : >>> b'\xf0\x9f\x98\x86'.decode('utf8') '😆'
>>> '😆'.encode('utf8') b'\xf0\x9f\x98\x86' ...now if you write '😆'.encode() then you move the cursor between the brackets and type "'utf8'" you will have this result while the cursor remains in the brackets: >>> '😆'.encode()''8ftu SyntaxError: invalid syntax >>> I've attached a video that shows this behavior. Thanks for your attention. Cheers. ---------- assignee: terry.reedy components: IDLE files: python3ide bug.mp4 messages: 358836 nosy: dmaxime, terry.reedy priority: normal severity: normal status: open title: Some characters confuse the editor type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48801/python3ide bug.mp4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 23 23:43:00 2019 From: report at bugs.python.org (Andy Lester) Date: Tue, 24 Dec 2019 04:43:00 +0000 Subject: [New-bugs-announce] [issue39127] _Py_HashPointer's void * argument should be const Message-ID: <1577162580.85.0.922460323342.issue39127@roundup.psfhosted.org> New submission from Andy Lester : _Py_HashPointer in Python/pyhash.c takes a pointer argument that can be made const. This will let the compiler and static analyzers know that the pointer's target is not modified. You can also change calls to _Py_HashPointer that are down-casting pointers. For example, in meth_hash in Objects/methodobject.c, this call can have the void * changed to const void *.
y = _Py_HashPointer((void*)(a->m_ml->ml_meth)); ---------- components: Interpreter Core messages: 358839 nosy: petdance priority: normal severity: normal status: open title: _Py_HashPointer's void * argument should be const _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 24 01:38:19 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Tue, 24 Dec 2019 06:38:19 +0000 Subject: [New-bugs-announce] [issue39128] Document happy eyeball parameters in loop.create_connection signature docs Message-ID: <1577169499.92.0.148035578334.issue39128@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Created from https://github.com/aio-libs/aiohttp/issues/4451. happy_eyeballs_delay and interleave are not documented in the signature at [0], though the parameters are explained below it. Andrew, feel free to update if there is any additional action needed besides the signature update. I guess it could be tagged as a good newcomer-friendly issue.
[0] https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.create_connection ---------- assignee: docs at python components: Documentation, asyncio messages: 358840 nosy: asvetlov, docs at python, xtreak, yselivanov priority: normal severity: normal status: open title: Document happy eyeball parameters in loop.create_connection signature docs type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 24 02:04:13 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Tue, 24 Dec 2019 07:04:13 +0000 Subject: [New-bugs-announce] [issue39129] Incorrect import of TimeoutError while creating happy eyeballs connection Message-ID: <1577171053.02.0.187675761973.issue39129@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I guess the TimeoutError exception needs to be imported from asyncio.exceptions and not from asyncio.futures; the current import causes an AttributeError while establishing a connection with happy eyeballs. ./python.exe -m asyncio asyncio REPL 3.9.0a2+ (heads/master:068768faf6, Dec 23 2019, 18:35:26) [Clang 10.0.1 (clang-1001.0.46.4)] on darwin Use "await" directly instead of "asyncio.run()". Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> conn = await asyncio.open_connection("localhost", port=8000, happy_eyeballs_delay=1)
Traceback (most recent call last):
  File "/Users/kasingar/stuff/python/cpython/Lib/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/Users/kasingar/stuff/python/cpython/Lib/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "<stdin>", line 1, in <module>
  File "/Users/kasingar/stuff/python/cpython/Lib/asyncio/streams.py", line 52, in open_connection
    transport, _ = await loop.create_connection(
  File "/Users/kasingar/stuff/python/cpython/Lib/asyncio/base_events.py", line 1041, in create_connection
    sock, _, _ = await staggered.staggered_race(
  File "/Users/kasingar/stuff/python/cpython/Lib/asyncio/staggered.py", line 144, in staggered_race
    raise d.exception()
  File "/Users/kasingar/stuff/python/cpython/Lib/asyncio/staggered.py", line 86, in run_one_coro
    with contextlib.suppress(futures.TimeoutError):
AttributeError: module 'asyncio.futures' has no attribute 'TimeoutError'
---------- components: asyncio messages: 358841 nosy: asvetlov, xtreak, yselivanov priority: normal severity: normal status: open title: Incorrect import of TimeoutError while creating happy eyeballs connection type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 24 08:30:50 2019 From: report at bugs.python.org (Khalid Mammadov) Date: Tue, 24 Dec 2019 13:30:50 +0000 Subject: [New-bugs-announce] [issue39130] Dict is reversable from v3.8 and should say that in the doc Message-ID: <1577194250.11.0.350136379073.issue39130@roundup.psfhosted.org> Change by Khalid Mammadov : ---------- assignee: docs at python components: Documentation nosy: docs at python, khalidmammadov priority: normal severity: normal status: open title: Dict is reversable from v3.8 and should say that in the doc type: enhancement versions: Python 3.8
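[Editor's note: the behavior the report above asks to be documented — dicts preserve insertion order and, from 3.8, support reversed() on the dict and its views — can be illustrated briefly:]

```python
# As of Python 3.8, dict and its view objects support reversed(),
# iterating keys/items in reverse insertion order.
d = {"a": 1, "b": 2, "c": 3}
print(list(reversed(d)))          # ['c', 'b', 'a']
print(list(reversed(d.items())))  # [('c', 3), ('b', 2), ('a', 1)]
```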
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 24 10:24:15 2019 From: report at bugs.python.org (Jasper Spaans) Date: Tue, 24 Dec 2019 15:24:15 +0000 Subject: [New-bugs-announce] [issue39131] signing needs two serialisation passes Message-ID: <1577201055.68.0.886293230442.issue39131@roundup.psfhosted.org> New submission from Jasper Spaans : When creating multipart/signed messages, this currently requires two serialisation passes: once to extract the flattened contents to be signed, and once to actually serialise the message. The PR this ticket will be linked to contains a new class, MIMEMultipartSigned, which can be instantiated with a signer function that can perform the signing while serialising, reducing this to a single pass. Besides, this ensures that the signed contents cannot be changed between signing and output. Patch is against py3.8 ---------- components: email messages: 358849 nosy: barry, jap, r.david.murray priority: normal severity: normal status: open title: signing needs two serialisation passes type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 24 10:24:49 2019 From: report at bugs.python.org (Batuhan) Date: Tue, 24 Dec 2019 15:24:49 +0000 Subject: [New-bugs-announce] [issue39132] Adding funcitonality to determine if a constant string node is triple quoted Message-ID: <1577201089.29.0.516911847589.issue39132@roundup.psfhosted.org> New submission from Batuhan : I was working on a precedence system for the AST unparser, and @pablogsal said there is an issue with docstrings (they were printed on the same line even if they were, say, 1000 chars long). Determining whether something is triple-quoted is simple with the precedence system, but I think it would be better if we could do it in a simpler, public way.
What I am thinking is using col_offset and end_col_offset information to determine whether a string is triple-quoted, with a function called `ast.is_triple_quoted` or `ast.get_quote_count`, etc. I can submit a patch if this is wanted. I don't think this is big enough to discuss on python-ideas, but if it is, then I'm happy to move the discussion over there. ---------- components: Library (Lib) messages: 358850 nosy: BTaskaya, pablogsal priority: normal severity: normal status: open title: Adding funcitonality to determine if a constant string node is triple quoted type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 25 12:41:25 2019 From: report at bugs.python.org (Abhijeet) Date: Wed, 25 Dec 2019 17:41:25 +0000 Subject: [New-bugs-announce] [issue39133] threading lib. working improperly on idle window Message-ID: <1577295685.26.0.995462727475.issue39133@roundup.psfhosted.org> New submission from Abhijeet : The threading library's Thread and start functions do not show the predicted result on IDLE. In particular, the newline between the output of two threads is sometimes missing and sometimes present. Often, running of the code stops after a thread ends, i.e. no further execution. Version - 3.8.0 exe installer ---------- assignee: terry.reedy components: IDLE messages: 358870 nosy: Pyjeet, terry.reedy priority: normal severity: normal status: open title: threading lib.
working improperly on idle window type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 25 19:05:40 2019 From: report at bugs.python.org (Alexander Hirner) Date: Thu, 26 Dec 2019 00:05:40 +0000 Subject: [New-bugs-announce] [issue39134] can't construct dataclass as ABC (or runtime check as data protocol) Message-ID: <1577318740.46.0.10314451567.issue39134@roundup.psfhosted.org> New submission from Alexander Hirner : At runtime, we want to check whether objects adhere to a data protocol. This is not possible due to problematic interactions between ABC and @dataclass. The attached file tests all relevant yet impossible cases. Those are: 1) A(object): Can't check due to "Protocols with non-method members don't support issubclass()" (as outlined in PEP 544) 2) B(ABC): "Can't instantiate abstract class B with abstract methods x, y" 3) C(Protocol): same as A, or same as B if @property is @abstractmethod The problem can be solved in two parts. First, allowing a dataclass to implement an abstract property (B). This doesn't involve typing and enables the expected use case of dataclass+ABC. I analysed this problem as follows: Abstract properties evaluate to a default of property, not to dataclasses.MISSING. Hence, `dataclasses._init_fn` throws TypeError because of deriving from class vars without defaults. Second, eliding the exception of @runtime_checkable Protocols with non-method members if and only if the protocol is in its MRO. I didn't think that through fully, but instantiation could e.g. fail for missing implementations as expected from ABC behaviour (see case D in attached file). I'm not sure about the runtime overhead of this suggestion.
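[Editor's note: case (2) above can be reproduced without the attached file. This is a sketch of the interaction (class names are illustrative, not from the report): the dataclass field annotation never overrides the inherited abstract property, so the subclass stays abstract and instantiation fails.]

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

class HasX(ABC):
    @property
    @abstractmethod
    def x(self) -> int: ...

@dataclass
class B(HasX):
    # The annotation creates a dataclass field (with the inherited property
    # object picked up as its default), but sets no class attribute that
    # would clear __abstractmethods__, so B remains abstract.
    x: int

try:
    B(42)
except TypeError as exc:
    print(exc)  # e.g. "Can't instantiate abstract class B ..."
```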
---------- files: dc_repro.py messages: 358876 nosy: cybertreiber, eric.smith priority: normal severity: normal status: open title: can't construct dataclass as ABC (or runtime check as data protocol) type: behavior versions: Python 3.6, Python 3.8 Added file: https://bugs.python.org/file48802/dc_repro.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 26 02:12:12 2019 From: report at bugs.python.org (Michael Wayne Goodman) Date: Thu, 26 Dec 2019 07:12:12 +0000 Subject: [New-bugs-announce] [issue39135] time.get_clock_info() documentation still has 'clock' name Message-ID: <1577344332.83.0.812432046246.issue39135@roundup.psfhosted.org> New submission from Michael Wayne Goodman : The documentation for Python 3.8 and higher still refers to 'clock' as an accepted 'name' argument for time.get_clock_info(), corresponding to time.clock(), despite time.clock() having been removed in Python 3.8. See the first bullet point in the function documentation: https://docs.python.org/3.8/library/time.html#time.get_clock_info In Python 3.8, calling time.get_clock_info('clock') raises "ValueError: unknown clock", so it seems the bug is only in the documentation.
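[Editor's note: the stale bullet is easy to confirm. On 3.8+ only the remaining clock names are accepted:]

```python
import time

# Valid names on 3.8+ include 'monotonic', 'perf_counter',
# 'process_time', 'thread_time' and 'time'; 'clock' is gone.
info = time.get_clock_info('monotonic')
print(info.monotonic, info.implementation)

try:
    time.get_clock_info('clock')
except ValueError as exc:
    print(exc)  # unknown clock
```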
---------- assignee: docs at python components: Documentation messages: 358879 nosy: docs at python, goodmami priority: normal severity: normal status: open title: time.get_clock_info() documentation still has 'clock' name versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 26 03:20:03 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Thu, 26 Dec 2019 08:20:03 +0000 Subject: [New-bugs-announce] [issue39136] Typos in whatsnew file and docs Message-ID: <1577348403.57.0.3073501707.issue39136@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Based on https://github.com/python/cpython/pull/17665#pullrequestreview-335610849. Grepping for the words might help fix multiple instances of the typos. Typos in the whatsnew document: asolute -> absolute happend -> happened Excape -> Escape Doc typos: defintitions -> definitions follwing -> following necesarily -> necessarily configuraton -> configuration focusses -> focuses funtion -> function ---------- assignee: docs at python components: Documentation keywords: easy, newcomer friendly messages: 358882 nosy: docs at python, xtreak priority: normal severity: normal status: open title: Typos in whatsnew file and docs type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 26 05:54:39 2019 From: report at bugs.python.org (Christoph Reiter) Date: Thu, 26 Dec 2019 10:54:39 +0000 Subject: [New-bugs-announce] [issue39137] create_unicode_buffer() gives different results on Windows vs Linux Message-ID: <1577357679.73.0.864612617674.issue39137@roundup.psfhosted.org> New submission from Christoph Reiter : >>> len(ctypes.create_unicode_buffer("\ud800\udc01", 2)[:]) On Windows: 1 On Linux: 2 Using Python 3.8 on both.
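[Editor's note: a likely explanation, not stated in the report: ctypes unicode buffers are sized and read back in wchar_t units, and wchar_t is a 16-bit UTF-16 code unit on Windows but 32 bits wide on Linux, so a surrogate pair round-trips differently on the two platforms.]

```python
import ctypes

# wchar_t is 2 bytes on Windows (a surrogate pair is re-combined into one
# character when the buffer is read back, hence len 1) and 4 bytes on most
# Unix systems (each lone surrogate stays its own unit, hence len 2).
print(ctypes.sizeof(ctypes.c_wchar))  # 2 on Windows, 4 on most Unix
```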
---------- components: ctypes messages: 358885 nosy: lazka priority: normal severity: normal status: open title: create_unicode_buffer() gives different results on Windows vs Linux versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 26 06:14:01 2019 From: report at bugs.python.org (Yorkie Liu) Date: Thu, 26 Dec 2019 11:14:01 +0000 Subject: [New-bugs-announce] [issue39138] import a pycapsule object that's attached on many modules Message-ID: <1577358841.1.0.469176020389.issue39138@roundup.psfhosted.org> New submission from Yorkie Liu : Currently a PyCapsule's name corresponds to `modulename.attrname`, which requires that it can be imported from that specific module. It would be possible to implement a feature which shares the same capsule object between different modules and supports importing it like this: ``` PyObject *cap = PyCapsule_New(pointer, "foobar", NULL); PyObject_SetAttrString(module1, "foobar", cap); PyObject_SetAttrString(module2, "foobar", cap); PyCapsule_Import("module1.foobar", 0); PyCapsule_Import("module2.foobar", 0); ``` ---------- components: C API messages: 358886 nosy: yorkie priority: normal severity: normal status: open title: import a pycapsule object that's attached on many modules type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 26 11:44:22 2019 From: report at bugs.python.org (Khalid Mammadov) Date: Thu, 26 Dec 2019 16:44:22 +0000 Subject: [New-bugs-announce] [issue39139] Reference to depricated collections.abc class in collections is unnecessary and confusing Message-ID: <1577378662.65.0.232984196995.issue39139@roundup.psfhosted.org> New submission from Khalid Mammadov : "Deprecated since version 3.3, will be removed in version 3.9: Moved Collections Abstract Base Classes to the collections.abc module.
For backwards compatibility, they continue to be visible in this module through Python 3.8." on the overview page is confusing, as the classes are not listed on that page but are explained on the next one. ---------- assignee: docs at python components: Documentation messages: 358889 nosy: docs at python, khalidmammadov priority: normal severity: normal status: open title: Reference to depricated collections.abc class in collections is unnecessary and confusing type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 26 18:39:08 2019 From: report at bugs.python.org (Luca Paganin) Date: Thu, 26 Dec 2019 23:39:08 +0000 Subject: [New-bugs-announce] [issue39140] shutil.move does not work properly with pathlib.Path objects Message-ID: <1577403548.05.0.784515319719.issue39140@roundup.psfhosted.org> New submission from Luca Paganin : Suppose you have two pathlib objects representing the source and destination of a move: src=pathlib.Path("foo/bar/barbar/myfile.txt") dst=pathlib.Path("foodst/bardst/") If you try to do the following shutil.move(src, dst) then an AttributeError will be raised, saying that PosixPath objects do not have an rstrip attribute. The error is the following: Traceback (most recent call last): File "mover.py", line 10, in <module> shutil.move(src, dst) File "/Users/lucapaganin/opt/anaconda3/lib/python3.7/shutil.py", line 562, in move real_dst = os.path.join(dst, _basename(src)) File "/Users/lucapaganin/opt/anaconda3/lib/python3.7/shutil.py", line 526, in _basename return os.path.basename(path.rstrip(sep)) AttributeError: 'PosixPath' object has no attribute 'rstrip' Looking into the shutil code, line 526, I see that the problem happens when you try to strip the trailing slash using rstrip, which is a method for strings, while PosixPath objects do not have it.
Moreover, pathlib.Path objects already handle trailing slashes, correctly getting basenames even when these are present. The following two workarounds work: 1) Explicitly cast both src and dst to strings using shutil.move(str(src), str(dst)). This works whether or not dst contains the destination filename. 2) Add the filename to the end of the PosixPath dst object: dst=pathlib.Path("foodst/bardst/myfile.txt") Then do shutil.move(src, dst) Surely one could use the method pathlib.Path.replace for PosixPath objects, which does the job without problems, even if it requires dst to contain the destination filename at the end, and it lacks generality, since it fails when one tries to move files between different filesystems. I think shutil.move should handle pathlib.Path objects even when one does not provide the destination filename, since the source of the bug is a safety measure which is not necessary for pathlib.Path objects, i.e. the handling of the trailing slash. Do you think that is possible? Thank you in advance. Luca Paganin P.S.: I attach a tarball with the dirtree I used for the demonstration.
---------- files: mover.tgz messages: 358891 nosy: Luca Paganin priority: normal severity: normal status: open title: shutil.move does not work properly with pathlib.Path objects type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48803/mover.tgz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 27 13:15:59 2019 From: report at bugs.python.org (David Turner) Date: Fri, 27 Dec 2019 18:15:59 +0000 Subject: [New-bugs-announce] [issue39141] IDLE Clear function returns 256 on Mac OS Catalina Message-ID: <1577470559.68.0.805755696987.issue39141@roundup.psfhosted.org> New submission from David Turner : Trying to set up shortcut function to clear screen but its not working as expected on my Mac OS Catalina -- below is txt from idle import os >>> cls= lambda: os.system('clear') >>> cls() 256 ---------- messages: 358908 nosy: twister68 at gmail.com priority: normal severity: normal status: open title: IDLE Clear function returns 256 on Mac OS Catalina type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 27 15:33:12 2019 From: report at bugs.python.org (Henrique) Date: Fri, 27 Dec 2019 20:33:12 +0000 Subject: [New-bugs-announce] [issue39142] logging.config.dictConfig will convert namedtuple to ConvertingTuple Message-ID: <1577478792.48.0.935250050519.issue39142@roundup.psfhosted.org> New submission from Henrique : While passing { "version": 1, "disable_existing_loggers": False, "formatters": { "verbose": {"format": "%(levelname)s %(asctime)s %(module)s %(message)s"} }, "handlers": { "stackdriver": { "class": "google.cloud.logging.handlers.CloudLoggingHandler", "client": client, "resource": resource, }, }, "root": {"level": "INFO", "handlers": ["stackdriver"]}, } to logging.config.dictConfig it will convert resource, which is a namedtuple to 
ConvertingTuple; this makes google.cloud.logging.handlers.CloudLoggingHandler break further down the line. I am having to create a wrapper class like

    class Bla:
        resource = logging.resource.Resource(
            type="cloud_run_revision",
            labels={},
        )

        def _to_dict(self):
            return self.resource._to_dict()

to work around this. ---------- messages: 358914 nosy: hcoura priority: normal severity: normal status: open title: logging.config.dictConfig will convert namedtuple to ConvertingTuple type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 27 16:35:04 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 27 Dec 2019 21:35:04 +0000 Subject: [New-bugs-announce] [issue39143] Implementing sub-generation steps in the gc Message-ID: <1577482504.11.0.390249633882.issue39143@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : While I was re-reading The Garbage Collection Handbook [Moss, Hosking, Jones], I have been doing some benchmarks of different aspects of our garbage collection and taking statistics using the pyperformance suite as well as several "production" applications that heavily create objects. I have observed that our collector presents a form of "generational nepotism" in some typical scenarios. The most common reason is that when promotions of the youngest generations happen, some very young objects that just arrived in the generation are promoted because we have reached a threshold, and their death will be delayed. These objects would likely die immediately if the promotion were delayed a bit (think of a burst of temporary objects being created at once) or if the promotion could distinguish "age" with more granularity.
The book proposes several solutions to this problem, and one of the simpler ones, given the architecture of our collector, is to use sub-generation steps: > Promotion can be delayed by structuring a generation into two or more aging spaces. This allows objects to be copied between the fromspace and tospace an arbitrary number of times within the generation before they are promoted. In Lieberman and Hewitt's original generational collector [1983], a generation is collected several times before all survivors are eventually promoted en masse. In terms of the aging semispaces of Figure 9.2b, either all live objects in fromspace are evacuated to tospace within this generation or all are promoted to the next generation, depending on the age of the generation as a whole. Basically, the differences between steps and generations are that both segregate objects by age, but different generations are collected at different frequencies whereas all the steps of a generation are collected at the same time. By using steps in the youngest generation (where most mutation occurs), and by reducing premature collection, the load on the write barrier can be reduced while also controlling promotion, without need for per-object age records. -- What do you think about implementing sub-generation steps? Maybe only in the youngest generation? A "lazy" way of implementing this (although without the semantics of "steps") is adding more intermediate generations with the same threshold (but this likely won't yield the same benefits).
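[Editor's note, for context: the collector discussed above exposes its three generations and their promotion thresholds through the gc module. The proposal would add age "steps" inside the youngest generation rather than new user-facing knobs.]

```python
import gc

# Three generations; an object surviving a collection of its generation is
# promoted to the next one, which is what enables "generational nepotism"
# when a burst of fresh objects is swept up by a threshold-triggered pass.
print(gc.get_threshold())  # allocation thresholds, e.g. (700, 10, 10)
print(gc.get_count())      # current allocations tracked per generation
```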
---------- components: Interpreter Core messages: 358916 nosy: nascheme, pablogsal, tim.peters priority: normal severity: normal status: open title: Implementing sub-generation steps in the gc type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 27 17:10:42 2019 From: report at bugs.python.org (anthony shaw) Date: Fri, 27 Dec 2019 22:10:42 +0000 Subject: [New-bugs-announce] [issue39144] Align ctags and etags targets and include Python stdlib Message-ID: <1577484642.78.0.140620689323.issue39144@roundup.psfhosted.org> New submission from anthony shaw : make tags will include Modules/_ctypes/, whereas make TAGS will not. Also, neither includes the Python source files for the standard library, which both etags and ctags are capable of handling. PR to follow ---------- components: Build messages: 358917 nosy: anthony shaw priority: normal severity: normal status: open title: Align ctags and etags targets and include Python stdlib type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 27 21:21:57 2019 From: report at bugs.python.org (Cyker Way) Date: Sat, 28 Dec 2019 02:21:57 +0000 Subject: [New-bugs-announce] [issue39145] Innocuous parent class changes multiple inheritance MRO Message-ID: <1577499717.77.0.3384206838.issue39145@roundup.psfhosted.org> New submission from Cyker Way : With an inheritance graph like this:

    A   C
    B   D   (X)   A
            E

Adding or removing class X in E's parents will change the order of A and C in E's MRO: EBDAC vs EBDCXA. I couldn't imagine what would be the "perfect" MRO. However, given X is completely independent from A and C, this behavior looks strange and problematic.
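[Editor's note: the flattened diagram above can be reconstructed as follows — this reading is chosen because it reproduces both quoted MROs exactly: B derives from A, D from C, X stands alone, and E lists B, D, optionally X, and A as bases.]

```python
class A: pass
class B(A): pass
class C: pass
class D(C): pass
class X: pass

class E1(B, D, A): pass     # without X among E's bases -> "EBDAC"
class E2(B, D, X, A): pass  # with X inserted           -> "EBDCXA"

print([k.__name__ for k in E1.__mro__])  # ['E1', 'B', 'D', 'A', 'C', 'object']
print([k.__name__ for k in E2.__mro__])  # ['E2', 'B', 'D', 'C', 'X', 'A', 'object']
```

C3 linearization only constrains the relative order of classes along each inheritance chain, so inserting the unrelated X can legitimately flip A and C.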
---------- components: Interpreter Core files: mro.py messages: 358921 nosy: cykerway priority: normal severity: normal status: open title: Innocuous parent class changes multiple inheritance MRO type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48804/mro.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 28 04:20:29 2019 From: report at bugs.python.org (Zhipeng Xie) Date: Sat, 28 Dec 2019 09:20:29 +0000 Subject: [New-bugs-announce] [issue39146] to much memory consumption in re.compile unicode Message-ID: <1577524829.3.0.724432739165.issue39146@roundup.psfhosted.org> New submission from Zhipeng Xie <775350901 at qq.com>: When running the following script, we found Python 2 consumes a lot of memory while Python 3 does not have this issue. import re import time NON_PRINTABLE = re.compile(u'[^\U00010000-\U0010ffff]') time.sleep( 30 ) python2: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6943 root 20 0 109956 93436 3956 S 0.0 1.2 0:00.30 python python3: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6952 root 20 0 28032 8880 4868 S 0.0 0.1 0:00.02 python3 ---------- components: Library (Lib) messages: 358936 nosy: Zhipeng Xie priority: normal severity: normal status: open title: to much memory consumption in re.compile unicode type: resource usage versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 28 04:56:02 2019 From: report at bugs.python.org (Patrick Liu) Date: Sat, 28 Dec 2019 09:56:02 +0000 Subject: [New-bugs-announce] [issue39147] using zipfile with root privilege shows FileNotFoundError Message-ID: <1577526962.17.0.0234769232218.issue39147@roundup.psfhosted.org> New submission from Patrick Liu : When I run the Python script with root privileges, it can clone the repo successfully but shows the error message below.
However, it runs correctly with a normal user. Why can it not find the HTML file? Thanks.

Python 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from git_clone import git_clone
>>> git_clone('https://github.com/android9527/android9527.github.io')
13472.0 KB, 13574 KB/s, 0.99 seconds passed
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/anaconda3/lib/python3.7/site-packages/git_clone/git_clone.py", line 53, in git_clone
    f.extractall(path + '/.')
  File "/root/anaconda3/lib/python3.7/zipfile.py", line 1619, in extractall
    self._extract_member(zipinfo, path, pwd)
  File "/root/anaconda3/lib/python3.7/zipfile.py", line 1673, in _extract_member
    open(targetpath, "wb") as target:
FileNotFoundError: [Errno 2] No such file or directory: '/mnt/fit-Knowledgezoo/test/android9527.github.io-master/2019/01/24/2019-01-24-Java ??? synchronized /index.html'
>>> exit()
---------- messages: 358937 nosy: Patrick Liu priority: normal severity: normal status: open title: using zipfile with root privilege shows FileNotFoundError type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 28 05:55:11 2019 From: report at bugs.python.org (=?utf-8?q?Alex_Gr=C3=B6nholm?=) Date: Sat, 28 Dec 2019 10:55:11 +0000 Subject: [New-bugs-announce] [issue39148] DatagramProtocol + IPv6 does not work with ProactorEventLoop Message-ID: <1577530511.62.0.532247879973.issue39148@roundup.psfhosted.org> New submission from Alex Grönholm : Receiving a UDP datagram using DatagramProtocol on the Proactor event loop results in error_received() being called with WinError 87 (Invalid Parameter). The low-level sock_recv() works fine, but naturally loses the sender address information. The attached script works fine as-is on Linux, and on Windows if ::1 is replaced with 127.0.0.1.
There were extensive tests added for UDP support on IOCP, but unfortunately all of them use only IPv4 sockets so they could not catch this problem. ---------- components: Windows files: udpreceive.py messages: 358940 nosy: alex.gronholm, asvetlov, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: DatagramProtocol + IPv6 does not work with ProactorEventLoop type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48805/udpreceive.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 28 13:31:50 2019 From: report at bugs.python.org (Leonardo Galani) Date: Sat, 28 Dec 2019 18:31:50 +0000 Subject: [New-bugs-announce] [issue39149] False positive using operator 'AND' while checking keys on dict() Message-ID: <1577557910.45.0.43701821857.issue39149@roundup.psfhosted.org> New submission from Leonardo Galani : using Python 3.7.6 (default, Dec 27 2019, 09:51:07) @ macOS dict = { 'a': 1, 'b': 2, 'c': 3 } if you `if 'a' and 'b' and 'c' in dict: print('ok')` you will get a True, since everything is true. if you `if 'a' and 'g' and 'c' in dict: print('ok')` you also get a True because the last statement is True but the middle statement is false. To avoid this false positive, you need to be explicit: `if 'a' in dict and 'g' in dict and 'c' in dict: print('ok')` you will get a false.
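A short illustration (not from the original report) of why the first two checks succeed: `and` short-circuits on truthiness and binds looser than `in`, so only the last operand is actually tested for membership. A minimal sketch:

```python
d = {'a': 1, 'b': 2, 'c': 3}

# 'a' and 'g' and 'c' in d parses as: 'a' and 'g' and ('c' in d).
# The non-empty strings 'a' and 'g' are simply truthy; only 'c' is
# ever looked up in the dict.
print('a' and 'g' and 'c' in d)              # -> True (misleading)

# Explicit membership tests for every key:
print('a' in d and 'g' in d and 'c' in d)    # -> False

# The idiomatic spelling for several keys:
print(all(k in d for k in ('a', 'g', 'c')))  # -> False
```

The behavior follows from operator precedence and short-circuit evaluation, not from the dict lookup itself.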
---------- components: macOS messages: 358954 nosy: leonardogalani, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: False positive using operator 'AND' while checking keys on dict() versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 28 20:45:21 2019 From: report at bugs.python.org (Andy Lester) Date: Sun, 29 Dec 2019 01:45:21 +0000 Subject: [New-bugs-announce] [issue39150] See if PyToken_OneChar would be faster as a lookup table Message-ID: <1577583921.53.0.126418549221.issue39150@roundup.psfhosted.org> New submission from Andy Lester : PyToken_OneChar in Parser/token.c is autogenerated. I suspect it may be faster and smaller if it were a lookup into a static table of ops rather than a switch statement. Check to see if it is. ---------- components: Interpreter Core messages: 358975 nosy: petdance priority: normal severity: normal status: open title: See if PyToken_OneChar would be faster as a lookup table type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 28 22:59:28 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 29 Dec 2019 03:59:28 +0000 Subject: [New-bugs-announce] [issue39151] Simplify the deep-first-search of the assembler Message-ID: <1577591968.17.0.626428145875.issue39151@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : I was recently checking the code of the assembler and when looking in detail at the dfs() function I was surprised by this code: static void dfs(struct compiler *c, basicblock *b, struct assembler *a, int end) { .... 
while (j < end) { b = a->a_postorder[j++]; for (i = 0; i < b->b_iused; i++) { struct instr *instr = &b->b_instr[i]; if (instr->i_jrel || instr->i_jabs) dfs(c, instr->i_target, a, j); } assert(a->a_nblocks < j); a->a_postorder[a->a_nblocks++] = b; } } In particular, the recursive call to: dfs(c, instr->i_target, a, j) I cannot imagine a situation in which the previous for loop for (j = end; b && !b->b_seen; b = b->b_next) { b->b_seen = 1; assert(a->a_nblocks < j); a->a_postorder[--j] = b; } has not visited all blocks (so the b_seen is already set) and in which the recursion will do something meaningful. Indeed, as a naive check, simplifying the function to (no recursive call): static void dfs(struct compiler *c, basicblock *b, struct assembler *a, int end) { int i, j; for (j = end; b && !b->b_seen; b = b->b_next) { b->b_seen = 1; assert(a->a_nblocks < j); a->a_postorder[--j] = b; } while (j < end) { b = a->a_postorder[j++]; assert(a->a_nblocks < j); a->a_postorder[a->a_nblocks++] = b; } } passes the full test suite. Am I missing something? Even if I am missing something, I think that situation should be added to the test suite, so it motivated me to open the issue.
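To make the argument concrete, here is a toy Python model of the two dfs() variants (a hypothetical sketch; the class and function names are invented, and this is not the actual CPython C code). As long as every basic block is reachable through the b_next chain, the initial linear pass marks everything as seen, the recursive descent into jump targets finds nothing unvisited, and both variants produce the same post-order:

```python
class Block:
    """A toy basic block: 'nxt' models b_next, 'targets' models jump targets."""
    def __init__(self, name):
        self.name = name
        self.nxt = None
        self.targets = []

def postorder_recursive(entry, n):
    # Models the current dfs(): linear pass, then recursion into jump targets.
    seen, slots, result = set(), [None] * n, []
    def dfs(b, end):
        j = end
        while b is not None and b not in seen:
            seen.add(b)
            j -= 1
            slots[j] = b          # fill slots from the top, like a_postorder
            b = b.nxt
        while j < end:
            b = slots[j]
            j += 1
            for t in b.targets:
                dfs(t, j)         # a no-op when every block is on the chain
            result.append(b)
    dfs(entry, n)
    return [blk.name for blk in result]

def postorder_simplified(entry, n):
    # Models the proposed simplification: no recursive call at all.
    seen, slots, result = set(), [None] * n, []
    b, j = entry, n
    while b is not None and b not in seen:
        seen.add(b)
        j -= 1
        slots[j] = b
        b = b.nxt
    while j < n:
        result.append(slots[j])
        j += 1
    return [blk.name for blk in result]

# A chain A -> B -> C where A also jumps to C: both variants agree.
a, b, c = Block('A'), Block('B'), Block('C')
a.nxt, b.nxt = b, c
a.targets = [c]
print(postorder_recursive(a, 3))   # -> ['C', 'B', 'A']
print(postorder_simplified(a, 3))  # -> ['C', 'B', 'A']
```

A block reachable only through a jump target and never through the next-chain is where the two would diverge; presumably the compiler always links every block into the b_next chain, which would explain why the full test suite passes with the simplified version.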
---------- components: Interpreter Core messages: 358978 nosy: Mark.Shannon, pablogsal, serhiy.storchaka priority: normal severity: normal status: open title: Simplify the deep-first-search of the assembler type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 08:05:33 2019 From: report at bugs.python.org (Giovanni Lombardo) Date: Sun, 29 Dec 2019 13:05:33 +0000 Subject: [New-bugs-announce] [issue39152] Faulty override of tkinter.Misc.configure in tkinter.ttk.Scale.configure Message-ID: <1577624733.91.0.369906281789.issue39152@roundup.psfhosted.org> New submission from Giovanni Lombardo : The issue arises by simply calling configure on the Scale widget of the themed tk (ttk) widgets set: ``` cursor = scale.configure('cursor')[-1] ``` The above causes the following error: ``` File "C:\Users\Giovanni\Tests\test_scale.py", line 604, in main cursor = scale.configure('cursor')[-1] File "C:\Users\Giovanni\AppData\Local\Programs\Python\Python37\lib\tkinter\ttk.py", line 1090, in configure kw.update(cnf) ValueError: dictionary update sequence element #0 has length 1; 2 is required ``` The interpreter is: ``` Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32``` ---------- components: Tkinter messages: 358987 nosy: glombardo priority: normal severity: normal status: open title: Faulty override of tkinter.Misc.configure in tkinter.ttk.Scale.configure type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 08:33:59 2019 From: report at bugs.python.org (Nick Coghlan) Date: Sun, 29 Dec 2019 13:33:59 +0000 Subject: [New-bugs-announce] [issue39153] Clarify refcounting semantics of PyDict_SetItem[String] Message-ID: <1577626439.88.0.846596028568.issue39153@roundup.psfhosted.org> New submission from Nick
Coghlan : The documentation for PyList_SetItem is explicit that it steals a reference to the passed in value, and drops the reference for any existing entry: https://docs.python.org/3.3/c-api/list.html?highlight=m#PyList_SetItem The documentation for PyDict_SetItem leaves the semantics unspecified, forcing the reader to either make assumptions, or else go read the source code (as was done for the SO answer at https://stackoverflow.com/questions/40700251/reference-counting-using-pydict-setitemstring) Since the default assumption is actually correct, I don't think a Sphinx note is warranted, but an extra explicit sentence would be helpful. PySequence_SetItem has such a sentence already: "This function does *not* steal a reference to v." My suggestion is that we also add that sentence to the documentation for: * PyObject_SetItem * PyMapping_SetItemString * PyDict_SetItem * PyDict_SetItemString ---------- messages: 358988 nosy: ncoghlan priority: normal severity: normal status: open title: Clarify refcounting semantics of PyDict_SetItem[String] _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 12:42:10 2019 From: report at bugs.python.org (Peter Ludemann) Date: Sun, 29 Dec 2019 17:42:10 +0000 Subject: [New-bugs-announce] [issue39154] "utf8-sig" missing from codecs (inconsistency) Message-ID: <1577641330.32.0.40121523181.issue39154@roundup.psfhosted.org> New submission from Peter Ludemann : In general, 'utf8' and 'utf-8' are interchangeable in the codecs (and in many parts of the Python library). However, 'utf8-sig' is missing ... and it happens to also be generated by lib2to3.tokenize.detect_encoding. 
>>> import codecs >>> codecs.getincrementaldecoder('utf-8-sig')() >>> codecs.getincrementaldecoder('utf8-sig')() Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.6/codecs.py", line 987, in getincrementaldecoder decoder = lookup(encoding).incrementaldecoder LookupError: unknown encoding: utf8-sig ---------- components: Unicode messages: 358994 nosy: Peter Ludemann, ezio.melotti, vstinner priority: normal severity: normal status: open title: "utf8-sig" missing from codecs (inconsistency) type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 13:14:13 2019 From: report at bugs.python.org (Peter Ludemann) Date: Sun, 29 Dec 2019 18:14:13 +0000 Subject: [New-bugs-announce] [issue39155] "utf8-sig" missing from codecs (inconsistency) Message-ID: <1577643253.55.0.467066593045.issue39155@roundup.psfhosted.org> New submission from Peter Ludemann : In general, 'utf8' and 'utf-8' are interchangeable in the codecs (and in many parts of the Python library). However, 'utf8-sig' is missing ... and it happens to also be generated by lib2to3.tokenize.detect_encoding. 
>>> import codecs >>> codecs.getincrementaldecoder('utf-8-sig')() >>> codecs.getincrementaldecoder('utf8-sig')() Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.6/codecs.py", line 987, in getincrementaldecoder decoder = lookup(encoding).incrementaldecoder LookupError: unknown encoding: utf8-sig ---------- components: Unicode messages: 358996 nosy: Peter Ludemann, ezio.melotti, vstinner priority: normal severity: normal status: open title: "utf8-sig" missing from codecs (inconsistency) type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 14:44:42 2019 From: report at bugs.python.org (Mark Shannon) Date: Sun, 29 Dec 2019 19:44:42 +0000 Subject: [New-bugs-announce] [issue39156] Break up COMPARE_OP into logically distinct operations. Message-ID: <1577648682.52.0.347690468964.issue39156@roundup.psfhosted.org> New submission from Mark Shannon : Currently the COMPARE_OP instruction performs one of four different tasks. We should break it up into four different instructions, that each performs only one of those tasks. The four tasks are: Rich comparison (>, <, ==, !=, >=, <=) Identity comparison (is, is not) Contains test (in, not in) Exception matching The current implementation involves an unnecessary extra dispatch to determine which task to perform. Comparisons are common operations, so this extra call and unpredictable branch has a cost. In addition, testing for exception matching is always followed by a branch, so the test and branch can be combined. I propose adding three new instructions and changing the meaning of `COMPARE_OP`. COMPARE_OP should only perform rich comparisons, and should call `PyObject_RichCompare` directly. IS_OP performs identity tests, performs no calls and cannot fail. CONTAINS_OP tests for 'in and 'not in' and should call `PySequence_Contains` directly. 
JUMP_IF_NOT_EXC_MATCH Tests whether the exception matches and jumps if it does not. ---------- components: Interpreter Core messages: 359002 nosy: Mark.Shannon priority: normal severity: normal status: open title: Break up COMPARE_OP into logically distinct operations. type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 15:51:55 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 29 Dec 2019 20:51:55 +0000 Subject: [New-bugs-announce] [issue39157] test_pidfd_send_signal can fail on some systems with PermissionError: [Errno 1] Operation not permitted Message-ID: <1577652715.56.0.81838626061.issue39157@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : ====================================================================== FAIL: test_pidfd_send_signal (test.test_signal.PidfdSignalTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/buildbot/buildarea/3.x.pablogsal-arch-x86_64/build/Lib/test/test_signal.py", line 1287, in test_pidfd_send_signal self.assertEqual(cm.exception.errno, errno.EBADF) AssertionError: 1 != 9 --------------------------------------------------------------------- Example failure: https://buildbot.python.org/all/#/builders/231/builds/1/steps/5/logs/stdio We should skip the test if the syscall is not permitted ---------- messages: 359004 nosy: pablogsal priority: normal severity: normal status: open title: test_pidfd_send_signal can fail on some systems with PermissionError: [Errno 1] Operation not permitted _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 16:42:10 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 29 Dec 2019 21:42:10 +0000 Subject: [New-bugs-announce] [issue39158] ast.literal_eval() doesn't support empty 
sets Message-ID: <1577655730.41.0.646692068238.issue39158@roundup.psfhosted.org> New submission from Raymond Hettinger : We already support sets but not empty sets. After the PR, this now works: >>> from ast import literal_eval >>> literal_eval('set()') set() If we wanted, it would be a simple matter to extend it to frozensets: >>> literal_eval('frozenset({10, 20, 30})') frozenset({10, 20, 30}) ---------- components: Library (Lib) messages: 359007 nosy: rhettinger priority: normal severity: normal status: open title: ast.literal_eval() doesn't support empty sets versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 17:22:49 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 29 Dec 2019 22:22:49 +0000 Subject: [New-bugs-announce] [issue39159] Ideas for making ast.literal_eval() usable Message-ID: <1577658169.97.0.393062022698.issue39159@roundup.psfhosted.org> New submission from Raymond Hettinger : A primary goal for ast.literal_eval() is to "Safely evaluate an expression node or a string". In the context of a real application, we need to do several things to make it possible to fulfill its design goal: 1) We should document possible exceptions that need to be caught. So far, I've found TypeError, MemoryError, SyntaxError, ValueError. 2) Define a size limit guaranteed not to give a MemoryError. The smallest unsafe size I've found so far is 301 characters: s = '(' * 100 + '0' + ',)' * 100 literal_eval(s) # Raises a MemoryError 3) Consider writing a standalone expression compiler that doesn't have the same small limits as our usual compile() function. This would make literal_eval() usable for evaluating tainted inputs with bigger datasets. (Imagine if the json module could only be safely used with inputs under 301 characters).
4) Perhaps document an example of how we suggest that someone process tainted input: expr = input('Enter a dataset in Python format: ') if len(expr) > 300: error(f'Maximum supported size is 300, not {len(expr)}') try: data = literal_eval(expr) except (TypeError, MemoryError, SyntaxError, ValueError): error('Input cannot be evaluated') ---------- messages: 359011 nosy: rhettinger priority: normal severity: normal status: open title: Ideas for making ast.literal_eval() usable _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 29 22:45:04 2019 From: report at bugs.python.org (anthony shaw) Date: Mon, 30 Dec 2019 03:45:04 +0000 Subject: [New-bugs-announce] [issue39160] ./configure --help has inconsistencies in style Message-ID: <1577677504.15.0.780834896909.issue39160@roundup.psfhosted.org> New submission from anthony shaw : I've noticed that ./configure --help is inconsistent. - The way default values are shared - The way enumerated - The verbs used (e.g. enable, set) - Some --with-xyz and some --with(out)-xyz - Some start with capitals, others don't Also, many of the flags could use additional explanation as to their purpose, or reference the rST file in the doc that explains what they do.
PR to follow ---------- components: Build messages: 359014 nosy: anthony shaw priority: normal severity: normal status: open title: ./configure --help has inconsistencies in style _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 00:18:47 2019 From: report at bugs.python.org (Nick Coghlan) Date: Mon, 30 Dec 2019 05:18:47 +0000 Subject: [New-bugs-announce] [issue39161] Py_NewInterpreter docs need updating for multi-phase initialization Message-ID: <1577683127.59.0.69655210791.issue39161@roundup.psfhosted.org> New submission from Nick Coghlan : The Py_NewInterpreter docs only cover the behaviour of extension modules that use single-phase initialization: https://docs.python.org/3/c-api/init.html#c.Py_NewInterpreter Multi-phase initialization allows each subinterpreter to get its own copy of extension modules as well, with only C/C++ level static and global variables being shared. ---------- assignee: docs at python components: Documentation messages: 359019 nosy: docs at python, ncoghlan, petr.viktorin priority: normal severity: normal stage: needs patch status: open title: Py_NewInterpreter docs need updating for multi-phase initialization type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 03:35:06 2019 From: report at bugs.python.org (anthony shaw) Date: Mon, 30 Dec 2019 08:35:06 +0000 Subject: [New-bugs-announce] [issue39162] setup.py not picking up tkinter headers Message-ID: <1577694906.58.0.783001060452.issue39162@roundup.psfhosted.org> New submission from anthony shaw : ./configure && make -j4 is returning: Failed to build these modules: _tkinter I'm running macOS 10.15.2, with the SDK installed using `xcode-select --install` (no funny business) /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/Tk.framework/ xcrun 
--show-sdk-path /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk If I debug setup.py, I can see it's eagerly returning the wrong path (/System) On a REPL, if I do: >>> import setup >>> setup.macosx_sdk_root() '/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk' So the path is correct, but I can see in setup.py that detect_tkinter_darwin() will break inside the loop if /System/Library/Frameworks/Tk.framework/Versions/Current exists (which it does), but it doesn't contain the headers, it contains 3 directories: Resources Tk _CodeSignature I think detect_tkinter_darwin should be updated so that framework_dirs scans macosx_sdk_root first, but I don't know what other scenarios this might break. I'd be happy to submit a patch for my scenario but it looks like this whole function needs tests for each platform change and scenario as they come up. There are loads of other related issues on BPO. ---------- components: Build messages: 359026 nosy: anthonypjshaw priority: normal severity: normal status: open title: setup.py not picking up tkinter headers type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 04:04:57 2019 From: report at bugs.python.org (georgepaul bj) Date: Mon, 30 Dec 2019 09:04:57 +0000 Subject: [New-bugs-announce] [issue39163] Perfect Exchange Migration tool Message-ID: <1577696697.84.0.298684644872.issue39163@roundup.psfhosted.org> New submission from georgepaul bj : EdbMails Exchange Migration tool is the perfect tool which migrates your entire mailbox items like mails, calendars, contacts etc. without any data loss, and it supports Exchange versions 2003, 2007, 2010, 2013, 2016 and 2019. The migration will be incremental, i.e. no duplicate items during consecutive migration on the same system. Exchange Migration The tool provides riskless migration which helps to migrate direct Exchange to Exchange server and Exchange to Office 365 Migration.
It also handles public folders and Archive mailboxes migration. The software supports cutover, staged and hybrid migration. It easily performs cross forest and cross domain migration. To know more: https://www.edbmails.com/pages/exchange-server-migration-tool.html ---------- files: exchange-migration.jpg messages: 359028 nosy: georgepaul123 priority: normal severity: normal status: open title: Perfect Exchange Migration tool type: performance Added file: https://bugs.python.org/file48809/exchange-migration.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 05:36:38 2019 From: report at bugs.python.org (Julien Danjou) Date: Mon, 30 Dec 2019 10:36:38 +0000 Subject: [New-bugs-announce] [issue39164] PyErr_GetExcInfo does not allow to retrieve for an arbitrary thread Message-ID: <1577702198.9.0.00483910121214.issue39164@roundup.psfhosted.org> New submission from Julien Danjou : PyErr_GetExcInfo does not allow to retrieve exception information for an arbitrary thread. As it calls `_PyThreadState_GET` itself, it's impossible to get exception information for a different thread. ---------- components: C API messages: 359029 nosy: jd priority: normal severity: normal status: open title: PyErr_GetExcInfo does not allow to retrieve for an arbitrary thread type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 08:19:05 2019 From: report at bugs.python.org (=?utf-8?q?Juancarlo_A=C3=B1ez?=) Date: Mon, 30 Dec 2019 13:19:05 +0000 Subject: [New-bugs-announce] [issue39165] Completeness and symmetry in RE, avoid `findall(...)[0]` Message-ID: <1577711945.6.0.0241743491206.issue39165@roundup.psfhosted.org> New submission from Juancarlo Añez : The problematic `findall(...)[0]` is a common anti-pattern in Python programs.
The reason is lack of symmetry and completeness in the `re` module. The original proposal in `python-ideas` was to add `re.findfirst(pattern, string, flags=0, default=_mark)` with more or less the semantics of `next(findall(pattern, string, flags=flags), default=default)`. The referenced PR adds `findalliter(pattern, string, flags=0)` with the value semantics of `findall()` over a generator, implements `findall()` as `return list(findalliter(...))`, and implements `findfirst()`. Consistency and correctness are likely because all tests pass with the redefined `findall()`. ---------- components: Library (Lib) messages: 359039 nosy: apalala priority: normal pull_requests: 17191 severity: normal status: open title: Completeness and symmetry in RE, avoid `findall(...)[0]` type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 08:22:39 2019 From: report at bugs.python.org (Ned Batchelder) Date: Mon, 30 Dec 2019 13:22:39 +0000 Subject: [New-bugs-announce] [issue39166] Python 3.9.0a2 changed how "async for" traces its final iteration Message-ID: <1577712159.27.0.774322517863.issue39166@roundup.psfhosted.org> New submission from Ned Batchelder : 3.9.0a2 changed how the final iteration of "async for" is traced. The body of the loop is traced when the body is not executed. Standard "for" loops don't show the same effect. In the output below, notice that 3.9.0a2 and 3.9.0a2+ both show one last execution of line 32, but that line is not actually executed (there is no output). The standard for loop doesn't show line 27 doing that in any version. --- 8< ---------------------------------------------------- import linecache, sys def trace(frame, event, arg): # The weird globals here is to avoid a NameError on shutdown... 
if frame.f_code.co_filename == globals().get("__file__"): lineno = frame.f_lineno print("{} {}: {}".format(event[:4], lineno, linecache.getline(__file__, lineno).rstrip())) return trace import asyncio class AsyncIteratorWrapper: def __init__(self, obj): self._it = iter(obj) def __aiter__(self): return self async def __anext__(self): try: return next(self._it) except StopIteration: raise StopAsyncIteration def doit_sync(): for letter in "ab": print(letter) print(".") async def doit_async(): async for letter in AsyncIteratorWrapper("ab"): print(letter) print(".") print(sys.version) sys.settrace(trace) doit_sync() loop = asyncio.new_event_loop() loop.run_until_complete(doit_async()) loop.close() --- 8< ---------------------------------------------------- $ /usr/local/pythonz/pythons/CPython-3.9.0a1/bin/python3.9 /tmp/bpo2.py 3.9.0a1 (default, Nov 20 2019, 18:52:14) [Clang 10.0.0 (clang-1000.10.44.4)] call 25: def doit_sync(): line 26: for letter in "ab": line 27: print(letter) a line 26: for letter in "ab": line 27: print(letter) b line 26: for letter in "ab": line 28: print(".") . 
retu 28: print(".") call 30: async def doit_async(): line 31: async for letter in AsyncIteratorWrapper("ab"): call 13: def __init__(self, obj): line 14: self._it = iter(obj) retu 14: self._it = iter(obj) call 16: def __aiter__(self): line 17: return self retu 17: return self call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) retu 21: return next(self._it) exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) a line 31: async for letter in AsyncIteratorWrapper("ab"): call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) retu 21: return next(self._it) exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) b line 31: async for letter in AsyncIteratorWrapper("ab"): call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) exce 21: return next(self._it) line 22: except StopIteration: line 23: raise StopAsyncIteration exce 23: raise StopAsyncIteration retu 23: raise StopAsyncIteration exce 31: async for letter in AsyncIteratorWrapper("ab"): line 33: print(".") . retu 33: print(".") $ /usr/local/pythonz/pythons/CPython-3.9.0a2/bin/python3.9 /tmp/bpo2.py 3.9.0a2 (default, Dec 19 2019, 08:42:29) [Clang 10.0.0 (clang-1000.10.44.4)] call 25: def doit_sync(): line 26: for letter in "ab": line 27: print(letter) a line 26: for letter in "ab": line 27: print(letter) b line 26: for letter in "ab": line 28: print(".") . 
retu 28: print(".") call 30: async def doit_async(): line 31: async for letter in AsyncIteratorWrapper("ab"): call 13: def __init__(self, obj): line 14: self._it = iter(obj) retu 14: self._it = iter(obj) call 16: def __aiter__(self): line 17: return self retu 17: return self call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) retu 21: return next(self._it) exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) a line 31: async for letter in AsyncIteratorWrapper("ab"): call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) retu 21: return next(self._it) exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) b line 31: async for letter in AsyncIteratorWrapper("ab"): call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) exce 21: return next(self._it) line 22: except StopIteration: line 23: raise StopAsyncIteration exce 23: raise StopAsyncIteration retu 23: raise StopAsyncIteration exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) line 33: print(".") . retu 33: print(".") $ /usr/local/cpython/bin/python3.9 /tmp/bpo2.py 3.9.0a2+ (heads/master:89aa7f0ede, Dec 30 2019, 07:52:33) [Clang 10.0.0 (clang-1000.10.44.4)] call 25: def doit_sync(): line 26: for letter in "ab": line 27: print(letter) a line 26: for letter in "ab": line 27: print(letter) b line 26: for letter in "ab": line 28: print(".") . 
retu 28: print(".") call 30: async def doit_async(): line 31: async for letter in AsyncIteratorWrapper("ab"): call 13: def __init__(self, obj): line 14: self._it = iter(obj) retu 14: self._it = iter(obj) call 16: def __aiter__(self): line 17: return self retu 17: return self call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) retu 21: return next(self._it) exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) a line 31: async for letter in AsyncIteratorWrapper("ab"): call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) retu 21: return next(self._it) exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) b line 31: async for letter in AsyncIteratorWrapper("ab"): call 19: async def __anext__(self): line 20: try: line 21: return next(self._it) exce 21: return next(self._it) line 22: except StopIteration: line 23: raise StopAsyncIteration exce 23: raise StopAsyncIteration retu 23: raise StopAsyncIteration exce 31: async for letter in AsyncIteratorWrapper("ab"): line 32: print(letter) line 33: print(".") . retu 33: print(".") $ ---------- keywords: 3.9regression messages: 359040 nosy: Mark.Shannon, nedbat priority: normal severity: normal status: open title: Python 3.9.0a2 changed how "async for" traces its final iteration _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 11:18:30 2019 From: report at bugs.python.org (Trenton Bricken) Date: Mon, 30 Dec 2019 16:18:30 +0000 Subject: [New-bugs-announce] [issue39167] argparse boolean type bug Message-ID: <1577722710.93.0.00677061008996.issue39167@roundup.psfhosted.org> New submission from Trenton Bricken : This is a bug with argparse. 
Say I have: parser.add_argument('--verbose', type=bool, action='store', nargs='+', default = [False], help='turns on verbosity') If in the command line I have "--verbose False" the default will then evaluate to True and return the value "True" rather than the value of "False" that I tried to set it to! When a developer has lots of arguments to pass in, they may not remember if an argument defaults to False or True. Setting the value to False and having it then return True is very confusing and should not occur. Right now I have a work-around where I have a new type=buildBool where: def buildBool(arg): return bool(arg) and do: parser.add_argument('--verbose', type=buildBool, action='store', nargs='+', default = ['False'], help='turns on verbosity') but this means I have to have this type and have my default value be a string which is suboptimal and this bug remains a trap for other developers. ---------- messages: 359045 nosy: Trenton Bricken priority: normal severity: normal status: open title: argparse boolean type bug type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 12:17:10 2019 From: report at bugs.python.org (Ruslan Dautkhanov) Date: Mon, 30 Dec 2019 17:17:10 +0000 Subject: [New-bugs-announce] [issue39168] Generic type subscription is a huge toll on Python performance Message-ID: <1577726230.5.0.713641828615.issue39168@roundup.psfhosted.org> New submission from Ruslan Dautkhanov : Reported originally here - https://twitter.com/__zero323__/status/1210911632953692162 See details here https://asciinema.org/a/290643 In [4]: class Foo: pass In [5]: %timeit -n1_000_000 Foo() 88.5 ns ± 3.44 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [6]: T = TypeVar("T") In [7]: class Bar(Generic[T]): pass In [8]: %timeit -n1_000_000 Bar() 883 ns ± 3.46 ns per loop (mean ± std. dev.
of 7 runs, 1000000 loops each)

Same effect in Python 3.6 and 3.8. ---------- messages: 359049 nosy: Ruslan Dautkhanov priority: normal severity: normal status: open title: Generic type subscription is a huge toll on Python performance type: performance versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 30 19:06:25 2019 From: report at bugs.python.org (Ronald Li) Date: Tue, 31 Dec 2019 00:06:25 +0000 Subject: [New-bugs-announce] [issue39169] TypeError: 'int' object is not callable if the signal handler is SIG_IGN Message-ID: <1577750785.74.0.863762449142.issue39169@roundup.psfhosted.org> New submission from Ronald Li : The attached script ign2ndsig.py demonstrates an attempted way to handle signals at most once and ignore any subsequent signals. SIGINT and SIGTERM are used in the demo. In Python 3.5, the subprocess would go into one of the "except KeyboardInterrupt:" or "except SystemExit:" blocks before doing "finally:" and exiting zero:

# execute the script with no args:
# ./ign2ndsig.py
subproc: sys.version_info(major=3, minor=5, micro=9, releaselevel='final', serial=0)
raising KeyboardInterrupt
handling KeyboardInterrupt
doing finally
rc: 0

In Python 3.6, 3.7 or 3.8, the subprocess would go into neither the "except KeyboardInterrupt:" nor the "except SystemExit:" block before doing "finally:" and exiting non-zero, with a traceback like this:

subproc: sys.version_info(major=3, minor=6, micro=10, releaselevel='final', serial=0)
raising KeyboardInterrupt
doing finally
Traceback (most recent call last):
  File "./ign2ndsig.py", line 30, in subproc
    input()
  File "./ign2ndsig.py", line 18, in handler
    raise KeyboardInterrupt
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./ign2ndsig.py", line 58, in <module>
    subproc()
  File "./ign2ndsig.py", line 32, in subproc
    printef('handling KeyboardInterrupt')
  File "./ign2ndsig.py", line 10, in printef
    print(*objects, file=sys.stderr, flush=True)
TypeError: 'int' object is not callable
rc: 1

More on the behaviors of ign2ndsig.py:

1. Replacing SIG_IGN at lines 14 and 15 with a no-op Python function "fixes" the problem, such that in Python 3.6, 3.7 or 3.8 it behaves similarly to the Python 3.5 output above.
2. Sending a SIGTERM then a SIGINT (i.e. reversing the order of SIGINT and SIGTERM) at lines 49-50 results in a similar behavior (the TypeError is raised).
3. Sending 2 SIGINTs or 2 SIGTERMs (i.e. sending the same signal twice) at lines 49-50 does NOT result in the TypeError.

Potentially related issues:

_thread.interrupt_main() errors if SIGINT handler in SIG_DFL, SIG_IGN https://bugs.python.org/issue23395
Turn SIG_DFL and SIG_IGN into functions https://bugs.python.org/issue23325

---------- files: ign2ndsig.py messages: 359071 nosy: Ronald Li priority: normal severity: normal status: open title: TypeError: 'int' object is not callable if the signal handler is SIG_IGN type: behavior versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48810/ign2ndsig.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 31 01:57:54 2019 From: report at bugs.python.org (Clinton James) Date: Tue, 31 Dec 2019 06:57:54 +0000 Subject: [New-bugs-announce] [issue39170] Sqlite3 row_factory for attribute access: NamedRow Message-ID: <1577775474.34.0.0934062423159.issue39170@roundup.psfhosted.org> New submission from Clinton James : Currently, sqlite3 returns rows by tuple or sqlite3.Row for dict-style, index access. I constantly find myself wanting attribute access like namedtuple for rows. I find attribute access cleaner without the brackets and quoting field names.
However, unlike previous discussions (https://bugs.python.org/issue13299), I don't want to use the namedtuple object. I appreciate the simple API and minimal memory consumption of sqlite3.Row and used it as my guide in creating sqlite3.NamedRow to allow access by index and attribute. A pull request is ready.

Why a new object instead of adding attribute access to the existing sqlite3.Row? There is an existing member method `keys`, and any table with the field "keys" would cause a hard-to-debug, easily avoidable collision.

Features:

+ Optimized in C, so it will be faster than any Python implementation.
+ Access columns by attribute for all valid names and by index for all names.
+ Iterate over fields by name/value pairs.
+ Works with the standard functions `len` and `contains`.
+ Identical memory consumption to sqlite3.Row with two references: the data tuple and the cursor description.
+ Identical speed to sqlite3.Row, if not faster. Timing usually has it slightly faster for index by name or attribute, but it is almost identical.

Examples:

>>> import sqlite3
>>> c = sqlite3.Connection(":memory:").cursor()
>>> c.row_factory = sqlite3.NamedRow
>>> named_row = c.execute("SELECT 'A' AS letter, '.-' AS morse, 65 AS ord").fetchone()
>>> len(named_row)
3
>>> 'letter' in named_row
True
>>> named_row == named_row
True
>>> hash(named_row)
5512444875192833987

Index by number and range.

>>> named_row[0]
'A'
>>> named_row[1:]
('.-', 65)

Index by column name.

>>> named_row["ord"]
65

Access by attribute.

>>> named_row.morse
'.-'

Iterate row for name/value pairs.

>>> dict(named_row)
{'letter': 'A', 'morse': '.-', 'ord': 65}
>>> tuple(named_row)
(('letter', 'A'), ('morse', '.-'), ('ord', 65))

How sqlite3.NamedRow differs from sqlite3.Row
----------------------------------------------
The class only has class dunder methods to allow any valid field name.
When the field name would be an invalid attribute name, you have two options: either use SQL `AS` in the SELECT statement or index by name. To get the field names, use the iterator `[x[0] for x in row]` or do the same from the `cursor.description`.

```python
titles = [x[0] for x in row]
titles = [x[0] for x in cursor.description]
titles = dict(row).keys()
```

Attribute and dict access are no longer case-insensitive. There are four reasons for this.

1. Case-insensitive comparison only works well for ASCII characters. In a Unicode world, case-insensitive edge cases create unnecessary errors. Looking at several existing codebases, this feature of Row is almost never used, and I believe it is not needed in NamedRow.
2. Case-insensitivity is not allowed for attribute access. This "feature" would treat attribute access differently from the rest of Python, and "special cases aren't special enough to break the rules". Where `row.name`, `row.Name`, and `row.NAME` are all the same, it gives off the faint code smell of something wrong. When case-insensitivity is needed and the query SELECT cannot be modified, sqlite3.Row is still there.
3. Code is simpler and easier to maintain.
4. It is faster.

Timing Results
--------------
NamedRow is faster than sqlite3.Row for index-by-name access. I have published a graph and the methodology of my testing. In the worst-case scenario, it is just as fast as sqlite3.Row without any extra memory. In most cases, it is faster.
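For readers who want to try the proposed access patterns before a C implementation lands, they can be approximated in pure Python with a custom row_factory. This is only an illustrative sketch (the class name NamedRowSketch and its internals are mine, not part of the proposal), and it deliberately ignores the proposal's memory and speed goals:

```python
import sqlite3

class NamedRowSketch:
    """Rough pure-Python stand-in for the proposed sqlite3.NamedRow."""

    def __init__(self, cursor, row):
        # sqlite3 calls a row_factory with (cursor, row)
        self._fields = tuple(d[0] for d in cursor.description)
        self._values = tuple(row)

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails
        if name.startswith('_'):
            raise AttributeError(name)
        try:
            return self._values[self._fields.index(name)]
        except ValueError:
            raise AttributeError(name) from None

    def __getitem__(self, key):
        if isinstance(key, str):
            return self._values[self._fields.index(key)]
        return self._values[key]          # int index or slice

    def __len__(self):
        return len(self._values)

    def __contains__(self, name):
        return name in self._fields

    def __iter__(self):
        # Yield name/value pairs so dict(row) works
        return iter(zip(self._fields, self._values))


cur = sqlite3.connect(":memory:").cursor()
cur.row_factory = NamedRowSketch
row = cur.execute("SELECT 'A' AS letter, '.-' AS morse, 65 AS ord").fetchone()
print(row.morse, row["ord"], row[0], dict(row))
# .- 65 A {'letter': 'A', 'morse': '.-', 'ord': 65}
```

Unlike the real proposal, this sketch stores two tuples per row rather than borrowing the cursor description, so it says nothing about the memory or timing claims above.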
For more information, see the post at http://jidn.com/2019/10/namedrow-better-python-sqlite3-row-factory/ ---------- components: Library (Lib) messages: 359104 nosy: jidn priority: normal severity: normal status: open title: Sqlite3 row_factory for attribute access: NamedRow type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 31 03:15:47 2019 From: report at bugs.python.org (Dominic Mayers) Date: Tue, 31 Dec 2019 08:15:47 +0000 Subject: [New-bugs-announce] [issue39171] Missing default root in tkinter simpledialog.py Message-ID: <1577780147.06.0.429247944585.issue39171@roundup.psfhosted.org> New submission from Dominic Mayers : My first "bug" report here. Not sure I am doing it right. It is just that if I execute the code

import tkinter
from tkinter import simpledialog
tkinter.Tk().withdraw()
integer_value = simpledialog.askinteger('Dialog Title', 'What is your age?', minvalue=0, maxvalue=100)

it works. In particular, when the line `parent = tkinter._default_root` is executed in simpledialog.py, `_default_root` is defined. However, if I execute the code

import tkinter
from tkinter import simpledialog
integer_value = simpledialog.askinteger('Dialog Title', 'What is your age?', minvalue=0, maxvalue=100)

which does not have the line `tkinter.Tk().withdraw()`, it does not work. When the line `parent = tkinter._default_root` is executed, `_default_root` is not defined. I don't know if it is a bug. I don't understand the remainder of the code enough to say. However, the purpose of this line is to define a parent when none is provided. It seems to me that it should be possible to find a parent window...
---------- components: Tkinter messages: 359106 nosy: dominic108 priority: normal severity: normal status: open title: Missing default root in tkinter simpledialog.py type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 31 08:32:58 2019 From: report at bugs.python.org (=?utf-8?b?16DXmdeh158g15HXm9eo?=) Date: Tue, 31 Dec 2019 13:32:58 +0000 Subject: [New-bugs-announce] [issue39172] Translation of "string.find('asd', 'sd')" doesn't work Message-ID: <1577799178.37.0.470973849561.issue39172@roundup.psfhosted.org> New submission from ניסן בכר : When running the following code under Python 2.x, it works:

import string
string.find("asd", "sd")

But this function is gone in Python 3.x, and when using the "2to3" tool to convert, it doesn't convert it successfully. ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 359115 nosy: ניסן בכר priority: normal severity: normal status: open title: Translation of "string.find('asd', 'sd')" doesn't work versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 31 10:23:53 2019 From: report at bugs.python.org (hai shi) Date: Tue, 31 Dec 2019 15:23:53 +0000 Subject: [New-bugs-announce] [issue39173] _AttributeHolder of argparse should support the sort function or not? Message-ID: <1577805833.81.0.166775558073.issue39173@roundup.psfhosted.org> New submission from hai shi : Currently, many developers are discussing whether the output of argparse attributes should be sorted or not:
>>> from argparse import ArgumentParser
>>> parser = ArgumentParser()
>>> _ = parser.add_argument('outstream')
>>> _ = parser.add_argument('instream')
>>> args = parser.parse_args(['out.txt', 'in.txt'])

# Keeps the original order
>>> vars(args)
{'outstream': 'out.txt', 'instream': 'in.txt'}

# Order is sorted
>>> args
Namespace(instream='in.txt', outstream='out.txt')

IMHO, the attribute order should keep the original insertion order by default. If users would like the attributes sorted, we could add a parameter to `_AttributeHolder` to turn sorting on or off, such as:

```
class _AttributeHolder(object):
    def __init__(self, sort=False):
        self.sort = sort

    def _get_kwargs(self):
        if self.sort:
            return sorted(self.__dict__.items())
        return list(self.__dict__.items())
```

Some other BPO issues have discussed this topic too: issue39075, issue39058. ---------- components: Library (Lib) messages: 359118 nosy: shihai1991 priority: normal severity: normal status: open title: _AttributeHolder of argparse should support the sort function or not? type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 31 13:22:31 2019 From: report at bugs.python.org (Lee Collins) Date: Tue, 31 Dec 2019 18:22:31 +0000 Subject: [New-bugs-announce] [issue39174] unicodedata.normalize failing with NFD and NFKD for some characters in Python3 Message-ID: <1577816551.99.0.153382211706.issue39174@roundup.psfhosted.org> New submission from Lee Collins : A script that works in 2.7.17 is now failing for some Unicode characters in 3.7.5 on MacOS 10.14.6. For example, unicodedata.normalize('NFD', 'à') used to return the correct decomposition u'a\u0300', but in 3.7 it returns the single composed character U+00E0. This doesn't happen for all composed forms, just some. Other examples: ?, ?
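For reference, the decomposition the reporter expects is easy to check against a known-good build; on an unaffected Python, NFD splits U+00E0 into a base letter plus a combining mark, and NFC recomposes it:

```python
import unicodedata

# NFD should decompose precomposed U+00E0 ('à') into 'a' + U+0300 (combining grave accent)
decomposed = unicodedata.normalize('NFD', '\u00e0')
print([hex(ord(ch)) for ch in decomposed])  # ['0x61', '0x300']

# NFC recomposes the pair back into the single code point
assert unicodedata.normalize('NFC', decomposed) == '\u00e0'
```

If this check passes but the reporter's script still misbehaves, the composed characters may be introduced elsewhere (for example, by the terminal or filesystem) rather than by unicodedata itself.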
---------- components: Unicode messages: 359120 nosy: Lee Collins, ezio.melotti, vstinner priority: normal severity: normal status: open title: unicodedata.normalize failing with NFD and NFKD for some characters in Python3 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 31 20:11:33 2019 From: report at bugs.python.org (Justin Hodder) Date: Wed, 01 Jan 2020 01:11:33 +0000 Subject: [New-bugs-announce] [issue39175] Funkness with issubset Message-ID: <1577841093.35.0.83643957986.issue39175@roundup.psfhosted.org> New submission from Justin Hodder : Line 59, "print(x2, "avalMana", set(avalMana.keys()))", prints:

{('A', 'B')} avalMana {'A', ('A', 'B'), ('A', 'C')}

Line 60, "if x2.issubset(set(avalMana.keys())):", is False. Change line 60 to "if x2.issubset(set(list(avalMana.keys()))):" and it works as expected. ---------- components: Interpreter Core files: PythonDoesntWorks.py messages: 359137 nosy: Justin Hodder priority: normal severity: normal status: open title: Funkness with issubset type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48814/PythonDoesntWorks.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 31 20:44:07 2019 From: report at bugs.python.org (Ned Batchelder) Date: Wed, 01 Jan 2020 01:44:07 +0000 Subject: [New-bugs-announce] [issue39176] Syntax error message uses strange term: "named assignment" Message-ID: <1577843047.69.0.249542272206.issue39176@roundup.psfhosted.org> New submission from Ned Batchelder : I know this is not allowed:

>>> ((a, b, c) := (1, 2, 3))
  File "<stdin>", line 1
SyntaxError: cannot use named assignment with tuple

But what is "named assignment", and why is this SyntaxError talking about it? Shouldn't it say "cannot use assignment expressions with tuple"?
---------- messages: 359138 nosy: nedbat priority: normal severity: normal status: open title: Syntax error message uses strange term: "named assignment" _______________________________________ Python tracker _______________________________________
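The restriction Ned is asking about can be demonstrated without depending on the exact wording, which may vary between releases; a single-name target is accepted, while a tuple target is rejected at compile time (this sketch assumes Python 3.8+ for the `:=` operator):

```python
# A named expression with a single-name target is valid...
if (n := 3) > 2:
    print(n)  # 3

# ...but a tuple target is a compile-time SyntaxError,
# so it can be caught by compiling the source explicitly.
try:
    compile("((a, b, c) := (1, 2, 3))", "<demo>", "eval")
except SyntaxError as exc:
    print("rejected:", exc.msg)
```

Using compile() keeps the invalid syntax out of the script's own source, so the demonstration itself still parses.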