From report at bugs.python.org Sun Mar 1 04:26:22 2020 From: report at bugs.python.org (Manjusaka) Date: Sun, 01 Mar 2020 09:26:22 +0000 Subject: [New-bugs-announce] [issue39806] different behavior between __ior__ and __or__ in dict made by PEP 584 Message-ID: <1583054782.31.0.481926462368.issue39806@roundup.psfhosted.org> New submission from Manjusaka : Hello: I have tried Python 3.9.0a4 and found an issue: __ior__ and __or__ behave differently. For example, with x={} and y=[(1,2)], x|=y works but x=x|y raises an exception. I think it would be better to make the two magic methods behave the same. ---------- components: C API messages: 363045 nosy: Manjusaka priority: normal severity: normal status: open title: different behavior between __ior__ and __or__ in dict made by PEP 584 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 04:55:12 2020 From: report at bugs.python.org (Young Wong) Date: Sun, 01 Mar 2020 09:55:12 +0000 Subject: [New-bugs-announce] [issue39807] Python38 installed in wrong directory on Windows Message-ID: <1583056512.25.0.736477691281.issue39807@roundup.psfhosted.org> New submission from Young Wong : I'm on Windows 10 and downloaded the Python 3.8 installation package. I explicitly selected `C:\Program Files\Python38` as my installation path in the installer menu, but it ends up installed in `C:\Program Files (x86)\Python38`. My `C:\Program Files` has Python37. ---------- components: Windows messages: 363051 nosy: Young Wong, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python38 installed in wrong directory on Windows versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 05:27:01 2020 From: report at bugs.python.org (swgmma) Date: Sun, 01 Mar 2020 10:27:01 +0000 Subject: [New-bugs-announce] [issue39808] pathlib: reword docs for stat() Message-ID: <1583058421.39.0.192820451754.issue39808@roundup.psfhosted.org> New submission from swgmma : The docs for stat() (https://docs.python.org/3.9/library/pathlib.html#pathlib.Path.stat) state: > Return information about this path (similarly to os.stat()). The result is looked up at each call to this method. Nitpicks: 1) It states "similarly to os.stat()", which implies there may be differences between the two, when in fact they both return the same `os.stat_result` object. 2) It should mention that `stat()` returns an `os.stat_result` object without having to go digging into the docs for `os` to find out. ---------- assignee: docs at python components: Documentation messages: 363052 nosy: docs at python, swgmma priority: normal severity: normal status: open title: pathlib: reword docs for stat() versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 05:54:19 2020 From: report at bugs.python.org (Luca) Date: Sun, 01 Mar 2020 10:54:19 +0000 Subject: [New-bugs-announce] [issue39809] argparse: add max_text_width parameter to ArgumentParser Message-ID: <1583060059.73.0.28112169594.issue39809@roundup.psfhosted.org> New submission from Luca : It is often desirable to limit the help text width, for instance to 78 or 88 columns, regardless of the actual size of the terminal window.
Currently you can achieve this in rather cumbersome ways, for instance by setting "os.environ['COLUMNS'] = '80'" (but this requires the "os" module, which may not be needed otherwise by your module, and may lead to other undesired effects), or by writing a custom formatting class. IMHO there should be a simpler option for such a basic task. I propose to add a max_text_width parameter to ArgumentParser. This would require only minor code changes to argparse (see attached patch), should I open a pull request on GitHub? ---------- components: Library (Lib) files: argparse_max_text_width.patch keywords: patch messages: 363053 nosy: lucatrv priority: normal severity: normal status: open title: argparse: add max_text_width parameter to ArgumentParser type: enhancement versions: Python 3.9 Added file: https://bugs.python.org/file48939/argparse_max_text_width.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 06:59:05 2020 From: report at bugs.python.org (Alex Hall) Date: Sun, 01 Mar 2020 11:59:05 +0000 Subject: [New-bugs-announce] [issue39810] Generic script for finding bugs in get_source_segment Message-ID: <1583063945.36.0.652749436793.issue39810@roundup.psfhosted.org> New submission from Alex Hall : Attached is a script which: - Gets all the source code it can find from sys.modules - Looks at every node in the parsed source - Gets source text for that node using ast.get_source_segment - Parses the source text again - Compares the original node with the newly parsed node - Points out if the nodes don't match I ran this on Python 3.8.0, and it found several issues which have now been solved. So if there was a test like this then many bugs would have been caught earlier. I haven't tried it on a build of master, so I'm actually not sure which bugs have been fixed and what new bugs have been introduced. The script partly relies on [asttokens](https://github.com/gristlabs/asttokens) which is another way to get the source code of a node. This helps to skip some known issues and to show what the output from get_source_segment should probably be. You don't strictly need to install asttokens to run the script but it's helpful. ---------- components: Interpreter Core files: get_source_segment_test.py messages: 363056 nosy: alexmojaki priority: normal severity: normal status: open title: Generic script for finding bugs in get_source_segment type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48940/get_source_segment_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 07:15:46 2020 From: report at bugs.python.org (toonn) Date: Sun, 01 Mar 2020 12:15:46 +0000 Subject: [New-bugs-announce] [issue39811] Curses crash on ^4 Message-ID: <1583064946.05.0.172302785808.issue39811@roundup.psfhosted.org> New submission from toonn : We got a report about a crash which seems to happen in the curses library when a user pressed ^4. How do we go about debugging this? 
https://github.com/ranger/ranger/issues/1859 ---------- components: Library (Lib) messages: 363057 nosy: toonn priority: normal severity: normal status: open title: Curses crash on ^4 type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 09:03:30 2020 From: report at bugs.python.org (Antoine Pitrou) Date: Sun, 01 Mar 2020 14:03:30 +0000 Subject: [New-bugs-announce] [issue39812] Avoid daemon threads in concurrent.futures Message-ID: <1583071410.96.0.599538732629.issue39812@roundup.psfhosted.org> New submission from Antoine Pitrou : Since issue37266 (which forbade daemon threads in subinterpreters), we probably want to forego daemon threads in concurrent.futures. This means we also need a way to run an atexit-like hook before non-daemon threads are joined on (sub)interpreter shutdown. See discussion below: https://bugs.python.org/issue37266#msg362890 ---------- components: Library (Lib) messages: 363059 nosy: aeros, pitrou, tomMoral priority: normal severity: normal stage: needs patch status: open title: Avoid daemon threads in concurrent.futures type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 09:29:15 2020 From: report at bugs.python.org (Marco Sulla) Date: Sun, 01 Mar 2020 14:29:15 +0000 Subject: [New-bugs-announce] [issue39813] test_ioctl skipped -- Unable to open /dev/tty Message-ID: <1583072955.22.0.585630626982.issue39813@roundup.psfhosted.org> New submission from Marco Sulla : During `make test`, I get the error in the title. (venv_3_9) marco at buzz:~/sources/cpython_test$ ll /dev/tty crw-rw-rw- 1 root tty 5, 0 Mar 1 15:24 /dev/tty ---------- components: Tests messages: 363063 nosy: Marco Sulla priority: normal severity: normal status: open title: test_ioctl skipped -- Unable to open /dev/tty type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 13:31:42 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 01 Mar 2020 18:31:42 +0000 Subject: [New-bugs-announce] [issue39814] Hyphens not generated for split-words in a "note" directive Message-ID: <1583087502.84.0.65826697988.issue39814@roundup.psfhosted.org> New submission from Raymond Hettinger : The justification algorithm for the docs will sometimes split words at the end of a line and will add a hyphen to indicate the continuation. This works in most text but is broken within a "note" directive. For example, there is a dangling "im" split of "implementation" at the end of the third line in the note at: https://docs.python.org/3/library/functools.html#functools.total_ordering Contrast this with the correct "sup-" split of "supplies" in the first sentence.
---------- assignee: docs at python components: Documentation messages: 363075 nosy: docs at python, rhettinger priority: normal severity: normal status: open title: Hyphens not generated for split-words in a "note" directive type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 13:34:39 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 01 Mar 2020 18:34:39 +0000 Subject: [New-bugs-announce] [issue39815] functools.cached_property() not included in __all__ Message-ID: <1583087679.43.0.929392471324.issue39815@roundup.psfhosted.org> Change by Raymond Hettinger : ---------- components: Library (Lib) keywords: easy nosy: rhettinger priority: normal severity: normal status: open title: functools.cached_property() not included in __all__ type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 14:35:48 2020 From: report at bugs.python.org (Alex Hall) Date: Sun, 01 Mar 2020 19:35:48 +0000 Subject: [New-bugs-announce] [issue39816] More descriptive error message than "too many values to unpack" Message-ID: <1583091348.39.0.0422747327362.issue39816@roundup.psfhosted.org> New submission from Alex Hall : Based on the discussion in https://mail.python.org/archives/list/python-ideas at python.org/thread/C6QEAEEAELUHMLB23OBRSQK2UYU3AF5O/ When unpacking fails with an error such as: ValueError: too many values to unpack (expected 2) the name of the type of the unpacked object should be included, e.g. ValueError: too many values to unpack (expected 2) from object of type 'str' and if the type is exactly list or tuple, which are already special cased: https://github.com/python/cpython/blob/baf29b221682be0f4fde53a05ea3f57c3c79f431/Python/ceval.c#L2243-L2252 then the length can also be included: ValueError: too many values to unpack (expected 2, got 3) from object of type 'tuple' ---------- components: Interpreter Core messages: 363083 nosy: alexmojaki priority: normal severity: normal status: open title: More descriptive error message than "too many values to unpack" type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 15:17:47 2020 From: report at bugs.python.org (Oscar) Date: Sun, 01 Mar 2020 20:17:47 +0000 Subject: [New-bugs-announce] [issue39817] CRITICAL: TypeError: cannot pickle 'generator' Message-ID: <1583093867.42.0.0359831443421.issue39817@roundup.psfhosted.org> New submission from Oscar : I use Windows 10 Home 1909 CRITICAL: TypeError: cannot pickle 'generator' object PS D:\projects\user.log> Traceback (most recent call last): File "<string>", line 1, in <module> File "c:\users\user\appdata\local\programs\python\python38\lib\multiprocessing\spawn.py", line 102, in spawn_main source_process = _winapi.OpenProcess( OSError: [WinError 87] ---------- components: Library (Lib), Windows messages: 363090 nosy: dotoscat, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: CRITICAL: TypeError: cannot pickle 'generator' type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 17:11:43 2020 From: report at bugs.python.org (Grzegorz Krasoń) Date: Sun, 01 Mar 2020 22:11:43 +0000 Subject:
[New-bugs-announce] [issue39818] Declaring local variable invalidates access to a global variable Message-ID: <1583100703.5.0.728842586999.issue39818@roundup.psfhosted.org> New submission from Grzegorz Krasoń : I'm not certain if this is intended behavior, but I would like to make sure. Please resolve with the lowest priority. It seems tricky that line #6 can influence an instruction that chronologically appears earlier, especially taking into account that line #6 is never executed. ---------- components: Interpreter Core files: demo.py messages: 363099 nosy: Grzegorz Krasoń priority: normal severity: normal status: open title: Declaring local variable invalidates access to a global variable type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48941/demo.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 18:12:05 2020 From: report at bugs.python.org (Joshua Kinard) Date: Sun, 01 Mar 2020 23:12:05 +0000 Subject: [New-bugs-announce] [issue39819] NULL pointer crash in Modules/_cursesmodule.c in PyInit__curses() on MIPS uclibc-ng and ncurses-6.2 Message-ID: <1583104325.7.0.594003781513.issue39819@roundup.psfhosted.org> New submission from Joshua Kinard : Inside a MIPS O32 chroot, based on uclibc-ng-1.0.32, if python-2.7 or python-3.7 are built against ncurses-6.2, then after compilation, there is a crash in the '_curses' module. Specific to Python-3.7, the crash is in Modules/_cursesmodule.c:3482, PyInit__curses(): 3477: { 3478: int key; 3479: char *key_n; 3480: char *key_n2; 3481: for (key=KEY_MIN;key < KEY_MAX; key++) { 3482: key_n = (char *)keyname(key); 3483: if (key_n == NULL || strcmp(key_n,"UNKNOWN KEY")==0) 3484: continue; 3485: if (strncmp(key_n,"KEY_F(",6)==0) { 3486: char *p1, *p2; It looks like keyname() is casting to a NULL pointer and crashing when 'key' is 257. The issue is reproducible by running python and trying to import the curses module: # python Python 3.7.6 (default, Feb 29 2020, 22:51:27) [GCC 9.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import curses Segmentation fault Or: # python -m curses Segmentation fault dmesg shows this on the main host: [99297.243445] do_page_fault(): sending SIGSEGV to python for invalid read access from 0000000000000000 [99297.243459] epc = 0000000000000000 in python3.7m[400000+10000] [99297.243483] ra = 0000000076a68c6c in _curses.cpython-37m-mips-linux-gnu.so[76a50000+20000] I've been able to work out that the fault has something to do with ncurses itself. There is no issue if built against ncurses-6.1, and even the later datestamped patches appear to be okay. It seems like any ncurses AFTER 20190609 will exhibit the problem. ncurses-6.2 was recently released, and it, too, causes this issue. However, I am unable to get gdb to trace through any of the ncurses libraries. The faulting code is in Python itself, so I assume it's something to do with a macro definition or an include provided by ncurses-6.2 that introduces the breakage. This issue also only happens in a uclibc-ng-based root. I have had zero issues building python-3.7 in multiple glibc-based roots and even a musl-1.1.24-based root works fine. So I am not completely sure if the fault is truly with Python itself, or the combination of uclibc-ng, ncurses-6.2, and Python. As far as I know, the issue may also be specific to MIPS hardware, but I do not have a similar chroot on any other architecture to verify this with.
I'll attach to this bug a gdb backtrace of Python built with -O0 and -ggdb3. I have a core file available if that will help, but will probably need to e-mail that as I'll have to include the malfunctioning python binary and the separate debug symbol files generated from my build. ---------- components: Extension Modules files: py37-gdb-bt-sigsegv-cursesmodule-uclibc-20200301.txt messages: 363107 nosy: kumba priority: normal severity: normal status: open title: NULL pointer crash in Modules/_cursesmodule.c in PyInit__curses() on MIPS uclibc-ng and ncurses-6.2 type: crash versions: Python 2.7, Python 3.7 Added file: https://bugs.python.org/file48942/py37-gdb-bt-sigsegv-cursesmodule-uclibc-20200301.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 18:30:35 2020 From: report at bugs.python.org (Marco Sulla) Date: Sun, 01 Mar 2020 23:30:35 +0000 Subject: [New-bugs-announce] [issue39820] Bracketed paste mode for REPL Message-ID: <1583105435.88.0.337448475427.issue39820@roundup.psfhosted.org> New submission from Marco Sulla : I suggest adding an implementation of bracketed paste mode in the REPL. Currently if you, for example, copy & paste a piece of Python code to see if it works, and the code has a blank line without indentation while the previous and next lines are indented, the REPL raises an error. If you create a .py file, paste the same code and run it with the python interpreter, no error is raised, since the syntax is legit. Bracketed paste mode is implemented in many text editors, such as vi. ---------- components: Interpreter Core messages: 363109 nosy: Marco Sulla priority: normal severity: normal status: open title: Bracketed paste mode for REPL type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 1 23:54:37 2020 From: report at bugs.python.org (Abhishek) Date: Mon, 02 Mar 2020 04:54:37 +0000 Subject: [New-bugs-announce] [issue39821] grp library functions grp.getgrnam() & grp.getgrgid() returning incorrect gr_mem information Message-ID: <1583124877.81.0.857891340735.issue39821@roundup.psfhosted.org> New submission from Abhishek : If the root user is part of a Linux group, then in the response of getgrnam() & grp.getgrgid(), in the gr_mem part, the root user is not listed.
[root at biplab2 ~]# getent group | grep starwars starwars:x:1011:root,abhi [root at biplab2 ~]# python3 Python 3.6.8 (default, Dec 5 2019, 16:11:43) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux >>> import grp >>> grp.getgrnam('starwars') grp.struct_group(gr_name='starwars', gr_passwd='x', gr_gid=1011, gr_mem=['abhi']) >>> grp.getgrgid(1011) grp.struct_group(gr_name='starwars', gr_passwd='x', gr_gid=1011, gr_mem=['abhi']) But, when grp.getgrall() is run, we get the correct response (gr_mem includes the root user as well): >>> grp.getgrall() grp.struct_group(gr_name='starwars', gr_passwd='x', gr_gid=1011, gr_mem=['root', 'abhi'])] ---------- messages: 363113 nosy: abhi.sharma priority: normal severity: normal status: open title: grp library functions grp.getgrnam() & grp.getgrgid() returning incorrect gr_mem information type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 02:39:05 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 02 Mar 2020 07:39:05 +0000 Subject: [New-bugs-announce] [issue39822] Use NULL instead of None for empty attrib in C implementation of Element Message-ID: <1583134745.29.0.697381999553.issue39822@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently None is used instead of an empty dictionary for the attrib field in the C implementation of Element in ElementTree. It is a pure optimization: an empty dict takes memory and its creation has a cost. The proposed PR makes NULL be used instead of None. This simplifies the code. ---------- components: Extension Modules messages: 363133 nosy: eli.bendersky, scoder, serhiy.storchaka priority: normal severity: normal status: open title: Use NULL instead of None for empty attrib in C implementation of Element versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 04:00:46 2020 From: report at bugs.python.org (S Murthy) Date: Mon, 02 Mar 2020 09:00:46 +0000 Subject: [New-bugs-announce] [issue39823] Disassembly - improve documentation for bytecode instruction class and set source line no. attribute for every instruction Message-ID: <1583139646.83.0.873183200808.issue39823@roundup.psfhosted.org> New submission from S Murthy : I note that on disassembling a piece of source code (via source strings or code objects) the corresponding sequence of bytecode instruction objects (https://docs.python.org/3/library/dis.html#dis.Instruction) do not always have the `starts_line` attribute set - the storage and display of this line no. seems to be based on whether a given instruction is the first in a block of instructions which implement a given source line. I think it would be better, for mapping source and logical lines of code to bytecode instruction blocks, to set `starts_line` for every instruction, and amend the bytecode printing method (`dis._disassemble_bytes`) to keep the existing behaviour by detecting whether an instruction is the first line of an instruction block. ATM `Instruction` objects are created and generated within this loop in `dis._get_instructions_bytes`: def _get_instructions_bytes(code, varnames=None, names=None, constants=None, cells=None, linestarts=None, line_offset=0): """Iterate over the instructions in a bytecode string. Generates a sequence of Instruction namedtuples giving the details of each opcode.
Additional information about the code's runtime environment (e.g. variable names, constants) can be specified using optional arguments. """ labels = findlabels(code) starts_line = None for offset, op, arg in _unpack_opargs(code): if linestarts is not None: starts_line = linestarts.get(offset, None) ... ... So it's this line starts_line = linestarts.get(offset, None) which currently causes `starts_line` to be set to `None` for every instruction which isn't the first in an instruction block - linestarts is a dict of source line numbers and offsets of the first instructions starting the corresponding instruction blocks. My idea is to (1) change that line above to starts_line = linestarts.get(offset, starts_line) which ensures every instruction will have the corresponding source line no. set, (2) amend `Instruction._disassemble` to add a new optional argument `print_start_line` with a default of `True` to determine whether to print the source line no., and (3) amend `dis._disassemble_bytes` to accept a new optional argument `start_line_by_block` with a default of `True` which can be used to preserve the existing behaviour of printing source line numbers by instruction block. I was wondering whether this sounds OK, if so, I am happy to submit a PR. ---------- components: Library (Lib) messages: 363140 nosy: smurthy priority: normal severity: normal status: open title: Disassembly - improve documentation for bytecode instruction class and set source line no. attribute for every instruction type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 04:52:22 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 02 Mar 2020 09:52:22 +0000 Subject: [New-bugs-announce] [issue39824] Multi-phase extension module (PEP 489): don't call m_traverse, m_clear nor m_free if md_state is NULL Message-ID: <1583142742.19.0.316214627808.issue39824@roundup.psfhosted.org> New submission from STINNER Victor : Currently, when a module implements m_traverse(), m_clear() or m_free(), these methods can be called with md_state=NULL even if the module implements the "Multi-phase extension module initialization" API (PEP 489). I'm talking about these module methods: * tp_traverse: module_traverse() calls md_def->m_traverse() if m_traverse is not NULL * tp_clear: module_clear() calls md_def->m_clear() if m_clear is not NULL * tp_dealloc: module_dealloc() calls md_def->m_free() if m_free is not NULL Because of that, the implementation of these methods must check manually if md_state is NULL or not. I propose to change module_traverse(), module_clear() and module_dealloc() to not call m_traverse(), m_clear() and m_free() if md_state is NULL and m_size > 0. "m_size > 0" is a heuristic to check if the module implements the "Multi-phase extension module initialization" API (PEP 489). For example, the _pickle module doesn't fully implement PEP 489: m_size > 0, but PyInit__pickle() uses PyModule_Create(). See bpo-32374 which documented that "m_traverse may be called with m_state=NULL" (GH-5140).
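To illustrate the current situation, here is a minimal sketch of the guard that an extension author has to write today so that these slots survive being called before the per-module state is allocated (this is not taken from CPython; the names module_state, cached_obj and mymod_* are made up):

typedef struct {
    PyObject *cached_obj;   /* example per-module state */
} module_state;

static int
mymod_traverse(PyObject *module, visitproc visit, void *arg)
{
    module_state *state = PyModule_GetState(module);
    if (state == NULL) {    /* exec slot not run yet: nothing to visit */
        return 0;
    }
    Py_VISIT(state->cached_obj);
    return 0;
}

static int
mymod_clear(PyObject *module)
{
    module_state *state = PyModule_GetState(module);
    if (state == NULL) {
        return 0;
    }
    Py_CLEAR(state->cached_obj);
    return 0;
}

With the proposed change, these early-return checks would become unnecessary for modules that really use multi-phase initialization (m_size > 0).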
---------- components: Interpreter Core messages: 363145 nosy: vstinner priority: normal severity: normal status: open title: Multi-phase extension module (PEP 489): don't call m_traverse, m_clear nor m_free if md_state is NULL versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 08:40:07 2020 From: report at bugs.python.org (Antoine Pitrou) Date: Mon, 02 Mar 2020 13:40:07 +0000 Subject: [New-bugs-announce] [issue39825] EXT_SUFFIX inconsistent between sysconfig and distutils.sysconfig (Windows) Message-ID: <1583156407.62.0.364374237882.issue39825@roundup.psfhosted.org> New submission from Antoine Pitrou : On Windows, Python 3.7.6 and 3.8.1: ``` >>> import sysconfig >>> sysconfig.get_config_var('EXT_SUFFIX') '.pyd' >>> from distutils import sysconfig >>> sysconfig.get_config_var('EXT_SUFFIX') '.cp38-win_amd64.pyd' ``` The sysconfig answer is probably wrong (the ABI-qualified extension '.cp38-win_amd64.pyd' should be preferred). ---------- components: Distutils, Library (Lib) messages: 363171 nosy: dstufft, eric.araujo, paul.moore, pitrou, steve.dower, tarek, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: EXT_SUFFIX inconsistent between sysconfig and distutils.sysconfig (Windows) type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 10:04:37 2020 From: report at bugs.python.org (lorb) Date: Mon, 02 Mar 2020 15:04:37 +0000 Subject: [New-bugs-announce] [issue39826] logging HTTPHandler does not support proxy Message-ID: <1583161477.89.0.356688736779.issue39826@roundup.psfhosted.org> New submission from lorb : The HTTPHandler does not support using a proxy. It would be necessary to subclass it and reimplement `emit` to enable passing in proxy settings. Adding a hook to make it easy to customize the connection used would solve this. ---------- components: Library (Lib) messages: 363181 nosy: lorb priority: normal severity: normal status: open title: logging HTTPHandler does not support proxy type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 10:11:30 2020 From: report at bugs.python.org (Till Korten) Date: Mon, 02 Mar 2020 15:11:30 +0000 Subject: [New-bugs-announce] [issue39827] setting a locale that uses comma as decimal separator breaks tkinter.DoubleVar Message-ID: <1583161890.16.0.234726594776.issue39827@roundup.psfhosted.org> New submission from Till Korten : This issue occurs when a locale is set that uses comma as decimal separator (e.g. locale.setlocale(locale.LC_NUMERIC, 'de_DE.utf8')). I have a tkinter.Spinbox with increment=0.1 connected to a tkinter.DoubleVar. When I change the value of the Spinbox using the arrow buttons and subsequently try to read out the variable with tkinter.DoubleVar.get(), my code throws the following error: _tkinter.TclError: expected floating-point number but got "0,1".
Here is a minimal code example: ------------- import tkinter import locale locale.setlocale(locale.LC_NUMERIC, 'de_DE.utf8') class TestDoubleVar(): def __init__(self): root = tkinter.Tk() self.var = tkinter.DoubleVar() self.var.set(0.8) number = tkinter.Spinbox( root, from_=0, to=1, increment=0.1, textvariable=self.var, command=self.update, width=4 ) number.pack(side=tkinter.LEFT) root.mainloop() def update(self, *args): print(float(self.var.get())) if __name__ == '__main__': TestDoubleVar() ------- Actual result: the code throws an error Expected result: the code should print the values of the DoubleVar even with a locale set that uses comma as the decimal separator. n.b. the problem also occurs with tkinter.Scale ---------- components: Tkinter files: test_doublevar.py messages: 363184 nosy: thawn priority: normal severity: normal status: open title: setting a locale that uses comma as decimal separator breaks tkinter.DoubleVar type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48943/test_doublevar.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 10:30:28 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 02 Mar 2020 15:30:28 +0000 Subject: [New-bugs-announce] [issue39828] json.tool should catch BrokenPipeError Message-ID: <1583163028.53.0.26021214856.issue39828@roundup.psfhosted.org> New submission from STINNER Victor : The json.tool module doesn't catch BrokenPipeError: ----------------------- $ echo "{}" | python3 -m json.tool | true BrokenPipeError: [Errno 32] Broken pipe During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/lib64/python3.7/json/tool.py", line 45, in main() File "/usr/lib64/python3.7/json/tool.py", line 41, in main outfile.write('\n') BrokenPipeError: [Errno 32] Broken pipe ----------------------- json.tool should catch BrokenPipeError. ---------- components: Library (Lib) messages: 363185 nosy: vstinner priority: normal severity: normal status: open title: json.tool should catch BrokenPipeError versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 10:36:16 2020 From: report at bugs.python.org (Kim-Adeline Miguel) Date: Mon, 02 Mar 2020 15:36:16 +0000 Subject: [New-bugs-announce] [issue39829] __len__ called twice in the list() constructor Message-ID: <1583163376.54.0.165592229385.issue39829@roundup.psfhosted.org> New submission from Kim-Adeline Miguel : (See #33234) Recently we added Python 3.8 to our CI test matrix, and we noticed a possible backward incompatibility with the list() constructor. We found that __len__ is getting called twice, while before 3.8 it was only called once. Here's an example: class Foo: def __iter__(self): print("iter") return iter([3, 5, 42, 69]) def __len__(self): print("len") return 4 Calling list(Foo()) using Python 3.7 prints: iter len But calling list(Foo()) using Python 3.8 prints: len iter len It looks like this behaviour was introduced for #33234 with PR GH-9846. We realize that this was merged a while back, but at least we wanted to make the team aware of this change in behaviour. 
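If it helps to narrow this down, here is a small illustration (not part of the original report, and assuming the extra call comes from the pre-sizing length hint added for #33234): passing an already-created iterator instead of the object itself avoids the second __len__ call, because the length hint is then taken from the iterator rather than from Foo:

class Foo:
    def __iter__(self):
        print("iter")
        return iter([3, 5, 42, 69])

    def __len__(self):
        print("len")
        return 4

list(Foo())        # 3.8 prints: len, iter, len (as reported above)
list(iter(Foo()))  # prints: iter -- Foo.__len__ is never consulted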
---------- components: Interpreter Core messages: 363186 nosy: brett.cannon, eric.snow, kimiguel, pablogsal, rhettinger priority: normal severity: normal status: open title: __len__ called twice in the list() constructor type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 14:17:49 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Mon, 02 Mar 2020 19:17:49 +0000 Subject: [New-bugs-announce] [issue39830] zipfile.Path is not included in __all__ Message-ID: <1583176669.44.0.585908649046.issue39830@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Since zipfile.Path is a public and documented API, it could be included in __all__. This might be a problem for code that uses the below pattern: from zipfile import * from pathlib import Path If this is accepted this can be a good beginner issue. ---------- components: Library (Lib) messages: 363199 nosy: barry, jaraco, xtreak priority: normal severity: normal status: open title: zipfile.Path is not included in __all__ type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 14:40:18 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 02 Mar 2020 19:40:18 +0000 Subject: [New-bugs-announce] [issue39831] Reference leak in PyErr_WarnEx() Message-ID: <1583178018.47.0.586114300252.issue39831@roundup.psfhosted.org> New submission from Serhiy Storchaka : The test added for issue38913 exposed a reference leak in PyErr_WarnEx(). ---------- components: Interpreter Core messages: 363203 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Reference leak in PyErr_WarnEx() type: resource usage versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 19:25:16 2020 From: report at bugs.python.org (Norbert) Date: Tue, 03 Mar 2020 00:25:16 +0000 Subject: [New-bugs-announce] [issue39832] Modules with decomposable characters in module name not found on macOS Message-ID: <1583195116.72.0.709878882387.issue39832@roundup.psfhosted.org> New submission from Norbert : Modules whose names contain characters that are in precomposed form but can be decomposed in Normalization Form D can't be found on macOS. To reproduce: 1. Download and unzip the attached file Modules.zip. This produces a directory Modules with four Python source files. 2. In Terminal, go to the directory that contains Modules. 3. Run "python3 -m Modules.Import". Expected behavior: The following lines should be generated: Maerchen Märchen Actual behavior: The first line, "Maerchen", is generated, but then an error occurs: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 193, in _run_module_as_main return _run_code(code, main_globals, None, File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 86, in _run_code exec(code, run_globals) File "/Users/business/tmp/pyimports/Modules/Import.py", line 5, in <module> from Modules.Märchen import hello2 ModuleNotFoundError: No module named 'Modules.Märchen' Evaluation: In the source file Modules/Import.py, the name of the module "Märchen" is written with the precomposed character U+00E4.
The file name Märchen.py uses the decomposed character sequence U+0061 U+0308 instead. Macintosh file names commonly use a variant of Normalization Form D in file names - the old file system HFS enforces this, and while APFS doesn't, the Finder still generates file names in this form. U+00E4 and U+0061 U+0308 are canonically equivalent, so they should be treated as equal in module loading. Tested configuration: CPython 3.8.2 macOS 10.14.6 ---------- components: Interpreter Core files: Modules.zip messages: 363224 nosy: Norbert priority: normal severity: normal status: open title: Modules with decomposable characters in module name not found on macOS type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48944/Modules.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 2 21:16:05 2020 From: report at bugs.python.org (Evan) Date: Tue, 03 Mar 2020 02:16:05 +0000 Subject: [New-bugs-announce] [issue39833] Bug in html parsing module triggered by malformed input Message-ID: <1583201765.99.0.950637389492.issue39833@roundup.psfhosted.org> New submission from Evan : Relevant base python library-- C:\Users\User\AppData\Local\Programs\Python\Python38\lib\_markupbase.py The issue- After parsing over 900 SEC filings using beautifulsoup4, I get this user warning. UserWarning: unknown status keyword 'ERF' in marked section warnings.warn(msg) Followed by a traceback .... File "C:\Users\XXXX\AppData\Local\Programs\Python\Python38\lib\site-packages\bs4\__init__.py", line 325, in __init__ self._feed() .... File "C:\Users\XXXX\AppData\Local\Programs\Python\Python38\lib\_markupbase.py", line 160, in parse_marked_section if not match: UnboundLocalError: local variable 'match' referenced before assignment It's probably due to malformed input from one of the docs. 144 lines into the _markupbase lib we have:

    # Internal -- parse a marked section
    # Override this to handle MS-word extension syntax <![if word]>content<![endif]>
    def parse_marked_section(self, i, report=1):
        rawdata= self.rawdata
        assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()"
        sectName, j = self._scan_name( i+3, i )
        if j < 0:
            return j
        if sectName in {"temp", "cdata", "ignore", "include", "rcdata"}:
            # look for standard ]]> ending
            match= _markedsectionclose.search(rawdata, i+3)
        elif sectName in {"if", "else", "endif"}:
            # look for MS Office ]]> ending
            match= _msmarkedsectionclose.search(rawdata, i+3)
        else:
            self.error('unknown status keyword %r in marked section' % rawdata[i+3:j])
        if not match:
            return -1
        if report:
            j = match.start(0)
            self.unknown_decl(rawdata[i+3: j])
        return match.end(0)

`match` should be set to None in the fall-through else statement right before `if not match`. ---------- components: Library (Lib) messages: 363234 nosy: SanJacintoJoe priority: normal severity: normal status: open title: Bug in html parsing module triggered by malformed input type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 03:44:30 2020 From: report at bugs.python.org (E) Date: Tue, 03 Mar 2020 08:44:30 +0000 Subject: [New-bugs-announce] [issue39834] + vs. % operator precedence not correct Message-ID: <1583225070.21.0.47230314315.issue39834@roundup.psfhosted.org> New submission from E : + has operator precedence over % in python: https://docs.python.org/3/reference/expressions.html However: > i=5 > i+5 % 10 10 > 10 % 10 0 > (i+5) % 10 0 Thus, for + to take precedence over %, parentheses need to be used. ---------- components: Interpreter Core messages: 363241 nosy: ergun priority: normal severity: normal status: open title: + vs.
% operator precedence not correct type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 10:18:37 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 03 Mar 2020 15:18:37 +0000 Subject: [New-bugs-announce] [issue39836] Implement PyObject_GetMemoryView Message-ID: <1583248717.8.0.114073017278.issue39836@roundup.psfhosted.org> New submission from Joannah Nanjekye : We have a memory-view object represented with the following structure: typedef struct { PyObject_VAR_HEAD _PyManagedBufferObject *mbuf; /* managed buffer */ Py_hash_t hash; /* hash value for read-only views */ int flags; /* state flags */ Py_ssize_t exports; /* number of buffer re-exports */ Py_buffer view; /* private copy of the exporter's view */ PyObject *weakreflist; Py_ssize_t ob_array[1]; /* shape, strides, suboffsets */ } PyMemoryViewObject; It would be good to have the implementation for PyObject_GetMemoryView which returns a memory-view object as was originally intended in PEP 3118 i.e : PyObject *PyObject_GetMemoryView(PyObject *obj) ---------- components: C API messages: 363265 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Implement PyObject_GetMemoryView type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 10:18:33 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 03 Mar 2020 15:18:33 +0000 Subject: [New-bugs-announce] [issue39835] Implement PyObject_CopyToObject Message-ID: <1583248713.2.0.213749571334.issue39835@roundup.psfhosted.org> New submission from Joannah Nanjekye : I suggest implementing a C-API for copying data into a buffer exported by an obj. i.e int PyObject_CopyToObject(PyObject *obj, void *buf, Py_ssize_t len, char fortran) as was intended in PEP 3118. The documentation there says this functionality should: "Copy len bytes of data pointed to by the contiguous chunk of memory pointed to by buf into the buffer exported by obj. Return 0 on success and return -1 and raise an error on failure. If the object does not have a writable buffer, then an error is raised. If fortran is 'F', then if the object is multi-dimensional, then the data will be copied into the array in Fortran-style (first dimension varies the fastest). If fortran is 'C', then the data will be copied into the array in C-style (last dimension varies the fastest). If fortran is 'A', then it does not matter and the copy will be made in whatever way is more efficient." ---------- components: C API messages: 363264 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Implement PyObject_CopyToObject type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 12:05:19 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 03 Mar 2020 17:05:19 +0000 Subject: [New-bugs-announce] [issue39837] Remove Azure Pipelines from GitHub PRs Message-ID: <1583255119.77.0.277327127951.issue39837@roundup.psfhosted.org> New submission from STINNER Victor : The Azure Pipelines jobs have been reimplemented as GitHub actions which are better integrated with GitHub: * Docs / Docs (pull_request) * Tests / Windows (x64) (pull_request) * Tests / macOS (pull_request) * Tests / Ubuntu (pull_request) * etc. 
Azure Pipelines runs the same jobs, but it looks slower. It is a voting check and so prevents merging a PR until it completes. I propose to simply remove the job. I already proposed it on python-dev: https://mail.python.org/archives/list/python-dev at python.org/message/NC2ZS4WSF5AYGUUMBB7I4YIQ4YJXWBA5/ In this thread: https://mail.python.org/archives/list/python-dev at python.org/thread/2NSPJUEWULTLLALR3HY3H2PRYAUT474C/#NC2ZS4WSF5AYGUUMBB7I4YIQ4YJXWBA5 ---------- components: Tests messages: 363279 nosy: vstinner priority: normal severity: normal status: open title: Remove Azure Pipelines from GitHub PRs versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 13:12:44 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 03 Mar 2020 18:12:44 +0000 Subject: [New-bugs-announce] [issue39838] Possible unnecessary redifinition of _POSIX_C_SOURCE Message-ID: <1583259164.39.0.686980300564.issue39838@roundup.psfhosted.org> New submission from Joannah Nanjekye : Please note the compile warning: ./pyconfig.h:1590: warning: "_POSIX_C_SOURCE" redefined #define _POSIX_C_SOURCE 200809L In file included from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/string.h:26, from /workspace/cpython/Modules/expat/xmltok.c:34: /usr/include/features.h:295: note: this is the location of the previous definition # define _POSIX_C_SOURCE 199506L There must be a way of avoiding this warning. ---------- components: Interpreter Core messages: 363286 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Possible unnecessary redifinition of _POSIX_C_SOURCE type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 13:24:30 2020 From: report at bugs.python.org (Рустам Шах) Date: Tue, 03 Mar 2020 18:24:30 +0000 Subject: [New-bugs-announce] [issue39839] Non-working error handler when creating a task with assigning a variable Message-ID: <1583259870.1.0.174919125514.issue39839@roundup.psfhosted.org> New submission from Рустам Шах : # This example does not work due to assigning the task to a variable import asyncio import logging def handle_exception(loop, context): msg = context.get("exception", context["message"]) logging.error("Caught exception: %s", msg) async def test(): await asyncio.sleep(1) raise Exception("Crash.") def main(): loop = asyncio.get_event_loop() loop.set_exception_handler(handle_exception) # if "task = " is removed, the exception handler will work task = loop.create_task(test()) try: loop.run_forever() finally: loop.close() if __name__ == "__main__": main() ---------- components: asyncio messages: 363287 nosy: asvetlov, yselivanov, Рустам Шах
priority: normal severity: normal status: open title: Non-working error handler when creating a task with assigning a variable versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 14:53:30 2020 From: report at bugs.python.org (Augie Fackler) Date: Tue, 03 Mar 2020 19:53:30 +0000 Subject: [New-bugs-announce] [issue39840] FileNotFoundError et al show b-prefix on filepaths if passed as bytes Message-ID: <1583265210.54.0.278699602043.issue39840@roundup.psfhosted.org> New submission from Augie Fackler : I'm not really sure if this is a bug per se, so please feel encouraged to close as WAI if you like, but: >>> open(b'foo', 'rb') Traceback (most recent call last): File "<stdin>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: b'foo' Seems a little weird to me (and it shows up in the UI layer of hg), because the path-as-bytes seems like it shouldn't show up in the human-readable version of the exception (I think I would have expected the fsdecode() of the bytes, for consistency?) But that's up to you. If the presentation format of this feels right to Python that's no big deal. ---------- messages: 363297 nosy: durin42 priority: normal severity: normal status: open title: FileNotFoundError et al show b-prefix on filepaths if passed as bytes versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 15:59:02 2020 From: report at bugs.python.org (Alan Robertson) Date: Tue, 03 Mar 2020 20:59:02 +0000 Subject: [New-bugs-announce] [issue39841] "as" variable in except block deletes local variables with same name Message-ID: <1583269142.54.0.23852633046.issue39841@roundup.psfhosted.org> New submission from Alan Robertson : When an exception is caught with an "as" variable, it deletes local variables with the same name. This is certainly surprising, and doesn't appear to be a documented behavior (but maybe I don't know where to look). The word "bug" comes to mind. The following few lines of code illustrate it nicely: def testme(): err = Exception("nothing worked") try: raise ValueError("no value") except ValueError as err: pass print(err) testme() ---------- components: Interpreter Core files: foo.py messages: 363300 nosy: alanr priority: normal severity: normal status: open title: "as" variable in except block deletes local variables with same name versions: Python 3.7 Added file: https://bugs.python.org/file48946/foo.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 3 18:57:59 2020 From: report at bugs.python.org (Marco Sulla) Date: Tue, 03 Mar 2020 23:57:59 +0000 Subject: [New-bugs-announce] [issue39842] partial_format() Message-ID: <1583279879.75.0.642484587376.issue39842@roundup.psfhosted.org> New submission from Marco Sulla : In the `string` module, there's a very little known class `Template`. It implements a very simple template, but it has an interesting method: `safe_substitute()`. `safe_substitute()` permits you to not fill the entire Template at one time. On the contrary, it substitutes the placeholders that are passed, and leaves the others untouched. I think it could be useful to have a similar method for the format minilanguage. I propose a partial_format() method. === WHY I think this is useful? === This way, you can create subtemplates from a main template.
You might want to use the template to create a bunch of strings, all of them with the same value for some placeholders, but different values for other ones. This way you do *not* have to reuse the same main template and substitute the unchanging placeholders every time. `partial_format()` should act as `safe_substitute()`: if some placeholder is missing a value, no error will be raised. On the contrary, the placeholder is left untouched. Some examples: >>> "{} {}".partial_format(1) '1 {}' >>> "{x} {a}".partial_format(a="elephants") '{x} elephants' >>> "{:-f} and {:-f} nights".partial_format(1000) '1000 and {:-f} nights' ---------- components: Interpreter Core messages: 363317 nosy: Marco Sulla priority: normal severity: normal status: open title: partial_format() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 00:00:55 2020 From: report at bugs.python.org (Mikko Nylén) Date: Wed, 04 Mar 2020 05:00:55 +0000 Subject: [New-bugs-announce] [issue39843] Merged fix for bpo-17560 missing from changelog Message-ID: <1583298055.95.0.17111086432.issue39843@roundup.psfhosted.org> New submission from Mikko Nylén : A fix for bpo-17560 (about passing very large objects between processes with multiprocessing) was merged in Nov 2018 in this PR: https://github.com/python/cpython/pull/10305 However, I see no mention of this in the changelog, even though the issue page reports the fix to be in version 3.8. Is the changelog just missing an entry for this, or was the fix pulled back later and never made its way into 3.8? ---------- assignee: docs at python components: Documentation messages: 363327 nosy: Mikko Nylén, docs at python priority: normal severity: normal status: open title: Merged fix for bpo-17560 missing from changelog type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 02:53:22 2020 From: report at bugs.python.org (Jacin Ferreira) Date: Wed, 04 Mar 2020 07:53:22 +0000 Subject: [New-bugs-announce] [issue39844] IDLE 3.8.2 on MacOS 10.15.3 Launches to Black Windows Message-ID: <1583308402.69.0.97352724615.issue39844@roundup.psfhosted.org> New submission from Jacin Ferreira : 0) MacBook Pro 13" running MacOS 10.15.3 Catalina 1) Fresh install of Python 3.8.2 from python.org 2) Launch IDLE 3) Observe Python 3.8.2 Shell 4) Go to File menu 5) Select Preferences 6) Observe Preferences EXPECTED RESULTS Windows should show text and be usable ACTUAL RESULTS Windows are all black and no text shows. IDLE is unusable in this state.
---------- assignee: terry.reedy components: IDLE, macOS files: Screen Shot 2020-03-03 at 11.51.21 PM.png messages: 363334 nosy: Jacin Ferreira, ned.deily, ronaldoussoren, terry.reedy priority: normal severity: normal status: open title: IDLE 3.8.2 on MacOS 10.15.3 Launches to Black Windows type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48947/Screen Shot 2020-03-03 at 11.51.21 PM.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 03:40:21 2020 From: report at bugs.python.org (Ion Cebotari) Date: Wed, 04 Mar 2020 08:40:21 +0000 Subject: [New-bugs-announce] [issue39845] Argparse on Python 3.7.1 (Windows) appends double quotes to string if it ends with backward slash Message-ID: <1583311221.81.0.793172357388.issue39845@roundup.psfhosted.org> New submission from Ion Cebotari : I have this code for a tool that copies files from one directory to another: parser = argparse.ArgumentParser() parser.add_argument('src_dir') parser.add_argument('dest_dir') args = parser.parse_args() It works fine on Unix, but on Windows, if the destination path ends with a backward slash, it seems that argparse parses the string as if the backslash escaped a double quote, and returns the string with the double quote appended. For example, calling the script: (base) PS Z:\test> python.exe .\main.py -d Z:\tmp\test\DJI\ 'C:\unu doi\' will create the destination path string: C:\unu doi" The source path, even though it ends with the backslash as well, isn't modified by argparse. I've worked around this issue by using this validation function for the arguments: def is_valid_dir_path(string): """ Checks if the path is a valid path :param string: The path that needs to be validated :return: The validated path """ if sys.platform.startswith('win') and string.endswith('"'): string = string[:-1] if os.path.isdir(string): return string else: raise NotADirectoryError(string) parser = argparse.ArgumentParser() parser.add_argument('src_dir', type=is_valid_dir_path) parser.add_argument('dest_dir', type=is_valid_dir_path) args = parser.parse_args() ---------- messages: 363339 nosy: 888xray999 priority: normal severity: normal status: open title: Argparse on Python 3.7.1 (Windows) appends double quotes to string if it ends with backward slash type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 05:24:15 2020 From: report at bugs.python.org (Daniel Chimeno) Date: Wed, 04 Mar 2020 10:24:15 +0000 Subject: [New-bugs-announce] [issue39846] Register .whl as a unpack format in shutil unpack Message-ID: <1583317455.42.0.20954925139.issue39846@roundup.psfhosted.org> New submission from Daniel Chimeno : While working on a project with Python wheels I found myself adding: ```` import shutil shutil.register_unpack_format('whl', ['.whl'], shutil._unpack_zipfile) ```` Since PEP 427 explicitly says wheels are ZIP-format archives (https://www.python.org/dev/peps/pep-0427/), I wonder if it's worthwhile to register the unpack format by default so the shutil.unpack_archive() function works without adding it.
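For reference, here is a small sketch of how the same registration can be done with only public APIs; this is not part of the report, it simply avoids the private shutil._unpack_zipfile helper, and the wheel filename is made up:

import shutil
import zipfile

def unpack_wheel(filename, extract_dir):
    # A wheel is a ZIP archive (PEP 427), so plain extraction is enough here.
    with zipfile.ZipFile(filename) as zf:
        zf.extractall(extract_dir)

shutil.register_unpack_format('whl', ['.whl'], unpack_wheel)
shutil.unpack_archive('example_pkg-1.0-py3-none-any.whl', 'unpacked/')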
---------- components: Library (Lib) messages: 363341 nosy: dchimeno priority: normal severity: normal status: open title: Register .whl as a unpack format in shutil unpack type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 08:20:26 2020 From: report at bugs.python.org (And Clover) Date: Wed, 04 Mar 2020 13:20:26 +0000 Subject: [New-bugs-announce] [issue39847] EnterNonRecursiveMutex on win32 can hang for 49.7 days Message-ID: <1583328026.98.0.491106865905.issue39847@roundup.psfhosted.org> New submission from And Clover : Since bpo-15038, waiting to acquire locks/events/etc from _thread/threading on Windows can fail to return long past the requested timeout. Cause: https://github.com/python/cpython/blob/3.8/Python/thread_nt.h#L85 using 32-bit GetTickCount/DWORD, which will overflow at around 49.7 days of uptime. If the WaitForSingleObjectEx call in PyCOND_TIMEDWAIT returns later than the 'target' time, and the tick count overflows in that gap, 'milliseconds' will become very large (up to another 49.7 days) and the next PyCOND_TIMEDWAIT will be stuck for a long time. Where we've seen it is where it's most likely to happen: when the machine is hibernated during the WaitForSingleObjectEx call. I believe the TickCount continues to increase during hibernation so there is a much bigger gap between 'target' and 'now' for the overflow to happen in. Simplest fix is probably to switch to GetTickCount64/ULONGLONG. We should be able to get away with using this now we no longer support WinXP. ---------- components: Library (Lib), Windows messages: 363346 nosy: aclover, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: EnterNonRecursiveMutex on win32 can hang for 49.7 days type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 09:56:30 2020 From: report at bugs.python.org (Marco Sulla) Date: Wed, 04 Mar 2020 14:56:30 +0000 Subject: [New-bugs-announce] [issue39848] Warning: 'classifiers' should be a list, got type 'tuple' Message-ID: <1583333790.29.0.541399000683.issue39848@roundup.psfhosted.org> New submission from Marco Sulla : I got this warning. I suppose that `distutils` can use any iterable. ---------- components: Distutils messages: 363354 nosy: Marco Sulla, dstufft, eric.araujo priority: normal severity: normal status: open title: Warning: 'classifiers' should be a list, got type 'tuple' versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 10:08:24 2020 From: report at bugs.python.org (Dong-hee Na) Date: Wed, 04 Mar 2020 15:08:24 +0000 Subject: [New-bugs-announce] [issue39849] Compiler warninig: warning: variable ‘res’ set but not used [-Wunused-but-set-variable] Message-ID: <1583334504.92.0.978999288989.issue39849@roundup.psfhosted.org> New submission from Dong-hee Na : Modules/_testcapimodule.c:6808:15: warning: variable ‘res’ set but not used [-Wunused-but-set-variable] 6808 | PyObject *res; This warning appeared after bpo-38913 was fixed.
My GCC version is 9.2.0 :) ---------- messages: 363355 nosy: corona10, serhiy.storchaka priority: normal severity: normal status: open title: Compiler warninig: warning: variable ?res? set but not used [-Wunused-but-set-variable] type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 11:12:51 2020 From: report at bugs.python.org (Nathan Michaels) Date: Wed, 04 Mar 2020 16:12:51 +0000 Subject: [New-bugs-announce] [issue39850] multiprocessing.connection.Listener fails to close with null byte in AF_UNIX socket name. Message-ID: <1583338371.71.0.604294788421.issue39850@roundup.psfhosted.org> Change by Nathan Michaels : ---------- components: Library (Lib) nosy: nmichaels priority: normal severity: normal status: open title: multiprocessing.connection.Listener fails to close with null byte in AF_UNIX socket name. type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 13:03:06 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 04 Mar 2020 18:03:06 +0000 Subject: [New-bugs-announce] [issue39851] tarfile: Exception ignored in (... stdout ...) BrokenPipeError Message-ID: <1583344986.43.0.320216561528.issue39851@roundup.psfhosted.org> New submission from STINNER Victor : When a stdout pipe is closed on the consumer side, Python logs an exception at exit: $ ./python -m tarfile -l archive.tar|true Exception ignored in: <_io.TextIOWrapper name='' mode='w' encoding='utf-8'> BrokenPipeError: [Errno 32] Broken pipe I tried to flush explicitly stdout at exit: it allows to catch BrokenPipeError... but Python still logs the exception, since it triggered by the TextIOWrapper finalizer which tries to flush the file again. See also bpo-39828: "json.tool should catch BrokenPipeError". ---------- components: Library (Lib) messages: 363368 nosy: corona10, vstinner priority: normal severity: normal status: open title: tarfile: Exception ignored in (... stdout ...) 
BrokenPipeError versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 13:12:17 2020 From: report at bugs.python.org (Dave Liptack) Date: Wed, 04 Mar 2020 18:12:17 +0000 Subject: [New-bugs-announce] [issue39852] IDLE: Copy/Paste behaves like Cut/Paste Message-ID: <1583345537.82.0.492123275621.issue39852@roundup.psfhosted.org> New submission from Dave Liptack : Python 3.8.1 IDLE 3.8.1 When COPYing text in IDLE, right-click and PASTE behaves like CUT/PASTE This also occurs with COPY -> Go to Line -> PASTE This does not occur with COPY -> left-click -> PASTE ---------- assignee: terry.reedy components: IDLE messages: 363370 nosy: Dave Liptack, terry.reedy priority: normal severity: normal status: open title: IDLE: Copy/Paste behaves like Cut/Paste type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 13:38:22 2020 From: report at bugs.python.org (Anne Archibald) Date: Wed, 04 Mar 2020 18:38:22 +0000 Subject: [New-bugs-announce] [issue39853] Segmentation fault with urllib.request.urlopen and threads Message-ID: <1583347102.97.0.749597726143.issue39853@roundup.psfhosted.org> New submission from Anne Archibald : This was discovered in the astropy test suite, where ThreadPoolExecutor is used to concurrently launch a lot of urllib.request.urlopen. This occurs when the URLs are local files; I'm not sure about other URL schemes. The problem appears to occur in python 3.7 but not python 3.8 or python 3.6 (on a different machine). $ python urllib_segfault.py Linux-5.3.0-29-generic-x86_64-with-Ubuntu-19.10-eoan Python 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] Segmentation fault (core dumped) $ python3.8 urllib_segfault.py Linux-5.3.0-29-generic-x86_64-with-glibc2.29 Python 3.8.0 (default, Oct 28 2019, 16:14:01) [GCC 9.2.1 20191008] $ python3 urllib_segfault.py Linux-4.15.0-88-generic-x86_64-with-Ubuntu-18.04-bionic Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] $ The Astropy bug report: https://github.com/astropy/astropy/issues/10008 ---------- components: Library (Lib) files: urllib_segfault.py messages: 363374 nosy: Anne Archibald priority: normal severity: normal status: open title: Segmentation fault with urllib.request.urlopen and threads type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48950/urllib_segfault.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 13:41:51 2020 From: report at bugs.python.org (Aaron Meurer) Date: Wed, 04 Mar 2020 18:41:51 +0000 Subject: [New-bugs-announce] [issue39854] f-strings with format specifiers have wrong col_offset Message-ID: <1583347311.91.0.313226476752.issue39854@roundup.psfhosted.org> New submission from Aaron Meurer : This is tested in CPython master. The issue also occurs in older versions of Python. 
>>> ast.dump(ast.parse('f"{x}"')) "Module(body=[Expr(value=JoinedStr(values=[FormattedValue(value=Name(id='x', ctx=Load()), conversion=-1, format_spec=None)]))], type_ignores=[])" >>> ast.dump(ast.parse('f"{x!r}"')) "Module(body=[Expr(value=JoinedStr(values=[FormattedValue(value=Name(id='x', ctx=Load()), conversion=114, format_spec=None)]))], type_ignores=[])" >>> ast.parse('f"{x}"').body[0].value.values[0].value.col_offset 3 >>> ast.parse('f"{x!r}"').body[0].value.values[0].value.col_offset 1 The col_offset for the variable x should be 3 in both instances. ---------- messages: 363375 nosy: asmeurer priority: normal severity: normal status: open title: f-strings with format specifiers have wrong col_offset versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 14:21:51 2020 From: report at bugs.python.org (Matej Cepl) Date: Wed, 04 Mar 2020 19:21:51 +0000 Subject: [New-bugs-announce] [issue39855] test.test_subprocess.POSIXProcessTestCase.test_user fails in the limited build environment Message-ID: <1583349711.75.0.193414352095.issue39855@roundup.psfhosted.org> New submission from Matej Cepl : When testing Python from Python-3.9.0a3.tar.xz two test cases file in the limited build environment for openSUSE. We have very limited number of users there: stitny:/home/abuild/rpmbuild/BUILD/Python-3.9.0a3 # cat /etc/passwd root:x:0:0:root:/root:/bin/bash abuild:x:399:399:Autobuild:/home/abuild:/bin/bash stitny:/home/abuild/rpmbuild/BUILD/Python-3.9.0a3 # So, tests which expect existence of the user 'nobody' fail: [ 747s] ====================================================================== [ 747s] ERROR: test_user (test.test_subprocess.POSIXProcessTestCase) (user='nobody', close_fds=False) [ 747s] ---------------------------------------------------------------------- [ 747s] Traceback (most recent call last): [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/test/test_subprocess.py", line 1805, in test_user [ 747s] output = subprocess.check_output( [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/subprocess.py", line 419, in check_output [ 747s] return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/subprocess.py", line 510, in run [ 747s] with Popen(*popenargs, **kwargs) as process: [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/subprocess.py", line 929, in __init__ [ 747s] uid = pwd.getpwnam(user).pw_uid [ 747s] KeyError: "getpwnam(): name not found: 'nobody'" [ 747s] [ 747s] ====================================================================== [ 747s] ERROR: test_user (test.test_subprocess.POSIXProcessTestCase) (user='nobody', close_fds=True) [ 747s] ---------------------------------------------------------------------- [ 747s] Traceback (most recent call last): [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/test/test_subprocess.py", line 1805, in test_user [ 747s] output = subprocess.check_output( [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/subprocess.py", line 419, in check_output [ 747s] return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/subprocess.py", line 510, in run [ 747s] with Popen(*popenargs, **kwargs) as process: [ 747s] File "/home/abuild/rpmbuild/BUILD/Python-3.9.0a3/Lib/subprocess.py", line 929, in __init__ [ 747s] uid = 
pwd.getpwnam(user).pw_uid [ 747s] KeyError: "getpwnam(): name not found: 'nobody'" [ 747s] [ 747s] ---------------------------------------------------------------------- [ 747s] I am not sure what is the proper solution here. Whether test should be skipped if nobody doesn?t exist, or the test should switch to user 0, or the current user? ---------- components: Tests messages: 363380 nosy: mcepl, vstinner priority: normal severity: normal status: open title: test.test_subprocess.POSIXProcessTestCase.test_user fails in the limited build environment versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 17:27:00 2020 From: report at bugs.python.org (Je GeVa) Date: Wed, 04 Mar 2020 22:27:00 +0000 Subject: [New-bugs-announce] [issue39856] glob : some 'unix style' glob items are not supported Message-ID: <1583360820.43.0.706736327816.issue39856@roundup.psfhosted.org> New submission from Je GeVa : some common Unix style pathname pattern expansions are not supported : ~/ for $HOME ~user/ for $HOME of user {1,abc,999} for enumeration ex: lets say $ls ~ hello1.a hello2.a helli3.c then : $echo ~/hell*{1,3}.* hello1.a helli3.c while >> glob.glob("~/hel*") [] >> glob.glob("/home/jegeva/hel*") ['hello1.a','hello2.a','helli3.c >> glob.glob("/home/jegeva/hell*{1,3}.*") [] ---------- components: Library (Lib) messages: 363396 nosy: Je GeVa priority: normal severity: normal status: open title: glob : some 'unix style' glob items are not supported versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 4 21:10:18 2020 From: report at bugs.python.org (Mike Frysinger) Date: Thu, 05 Mar 2020 02:10:18 +0000 Subject: [New-bugs-announce] [issue39857] subprocess.run: add an extra_env kwarg to complement existing env kwarg Message-ID: <1583374218.17.0.443069658502.issue39857@roundup.psfhosted.org> New submission from Mike Frysinger : a common idiom i run into is wanting to add/set one or two env vars when running a command via subprocess. the only thing the API allows currently is inherting the current environment, or specifying the complete environment. this means a lot of copying & pasting of the pattern: env = os.environ.copy() env['FOO'] = ... env['BAR'] = ... subprocess.run(..., env=env, ...) it would nice if we could simply express this incremental behavior: subprocess.run(..., extra_env={'FOO': ..., 'BAR': ...}, ...) then the subprocess API would take care of copying & merging. if extra_env: assert env is None env = os.environ.copy() env.update(extra_env) this is akin to subprocess.run's capture_output shortcut. it's unclear to me whether this would be in both subprocess.Popen & subprocess.run, or only subprocess.run. it seems like subprocess.Popen elides convenience APIs. 
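A minimal sketch of the pattern the proposed keyword would replace, written as a user-level helper today (the helper name run_with_extra_env is illustrative, not an existing API):

```python
import os
import subprocess

def run_with_extra_env(args, extra_env=None, **kwargs):
    # Merge a few extra variables over the inherited environment instead of
    # replacing the whole environment, which is what env= alone forces you to do.
    env = os.environ.copy()
    if extra_env:
        env.update(extra_env)
    return subprocess.run(args, env=env, **kwargs)

# Example (POSIX): the child sees FOO in addition to the parent's environment.
run_with_extra_env(['printenv', 'FOO'], extra_env={'FOO': '1'})
```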
---------- components: Library (Lib) messages: 363413 nosy: vapier priority: normal severity: normal status: open title: subprocess.run: add an extra_env kwarg to complement existing env kwarg type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 00:22:36 2020 From: report at bugs.python.org (Sam Price) Date: Thu, 05 Mar 2020 05:22:36 +0000 Subject: [New-bugs-announce] [issue39858] bitfield layout wrong in ctypes Message-ID: <1583385756.4.0.955468761993.issue39858@roundup.psfhosted.org> New submission from Sam Price : if 8 1 byte fields are included in a ctype field, it allows an extra byte to be included in the packing when there is no room left for the next field. If I put the bitfields in a child structure then I get expected results. In [35]: run ctypeSizeTest.py Size is 4 Expected 3 0 0x10000 a0 0 0x10001 a1 0 0x10002 a2 0 0x10003 a3 0 0x10004 a4 0 0x10005 a5 0 0x10006 a6 0 0x10007 a7 0 0x40008 b0 <- Expected to be at offset 1, not 0. 2 0xc0000 b1 <- Expected to be at offset 1, not 2 Size is 3 Expected 3 0 0x1 a 1 0x40000 b0 1 0xc0004 b1 ---------- components: ctypes files: ctypeSizeTest.py messages: 363417 nosy: thesamprice priority: normal severity: normal status: open title: bitfield layout wrong in ctypes type: behavior versions: Python 2.7, Python 3.5, Python 3.6 Added file: https://bugs.python.org/file48954/ctypeSizeTest.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 00:49:49 2020 From: report at bugs.python.org (Andy Lester) Date: Thu, 05 Mar 2020 05:49:49 +0000 Subject: [New-bugs-announce] [issue39859] set_herror should not throw away constness of hstrerror Message-ID: <1583387389.29.0.102450533864.issue39859@roundup.psfhosted.org> New submission from Andy Lester : set_herror builds a string by calling hstrerror but downcasts its return value to char *. It should be const char *. ---------- components: Interpreter Core messages: 363418 nosy: petdance priority: normal severity: normal status: open title: set_herror should not throw away constness of hstrerror _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 04:07:57 2020 From: report at bugs.python.org (Ben Griffin) Date: Thu, 05 Mar 2020 09:07:57 +0000 Subject: [New-bugs-announce] [issue39860] configparser - no support for cascading defaults (as defined by MySQL) Message-ID: <1583399277.82.0.801521222284.issue39860@roundup.psfhosted.org> New submission from Ben Griffin : While there is now support for a single default group, mysql documentation is clear that there is a cascade of groups for option settings, normally starting with [client], and including version numbers.. This allows generic settings to be overridden by specific settings, and it's an important feature when building an architecture around a mysql/mariadb environment. A typical configuration chain may look like this. [client] -> [mysql] -> [mysql-5.6] -> [pymysql] -> [my_custom_app] Currently, the implementation of configparser only allows the programmer to define the default group (typically [client]) and then the group to read from [my_custom_app]. In terms of a proposed approach to the library, I suggest two changes (both backwards compatible). 
(1) Extend the 'default_section' initializer such that it supports both a string (current implementation) and an iterable (ordered from specialised to general). (2) Likewise extend the 'section' parameter of get() such that it supports both a string (current implementation) and an iterable (ordered from specialised to general), as above. Mysql's own docs are as follows. https://dev.mysql.com/doc/refman/8.0/en/option-files.html#option-file-syntax "List more general option groups first and more specific groups later. For example, a [client] group is more general because it is read by all client programs, whereas a [mysqldump] group is read only by mysqldump. Options specified later override options specified earlier, so putting the option groups in the order [client], [mysqldump] enables mysqldump-specific options to override [client] options." ---------- components: Library (Lib) messages: 363421 nosy: Ben Griffin priority: normal severity: normal status: open title: configparser - no support for cascading defaults (as defined by MySQL) versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 04:39:55 2020 From: report at bugs.python.org (Dorian) Date: Thu, 05 Mar 2020 09:39:55 +0000 Subject: [New-bugs-announce] [issue39861] French doc __futur__: Bad URL Message-ID: <1583401195.98.0.339356939479.issue39861@roundup.psfhosted.org> New submission from Dorian : Hello, In the French page: https://docs.python.org/fr/3/library/__future__.html There is a bad URL in the table at the end of the page: division/2.2.0a2/3.0/PEP 328 : Changement de l'opérateur de division It's supposed to be PEP 238, not 328. The URL is https://www.python.org/dev/peps/pep-0328 It's supposed to be https://www.python.org/dev/peps/pep-0238 Should be easy to fix. Keep the good work. Regards, Dorian ---------- assignee: docs at python components: 2to3 (2.x to 3.x conversion tool), Documentation messages: 363422 nosy: Narann, docs at python priority: normal severity: normal status: open title: French doc __futur__: Bad URL _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 05:29:41 2020 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Thu, 05 Mar 2020 10:29:41 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue39862=5D_Why_are_the_union?= =?utf-8?q?_relationships_not_implemented_by_default_for_=E2=89=A4_and_?= =?utf-8?b?4omlPw==?= Message-ID: <1583404181.09.0.848601767593.issue39862@roundup.psfhosted.org> New submission from Géry : Mathematically, the [binary relation](https://en.wikipedia.org/wiki/Binary_relation) ≤ is the [union](https://en.wikipedia.org/wiki/Binary_relation#Union) of the binary relations < and =, while the binary relation ≥ is the union of the binary relations > and =. So is there a reason why Python does not implement `__le__` in terms of `__lt__` and `__eq__` by default, and `__ge__` in terms of `__gt__` and `__eq__` by default?
The default implementation would be like this (but probably in C for performance, like `__ne__`):

```python
def __le__(self, other):
    result_1 = self.__lt__(other)
    result_2 = self.__eq__(other)
    if result_1 is not NotImplemented and result_2 is not NotImplemented:
        return result_1 or result_2
    return NotImplemented

def __ge__(self, other):
    result_1 = self.__gt__(other)
    result_2 = self.__eq__(other)
    if result_1 is not NotImplemented and result_2 is not NotImplemented:
        return result_1 or result_2
    return NotImplemented
```

This would save users from implementing these two methods all the time. Here is the relevant paragraph in the [Python documentation](https://docs.python.org/3/reference/datamodel.html#object.__lt__) (emphasis mine):

> By default, `__ne__()` delegates to `__eq__()` and inverts the result unless it is `NotImplemented`. There are no other implied relationships among the comparison operators, **for example, the truth of `(x<y or x==y)` does not imply `x<=y`**.

Note that ≤ is the complement of > and ≥ is the complement of <. These complementary relationships can be easily implemented by users when they are valid with the [`functools.total_ordering`](https://docs.python.org/3/library/functools.html#functools.total_ordering) class decorator provided by the Python standard library. ---------- components: Interpreter Core messages: 363426 nosy: maggyero priority: normal severity: normal status: open title: Why are the union relationships not implemented by default for ≤ and ≥? type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 07:37:08 2020 From: report at bugs.python.org (Inada Naoki) Date: Thu, 05 Mar 2020 12:37:08 +0000 Subject: [New-bugs-announce] [issue39863] Add trimend option to readline() and readlines() Message-ID: <1583411828.43.0.97175305903.issue39863@roundup.psfhosted.org> New submission from Inada Naoki : str.splitlines() has `keepends` option. Like that, `IOBase.readline([trimend=False])` and `IOBase.readlines([trimends=False])` would be useful. ---------- components: IO messages: 363430 nosy: inada.naoki priority: normal severity: normal status: open title: Add trimend option to readline() and readlines() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 10:06:50 2020 From: report at bugs.python.org (Ningyi Du) Date: Thu, 05 Mar 2020 15:06:50 +0000 Subject: [New-bugs-announce] [issue39864] IndexError gives wrong axis info Message-ID: <1583420810.46.0.335329049767.issue39864@roundup.psfhosted.org> New submission from Ningyi Du : IndexError: index 11 is out of bounds for axis 0 with size 11 The actual error is not with axis 0, but axis 3.
error message:

    168         if iJ>=9:
    169             print(iE,iE0,iEtemp,iJ,li,lf,mlf+lf)
--> 170         SS[iEtemp][iJ][li][lf][mlf+lf]= FF(jf,mf,lf,mlf,ji,mi,li,J)*SJ[iCh][jCh]
    171
    172         sumSJ1 += np.abs(SJ0[iCh][jCh])**2

IndexError: index 11 is out of bounds for axis 0 with size 11 ---------- messages: 363433 nosy: ningyidu priority: normal severity: normal status: open title: IndexError gives wrong axis info _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 14:24:01 2020 From: report at bugs.python.org (pasenor) Date: Thu, 05 Mar 2020 19:24:01 +0000 Subject: [New-bugs-announce] [issue39865] getattr silences an unrelated AttributeError Message-ID: <1583436241.07.0.803311507701.issue39865@roundup.psfhosted.org> New submission from pasenor : If a class has a descriptor and a defined __getattr__ method, and an AttributeError (unrelated to the descriptor lookup) is raised inside the descriptor, it will be silenced:

class A:
    @property
    def myprop(self):
        print("property called")
        a = 1
        a.foo  # <-- AttributeError that should not be silenced

    def __getattr__(self, attr_name):
        print("__getattr__ called")

a = A()
a.myprop

In this example myprop() is called, the error silenced, then __getattr__() is called. This can lead to rather subtle bugs. Probably an explicit AttributeError should be raised instead. ---------- messages: 363449 nosy: pasenor priority: normal severity: normal status: open title: getattr silences an unrelated AttributeError versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 15:14:16 2020 From: report at bugs.python.org (Joey) Date: Thu, 05 Mar 2020 20:14:16 +0000 Subject: [New-bugs-announce] [issue39866] get_type_hints raises inconsistent TypeError Message-ID: <1583439256.25.0.955408580796.issue39866@roundup.psfhosted.org> New submission from Joey : If you pass in an instance of an object without type annotations, you get an error that states "XXX is not a module, class, method, or function." This correctly describes the situation:

>>> typing.get_type_hints(object())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/typing.py", line 1252, in get_type_hints
    raise TypeError('{!r} is not a module, class, method, '
TypeError: <object object at 0x...> is not a module, class, method, or function.

However, if you pass in an instance of a class that _does_ have type annotations...

>>> class Bar:
...     foo: int
>>> typing.get_type_hints(Bar())
{'foo': <class 'int'>}

You don't get an error even though the message of the first exception would suggest you do. The fix should be pretty easy: either have get_type_hints() always return a dictionary, returning an empty dictionary if the type of the object has no __annotations__ defined (my preferred solution), or actually check whether the object is an instance of `_allowed_types` before checking whether the object has annotations.
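A rough sketch of the reporter's preferred behaviour, written as a user-level wrapper (the function name hints_or_empty is illustrative):

```python
import typing

def hints_or_empty(obj):
    # Fall back to the object's class, and treat "no hints available" as an
    # empty mapping instead of raising TypeError.
    target = obj if isinstance(obj, type) or callable(obj) else type(obj)
    try:
        return typing.get_type_hints(target)
    except TypeError:
        return {}

class Bar:
    foo: int

print(hints_or_empty(Bar()))     # {'foo': <class 'int'>}
print(hints_or_empty(object()))  # {}
```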
---------- components: Library (Lib) messages: 363450 nosy: j.tran4418 priority: normal severity: normal status: open title: get_type_hints raises inconsistent TypeError versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 15:15:39 2020 From: report at bugs.python.org (jfbu) Date: Thu, 05 Mar 2020 20:15:39 +0000 Subject: [New-bugs-announce] [issue39867] randrange(N) for N's in same dyadic blocs have excessive correlations when sharing identical seeds Message-ID: <1583439339.66.0.539160874229.issue39867@roundup.psfhosted.org> New submission from jfbu : We generate quadruples of random integers using randrange(n) and randrange(m) and count how many times the quadruples are identical, using the same random seed. Of course for nearby n and m (the real life example was with n==95 and m==97) we do expect matches. But we found orders of magnitude more than was expected. The attached file demonstrates this by comparison with random()*n (with rounding) as alternative method to generate the random integers (we are aware this gives less uniformity for a given range, but these effects are completely negligible in comparison to the effect we test). For the latter the probability of matches is non-vanishing but orders of magnitude smaller than using randrange(n). Here is an excerpt of our testing result. Each trial uses a random seed (selected via randrange(100000000)). Then 4 random integers in two given ranges are generated and compared. A hit is when all 4 match. - with randrange(): n = 99, m = 124, 4135 hits among 10000 trials n = 99, m = 125, 3804 hits among 10000 trials n = 99, m = 126, 3803 hits among 10000 trials n = 99, m = 127, 3892 hits among 10000 trials n = 99, m = 128, 0 hits among 10000 trials n = 99, m = 129, 0 hits among 10000 trials n = 99, m = 130, 0 hits among 10000 trials n = 99, m = 131, 0 hits among 10000 trials - with random(): n = 99, m = 124, 0 hits among 10000 trials n = 99, m = 125, 0 hits among 10000 trials n = 99, m = 126, 0 hits among 10000 trials n = 99, m = 127, 0 hits among 10000 trials n = 99, m = 128, 0 hits among 10000 trials n = 99, m = 129, 0 hits among 10000 trials n = 99, m = 130, 0 hits among 10000 trials n = 99, m = 131, 0 hits among 10000 trials The test file has some hard-coded random seeds for reproductibility. Although I did only limited testing it is flagrant there is completely abnormal correlation between randrange(n) and randrange(m) when the two integers have the same length in base 2. Tested with 3.6 and 3.8. ---------- files: testrandrange.py messages: 363451 nosy: jfbu priority: normal severity: normal status: open title: randrange(N) for N's in same dyadic blocs have excessive correlations when sharing identical seeds versions: Python 3.8 Added file: https://bugs.python.org/file48955/testrandrange.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 16:54:08 2020 From: report at bugs.python.org (Brandt Bucher) Date: Thu, 05 Mar 2020 21:54:08 +0000 Subject: [New-bugs-announce] [issue39868] Stale Python Language Reference docs (no walrus). Message-ID: <1583445248.34.0.0516528795686.issue39868@roundup.psfhosted.org> New submission from Brandt Bucher : It looks like https://docs.python.org/3/reference/expressions.html and https://docs.python.org/3/reference/compound_stmts.html were never updated for named expressions. 
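For reference, a minimal example of the assignment-expression syntax those sections still need to describe (the variable names are illustrative):

```python
# PEP 572: bind a value and use it inside the same expression.
data = [1, 2, 3, 4]
if (n := len(data)) > 3:
    print(f"too long ({n} elements)")
```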
Because this change has to be backported, it's sort of a blocker for my PEP 614 doc updates in issue 39702, which need to use the missing node in 3.9 only (I'd rather have this get a clean backport now than a messy one later)! Is somebody more familiar with PEP 572 willing to take this? Should be pretty straightforward. Pinging Emily since it looks like you've done some grammar/doc work for this in the past. ---------- assignee: docs at python components: Documentation keywords: easy messages: 363456 nosy: brandtbucher, docs at python, emilyemorehouse priority: normal severity: normal status: open title: Stale Python Language Reference docs (no walrus). versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 18:52:49 2020 From: report at bugs.python.org (Mariatta) Date: Thu, 05 Mar 2020 23:52:49 +0000 Subject: [New-bugs-announce] [issue39869] Improve Instance Objects tutorial documentation Message-ID: <1583452369.51.0.496952610919.issue39869@roundup.psfhosted.org> New submission from Mariatta : In https://docs.python.org/3.9/tutorial/classes.html#instance-objects, it says: > There are two kinds of valid attribute names, data attributes and methods. Replace the comma with a colon > There are two kinds of valid attribute names: data attributes and methods. Reported in docs mailing list: https://mail.python.org/archives/list/docs at python.org/thread/BWXLZM4OLWF5XVBI4S2WK3LFUIEBI6M6/ ---------- assignee: docs at python components: Documentation keywords: newcomer friendly messages: 363463 nosy: Mariatta, docs at python priority: normal severity: normal stage: needs patch status: open title: Improve Instance Objects tutorial documentation versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 21:33:47 2020 From: report at bugs.python.org (Andy Lester) Date: Fri, 06 Mar 2020 02:33:47 +0000 Subject: [New-bugs-announce] [issue39870] sys_displayhook_unencodable takes an unnecessary PyThreadState * argument Message-ID: <1583462027.48.0.887856173941.issue39870@roundup.psfhosted.org> New submission from Andy Lester : sys_displayhook_unencodable in Python/sysmodule.c doesn't need its first argument. Remove it. ---------- messages: 363475 nosy: petdance priority: normal severity: normal status: open title: sys_displayhook_unencodable takes an unnecessary PyThreadState * argument _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 5 21:59:01 2020 From: report at bugs.python.org (David Vo) Date: Fri, 06 Mar 2020 02:59:01 +0000 Subject: [New-bugs-announce] [issue39871] math.copysign raises SystemError with non-float x and custom y Message-ID: <1583463541.57.0.123228500554.issue39871@roundup.psfhosted.org> New submission from David Vo : If math.copysign(x, y) is passed an x that cannot be converted to a float and a y that implements __float__() in Python, math.copysign() will raise a SystemError from the TypeError resulting from the attempted float conversion of x. math.copysign() should probably return immediately if converting the first argument to a float raises an error. Example: >>> import math >>> from fractions import Fraction >>> float(Fraction(-1, 1)) # this is needed to avoid an AttributeError? 
-1.0 >>> math.copysign((-1) ** 0.5, Fraction(-1, 1)) TypeError: can't convert complex to float The above exception was the direct cause of the following exception: Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/numbers.py", line 291, in __float__ return self.numerator / self.denominator SystemError: PyEval_EvalFrameEx returned a result with an error set ---------- components: Extension Modules messages: 363477 nosy: auscompgeek priority: normal severity: normal status: open title: math.copysign raises SystemError with non-float x and custom y type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 01:31:52 2020 From: report at bugs.python.org (Andy Lester) Date: Fri, 06 Mar 2020 06:31:52 +0000 Subject: [New-bugs-announce] [issue39872] Remove unused args from four functions in Python/symtable.c Message-ID: <1583476312.99.0.496322079949.issue39872@roundup.psfhosted.org> New submission from Andy Lester : These four functions have unused arguments that can be removed: symtable_exit_block -> void *ast symtable_visit_annotations -> stmt_ty s symtable_exit_block -> void *ast symtable_visit_annotations -> stmt_ty s PR is forthcoming. ---------- components: Interpreter Core messages: 363486 nosy: petdance priority: normal severity: normal status: open title: Remove unused args from four functions in Python/symtable.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 04:24:50 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 06 Mar 2020 09:24:50 +0000 Subject: [New-bugs-announce] [issue39873] Debug mode: check if objects are valid Message-ID: <1583486690.53.0.82859773355.issue39873@roundup.psfhosted.org> New submission from STINNER Victor : One of the worst issue that I had to debug is a crash in the Python garbage collector. It is usually a crash in visit_decref(). See my notes: https://pythondev.readthedocs.io/debug_tools.html#debug-crash-in-garbage-collection-visit-decref My previous attempt to help debugging such issue failed: bpo-36389 "Add gc.enable_object_debugger(): detect corrupted Python objects in the GC". The idea was to check frequently if all objects tracked by the GC are valid. The problem is that even if the check looked trivial, checking all objects made Python way slower. Even when I tried to only check objects of the "young" GC generation (generation 0), it was still too slow. Here I propose a different approach: attempt to only check objects when they are accessed. Recently, I started to replace explicit cast to (PyObject *) type with an indirection: a new _PyObject_CAST() macro which should be the only way to cast any object pointer to (PyObject *). /* Cast argument to PyObject* type. */ #define _PyObject_CAST(op) ((PyObject*)(op)) This macro is used in many "core" macros like Py_TYPE(op), Py_REFCNT(op), Py_SIZE(op), Py_SETREF(op, op2), Py_VISIT(op), etc. The idea here is to inject code in _PyObject_CAST(op) when Python is built in debug mode to ensure that the object is valid. The intent is to detect corrupted objects earlier than a garbage collection, to ease debugging C extensions. The checks should be limited to reduce the performance overhead. Attached PR implemnts this idea. 
---------- components: Interpreter Core messages: 363499 nosy: vstinner priority: normal severity: normal status: open title: Debug mode: check if objects are valid versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 05:11:51 2020 From: report at bugs.python.org (BongK) Date: Fri, 06 Mar 2020 10:11:51 +0000 Subject: [New-bugs-announce] [issue39874] Heappush of Maxheap version does not exist Message-ID: <1583489511.06.0.0647402677335.issue39874@roundup.psfhosted.org> New submission from BongK : heappush of maxheap version does not exist in heapq.py ---------- messages: 363501 nosy: vbnmzx1 priority: normal severity: normal status: open title: Heappush of Maxheap version does not exist _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 06:45:28 2020 From: report at bugs.python.org (henrik242) Date: Fri, 06 Mar 2020 11:45:28 +0000 Subject: [New-bugs-announce] [issue39875] urllib.request.urlopen sends POST data as query string Message-ID: <1583495128.12.0.395255556414.issue39875@roundup.psfhosted.org> New submission from henrik242 : curl correctly posts data to Solr: $ curl -v 'http://solr.example.no:12699/solr/my_coll/update?commit=true' \ --data 'KEY__9927.1\ {"result":0,"jobId":"9459695","jobNumber":"9927.1"}' The solr query log says: [20200306T111354,131] [my_coll_shard1_replica_n85] webapp=/solr path=/update params={commit=true} status=0 QTime=96 I'm trying to do the same thing with Python: >>> import urllib.request >>> data='KEY__9927.1{"result":0,"jobId":"9459695","jobNumber":"9927.1"}' >>> url='http://solr.example.no:12699/solr/my_coll/update?commit=true' >>> req = urllib.request.Request(url=url, data=data.encode('utf-8'), method='POST') >>> res = urllib.request.urlopen(req) But now the solr query log shows that the POST data has been added to the query param string: [20200306T112358,780] [my_coll_shard1_replica_n87] webapp=/solr path=/update params={commit=true&KEY__9927.1{"result":0,"jobId":"9459695","jobNumber":"9927.1"}} status=0 QTime=30 What is happening here? $ python3 -VV Python 3.7.6 (default, Dec 30 2019, 19:38:26) [Clang 11.0.0 (clang-1100.0.33.16)] ---------- components: Library (Lib) messages: 363502 nosy: henrik242 priority: normal severity: normal status: open title: urllib.request.urlopen sends POST data as query string type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 08:41:31 2020 From: report at bugs.python.org (Andreas Spar) Date: Fri, 06 Mar 2020 13:41:31 +0000 Subject: [New-bugs-announce] [issue39876] csv.DictReader.fieldnames interprets unicode as ascii Message-ID: <1583502091.92.0.330932178287.issue39876@roundup.psfhosted.org> New submission from Andreas Spar :

with open(filename, "rt") as csvfile:
    csv_reader = csv.DictReader(csvfile, delimiter=csv_delimiter)
    fieldnames = csv_reader.fieldnames

In Python 3.8 csv expects UTF-8 encoded files but apparently doesn't read the header as UTF-8. If the csv file has a header named 'Französisch' it will be saved as 'Franz??sisch'.
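For what it's worth, a small check, assuming the file really is UTF-8-encoded (the filename and delimiter below are made up): passing the encoding to open() explicitly avoids depending on the platform default, which on Windows is often cp1252 rather than UTF-8.

```python
import csv

with open('example.csv', 'rt', encoding='utf-8', newline='') as csvfile:
    reader = csv.DictReader(csvfile, delimiter=';')
    print(reader.fieldnames)  # expected to show 'Französisch' intact
```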
---------- components: Library (Lib) messages: 363506 nosy: sparan priority: normal severity: normal status: open title: csv.DictReader.fieldnames interprets unicode as ascii type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 09:22:28 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 06 Mar 2020 14:22:28 +0000 Subject: [New-bugs-announce] [issue39877] Daemon thread is crashing in PyEval_RestoreThread() while the main thread is exiting the process Message-ID: <1583504548.68.0.765659096989.issue39877@roundup.psfhosted.org> New submission from STINNER Victor : Sometimes, test_multiprocessing_spawn does crash in PyEval_RestoreThread() on FreeBSD with a coredump. This issue should be the root cause of bpo-39088: "test_concurrent_futures crashed with python.core core dump on AMD64 FreeBSD Shared 3.x", where the second comment is a test_multiprocessing_spawn failure with "... After: ['python.core'] ..." # Thread 1 (gdb) frame #0 0x00000000003b518c in PyEval_RestoreThread (tstate=0x801f23790) at Python/ceval.c:387 387 _PyRuntimeState *runtime = tstate->interp->runtime; (gdb) p tstate->interp $3 = (PyInterpreterState *) 0xdddddddddddddddd (gdb) info threads Id Target Id Frame * 1 LWP 100839 0x00000000003b518c in PyEval_RestoreThread (tstate=0x801f23790) at Python/ceval.c:387 2 LWP 100230 0x00000008006fbcfc in _fini () from /lib/libm.so.5 3 LWP 100192 _accept4 () at _accept4.S:3 # Thread 2 (gdb) thread 2 [Switching to thread 2 (LWP 100230)] #0 0x00000008006fbcfc in _fini () from /lib/libm.so.5 (gdb) where (...) #4 0x0000000800859431 in exit (status=0) at /usr/src/lib/libc/stdlib/exit.c:74 #5 0x000000000048f3d8 in Py_Exit (sts=0) at Python/pylifecycle.c:2349 (...) The problem is that Python already freed the memory of all PyThreadState structures, whereas PyEval_RestoreThread(tstate) dereferences tstate to get the _PyRuntimeState structure: void PyEval_RestoreThread(PyThreadState *tstate) { assert(tstate != NULL); _PyRuntimeState *runtime = tstate->interp->runtime; // <==== HERE === struct _ceval_runtime_state *ceval = &runtime->ceval; assert(gil_created(&ceval->gil)); int err = errno; take_gil(ceval, tstate); exit_thread_if_finalizing(tstate); errno = err; _PyThreadState_Swap(&runtime->gilstate, tstate); } I modified PyEval_RestoreThread(tstate) to get runtime from tstate: commit 01b1cc12e7c6a3d6a3d27ba7c731687d57aae92a. Extract of the change: diff --git a/Python/ceval.c b/Python/ceval.c index 9f4b43615e..ee13fd1ad7 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -384,7 +386,7 @@ PyEval_SaveThread(void) void PyEval_RestoreThread(PyThreadState *tstate) { - _PyRuntimeState *runtime = &_PyRuntime; + _PyRuntimeState *runtime = tstate->interp->runtime; struct _ceval_runtime_state *ceval = &runtime->ceval; if (tstate == NULL) { @@ -394,7 +396,7 @@ PyEval_RestoreThread(PyThreadState *tstate) int err = errno; take_gil(ceval, tstate); - exit_thread_if_finalizing(runtime, tstate); + exit_thread_if_finalizing(tstate); errno = err; _PyThreadState_Swap(&runtime->gilstate, tstate); By the way, exit_thread_if_finalizing(tstate) also get runtime from state. Before 01b1cc12e7c6a3d6a3d27ba7c731687d57aae92a, it was possible to call PyEval_RestoreThread() from a daemon thread even if tstate was a dangling pointer, since tstate wasn't dereferenced: _PyRuntime variable was accessed directly. 
-- One simple fix is to access directly _PyRuntime in PyEval_RestoreThread() with a comment explaining why runtime is not get from tstate. I'm concerned by the fact that only FreeBSD buildbot spotted the crash. test_multiprocessing_spawn seems to silently ignore crashes. The bug was only spotted because Python produced a coredump in the current directory. My Fedora 31 doesn't write coredump files in the current files, and so the issue is silently ignored even when using --fail-env-changed. IMHO the most reliable solution is to drop support for daemon threads: they are dangerous by design. But that would be an incompatible change. Maybe we should at least deprecate daemon threads. Python 3.9 now denies spawning a daemon thread in a Python subinterpreter: bpo-37266. ---------- components: Interpreter Core messages: 363512 nosy: eric.snow, nanjekyejoannah, ncoghlan, pablogsal, vstinner priority: normal severity: normal status: open title: Daemon thread is crashing in PyEval_RestoreThread() while the main thread is exiting the process versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 12:06:48 2020 From: report at bugs.python.org (Andy Lester) Date: Fri, 06 Mar 2020 17:06:48 +0000 Subject: [New-bugs-announce] [issue39878] Remove unused args in Python/formatter_unicode.c Message-ID: <1583514408.07.0.994530274834.issue39878@roundup.psfhosted.org> New submission from Andy Lester : The following functions have unused args: calc_number_widths -> PyObject *number fill_number -> Py_ssize_t d_end ---------- components: Interpreter Core messages: 363525 nosy: petdance priority: normal severity: normal status: open title: Remove unused args in Python/formatter_unicode.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 13:33:48 2020 From: report at bugs.python.org (Eric Snow) Date: Fri, 06 Mar 2020 18:33:48 +0000 Subject: [New-bugs-announce] [issue39879] Update language reference to specify that dict is insertion-ordered. Message-ID: <1583519628.32.0.541192808872.issue39879@roundup.psfhosted.org> New submission from Eric Snow : As of 3.7 [1], dict is guaranteed to preserve insertion order: the insertion-order preservation nature of dict objects has been declared to be an official part of the Python language spec. However, at least one key part of the language reference [2] was not updated to reflect this: "3.2. The standard type hierarchy" > "Mappings" > "Dictionaries". Note that the library docs [3] *were* updated. [1] https://docs.python.org/3/whatsnew/3.7.html#summary-release-highlights [2] https://docs.python.org/3/reference/datamodel.html#index-30 [3] https://docs.python.org/3/library/stdtypes.html#typesmapping ---------- assignee: docs at python components: Documentation keywords: easy messages: 363533 nosy: docs at python, eric.snow priority: normal severity: normal stage: needs patch status: open title: Update language reference to specify that dict is insertion-ordered. 
versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 16:14:18 2020 From: report at bugs.python.org (Clayton Bingham) Date: Fri, 06 Mar 2020 21:14:18 +0000 Subject: [New-bugs-announce] [issue39880] string.lstrip() with leading '3's Message-ID: <1583529258.07.0.323990026349.issue39880@roundup.psfhosted.org> New submission from Clayton Bingham : Code to reproduce the behavior: ``` string = 'h.pt3dadd(3333.994527806812,7310.741605031661,-152.492,0.2815384615384615,sec=sectionList[1396])\n' print(string.lstrip('h.pt3dadd(').split(',')) ``` The lstrip method removed 'h.pt3dadd(' but also removes the 3's before the first decimal in the remaining string. ---------- messages: 363555 nosy: Clayton Bingham priority: normal severity: normal status: open title: string.lstrip() with leading '3's versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 16:50:45 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Fri, 06 Mar 2020 21:50:45 +0000 Subject: [New-bugs-announce] [issue39881] Multiple Interpreters in the Stdlib (PEP 554) - High-level Implementation Message-ID: <1583531445.05.0.495452199299.issue39881@roundup.psfhosted.org> New submission from Joannah Nanjekye : This is to track the high-level implementation of PEP 554. Please see the PEP here: https://www.python.org/dev/peps/pep-0554/ *** Note: PEP not accepted yet. ---------- assignee: nanjekyejoannah components: Interpreter Core messages: 363561 nosy: eric.snow, nanjekyejoannah, ncoghlan, vstinner priority: normal severity: normal status: open title: Multiple Interpreters in the Stdlib (PEP 554) - High-level Implementation versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 17:54:10 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 06 Mar 2020 22:54:10 +0000 Subject: [New-bugs-announce] [issue39882] Py_FatalError(): log automatically the function name Message-ID: <1583535250.38.0.696176637813.issue39882@roundup.psfhosted.org> New submission from STINNER Victor : Attached PR modify Py_FatalError() to log automatically the function name. ---------- components: C API messages: 363565 nosy: vstinner priority: normal severity: normal status: open title: Py_FatalError(): log automatically the function name versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 22:50:57 2020 From: report at bugs.python.org (Todd Jennings) Date: Sat, 07 Mar 2020 03:50:57 +0000 Subject: [New-bugs-announce] [issue39883] Use BSD0 license for code in docs Message-ID: <1583553057.27.0.724137569708.issue39883@roundup.psfhosted.org> New submission from Todd Jennings : Currently using code examples and recipes from the documentation is complicated by the fact that they are all under the Python 2.0 license. Putting them under a more permissive license, particular the BSD0 license, would make them much easier to use in other projects. 
---------- assignee: docs at python components: Documentation messages: 363573 nosy: docs at python, toddrjen priority: normal pull_requests: 18180 severity: normal status: open title: Use BSD0 license for code in docs type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 6 23:11:34 2020 From: report at bugs.python.org (Enji Cooper) Date: Sat, 07 Mar 2020 04:11:34 +0000 Subject: [New-bugs-announce] [issue39884] "SystemError: bad call flags" exceptions added as part of BPO-33012 are difficult to debug Message-ID: <1583554294.0.0.456473744337.issue39884@roundup.psfhosted.org> New submission from Enji Cooper : When a body of C extensions needs to be ported from python <3.8 to 3.8, one of the issues one might run into is improperly defined methods in a C extension, which results in SystemErrors stating: >>> SystemError: bad call flags This new behavior was added as part of Issue # 33012. While the issues definitely need to be resolved in the C extensions, where to start is not completely clear. I had to put `printfs` in PyCFunction_NewEx and PyDescr_NewMethod to track down the issues, e.g., >>> printf("method name: %s\n", method->ml_name); While this might be misleading for duplicate method definitions, it definitely helps narrow down the offending code. Adding the method name to the SystemError would be a big step in the right direction in terms of making it easier to resolve these issues. PS I realize that this might be masked by casting PyCFunction on methods or by not using gcc 8+, but I'd argue that C extensions need to have developer issues like this be clearer to the end-reader. ---------- components: Extension Modules messages: 363575 nosy: ngie priority: normal severity: normal status: open title: "SystemError: bad call flags" exceptions added as part of BPO-33012 are difficult to debug type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 00:08:09 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Sat, 07 Mar 2020 05:08:09 +0000 Subject: [New-bugs-announce] [issue39885] IDLE right click should unselect Message-ID: <1583557689.84.0.838761374918.issue39885@roundup.psfhosted.org> New submission from Terry J. Reedy : In text editors, right click commonly brings up a context menu, often including Cut, Copy, Paste, and Delete, with inapplicable entries grayed out. There are at least 2 'standard' behaviors (at least on Windows) with respect to the cursor and selections. 0 (Examples: Windows Notepad and Firefox entry box). The cursor stays where it is and a selection stays selected, even if one has scrolled the cursor and possible selection off the screen with mousewheel or scrollbar. Paste inserts at the possibly hidden cursor, deleting any possibly hidden selection. The view jumps back to the cursor after Paste but not after Copy and Close (Esc). 1 (Examples: Windows Notepad++ and Libre Office). The cursor jumps to the spot of the click, the same as with a left click. Any selection, possibly not visible, is unselected, the same as with a left click. Exception: a right click within an exception leaves the selection and cursor (at one of the ends) alone. IDLE follows a bit of each pattern and neither. Right click always moves the cursor, even within a selection, but never clears a selection. I believe that this is an accident of history. 
Originally, context menus only had 'Go to file/line' in Shell and grep output and 'Set/Clear Breakpoint' in editors. There was no reason to touch a selection even if the cursor was moved. Cut/Copy/Paste were added in 2012 because they are standard. To really be standard, right click should consistently act like left click and unselect. This prevents accidental deletion. This would also be consistent with making 'Go to line' act the same. Also, right click within a selection should not move the cursor. ---------- assignee: terry.reedy components: IDLE messages: 363577 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE right click should unselect type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 01:19:50 2020 From: report at bugs.python.org (Andy Lester) Date: Sat, 07 Mar 2020 06:19:50 +0000 Subject: [New-bugs-announce] [issue39886] Remove unused arg in config_get_stdio_errors in Python/initconfig.c Message-ID: <1583561990.84.0.843551382535.issue39886@roundup.psfhosted.org> New submission from Andy Lester : config_get_stdio_errors(const PyConfig *config) does not use its arg. Delete it. ---------- components: Interpreter Core messages: 363582 nosy: petdance priority: normal severity: normal status: open title: Remove unused arg in config_get_stdio_errors in Python/initconfig.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 03:49:00 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 07 Mar 2020 08:49:00 +0000 Subject: [New-bugs-announce] [issue39887] Duplicate C object description of vectorcallfunc Message-ID: <1583570940.02.0.236043511606.issue39887@roundup.psfhosted.org> New submission from Serhiy Storchaka : $ make html ... 
Warning, treated as error: /home/serhiy/py/cpython/Doc/c-api/call.rst:71:duplicate C object description of vectorcallfunc, other instance in /home/serhiy/py/cpython/Doc/c-api/typeobj.rst ---------- assignee: docs at python components: Documentation messages: 363584 nosy: docs at python, jdemeyer, petr.viktorin, serhiy.storchaka priority: high severity: normal status: open title: Duplicate C object description of vectorcallfunc type: compile error versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 09:32:48 2020 From: report at bugs.python.org (Mageshkumar) Date: Sat, 07 Mar 2020 14:32:48 +0000 Subject: [New-bugs-announce] [issue39888] modules not install Message-ID: <1583591568.44.0.104219514979.issue39888@roundup.psfhosted.org> New submission from Mageshkumar : pls kindly rectify it ---------- components: Windows files: modules install issues.txt messages: 363595 nosy: magesh, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: modules not install type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file48960/modules install issues.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 09:41:32 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 07 Mar 2020 14:41:32 +0000 Subject: [New-bugs-announce] [issue39889] Fix ast.unparse() for subscription by extended slices and tuples Message-ID: <1583592092.92.0.938156350005.issue39889@roundup.psfhosted.org> New submission from Serhiy Storchaka : ast.unparse() produces incorrect output for ExtSlice containing a single element: >>> print(ast.unparse(ast.parse('a[i:j,]'))) a[i:j] It also produces redundant parenthesis for Index containing Tuple: >>> print(ast.unparse(ast.parse('a[i, j]'))) a[(i, j)] ---------- components: Demos and Tools, Library (Lib) messages: 363596 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Fix ast.unparse() for subscription by extended slices and tuples type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 15:35:12 2020 From: report at bugs.python.org (Brandt Bucher) Date: Sat, 07 Mar 2020 20:35:12 +0000 Subject: [New-bugs-announce] [issue39890] The AST is mangled when compiling starred assignments. Message-ID: <1583613312.45.0.490677035056.issue39890@roundup.psfhosted.org> New submission from Brandt Bucher : It looks like assignment_helper is the only place where we actually change the semantic meaning of the AST during compilation (a starred name is changed to a regular name as a shortcut). This probably isn't a great idea, and it would bite us later if we started making multiple passes or reusing the AST or something. ---------- assignee: brandtbucher components: Interpreter Core messages: 363616 nosy: brandtbucher priority: normal severity: normal status: open title: The AST is mangled when compiling starred assignments. 
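A quick sketch of how to observe whether compilation touches a parsed tree (on versions affected by this issue the two dumps can differ for starred targets):

```python
import ast

tree = ast.parse("*head, tail = items")
before = ast.dump(tree)
compile(tree, "<demo>", "exec")  # compile straight from the AST object
after = ast.dump(tree)
print(before == after)  # False if the compiler rewrote the Starred target in place
```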
versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 18:27:55 2020 From: report at bugs.python.org (brian.gallagher) Date: Sat, 07 Mar 2020 23:27:55 +0000 Subject: [New-bugs-announce] [issue39891] [difflib] Improve get_close_matches() to better match when casing of words are different Message-ID: <1583623675.84.0.678348320324.issue39891@roundup.psfhosted.org> New submission from brian.gallagher : Currently difflib's get_close_matches() doesn't match similar words that differ in their casing very well. Example: user at host:~$ python3 Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import difflib >>> difflib.get_close_matches("apple", "APPLE") [] >>> difflib.get_close_matches("apple", "APpLe") [] >>> These seem like they should be considered close matches for each other, given the SequenceMatcher used in difflib.py attempts to produce a "human-friendly diff" of two words in order to yield "intuitive difference reports". One solution would be for the user of the function to perform their own transformation of the supplied data, such as converting all strings to lower-case for example. However, it seems like this might be a surprise to a user of the function if they weren't aware of this limitation. It would be preferable to provide this functionality by default in my eyes. If this is an issue the relevant maintainer(s) consider worth pursuing, I'd love to try my hand at preparing a patch for this. ---------- messages: 363618 nosy: brian.gallagher priority: normal severity: normal status: open title: [difflib] Improve get_close_matches() to better match when casing of words are different versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 19:53:52 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Sun, 08 Mar 2020 00:53:52 +0000 Subject: [New-bugs-announce] [issue39892] Enable DeprecationWarnings by default when not explicit in unittest.main() Message-ID: <1583628832.97.0.582883450111.issue39892@roundup.psfhosted.org> New submission from Gregory P. Smith : Recurring theme: The stdlib has the issue of DeprecationWarning being added to APIs we are changing or removing a few versions in the future yet we perceive that many people never actually bother to try checking their code for deprecation warnings. Only raising issues with a documented, warned, planned in advance API change when it actually happens. Could we reduce the chances of this by enabling DeprecationWarnings by default for processes started via unittest.main() and other common unittest entrypoints? (other test frameworks like pytest should also consider this if they don't already; do we have any existing external implementations of this for inspiration?) One issue with this is that some important warnings are at _parse_ time or _import_ time. But we can deal with import time ones if we are able to have the unittest entrypoint re-exec the process with the same args but with warnings enabled. (and _could_ surface parse time ones if we're willing to accept slower process startup by disabling use of pycs; i wouldn't go that far) Related work: https://www.python.org/dev/peps/pep-0565/ has this idea already been discussed? I don't remember and haven't searched backwards... 
---------- components: Library (Lib), Tests messages: 363620 nosy: gregory.p.smith, ncoghlan, vstinner priority: normal severity: normal stage: needs patch status: open title: Enable DeprecationWarnings by default when not explicit in unittest.main() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 20:45:28 2020 From: report at bugs.python.org (wyz23x2) Date: Sun, 08 Mar 2020 01:45:28 +0000 Subject: [New-bugs-announce] [issue39893] Add set_terminate() to logging Message-ID: <1583631928.08.0.636438153182.issue39893@roundup.psfhosted.org> New submission from wyz23x2 : Sometimes we want to remove the ending \n, and sometimes replace it with something else, like print() allows. But logging doesn't support that. I'd want a set_terminate() (or set_end()) function that does that. I think it's easy. Just insert this at line 1119 of logging/__init__.py in 3.8.2: def set_terminator(string='\n'): StreamHandler.terminator = string Thanks! ---------- components: Library (Lib) messages: 363622 nosy: wyz23x2 priority: normal severity: normal status: open title: Add set_terminate() to logging type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 22:44:49 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 08 Mar 2020 03:44:49 +0000 Subject: [New-bugs-announce] [issue39894] `pathlib.Path.samefile()` calls `os.stat()` without using accessor Message-ID: <1583639089.94.0.893422780833.issue39894@roundup.psfhosted.org> New submission from Barney Gale : `Path.samefile()` calls `os.stat()` directly. It should use the path's accessor object, as `Path.stat()` does.
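For illustration, a rough sketch of the suggested routing; Path._accessor is private pathlib plumbing, so treat this as an assumption about the internals rather than the actual patch:
import os
import pathlib

def samefile_via_accessor(p, other):
    # mirror Path.stat(): go through p's accessor instead of calling os.stat() directly
    st = p.stat()
    try:
        other_st = other.stat()                 # other is also a Path
    except AttributeError:
        other_st = p._accessor.stat(other)      # other is a plain str/PathLike
    return os.path.samestat(st, other_st)

print(samefile_via_accessor(pathlib.Path("/tmp"), "/tmp"))   # True on a typical POSIX box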
---------- components: Library (Lib) messages: 363629 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.samefile()` calls `os.stat()` without using accessor versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 22:53:09 2020 From: report at bugs.python.org (Andy Lester) Date: Sun, 08 Mar 2020 03:53:09 +0000 Subject: [New-bugs-announce] [issue39896] Const args and remove unused args in Python/compile.c Message-ID: <1583639589.56.0.292367890565.issue39896@roundup.psfhosted.org> New submission from Andy Lester : Remove unused args from: * binop * compiler_next_instr * inplace_binop Const arguments for: * assemble_jump_offsets * blocksize * check_caller * check_compare * check_index * check_is_arg * check_subscripter * compiler_error * compiler_new_block * compiler_pop_fblock * compiler_push_fblock * compiler_warn * compute_code_flags * dfs * find_ann * get_ref_type * merge_const_tuple * stackdepth ---------- components: Interpreter Core messages: 363632 nosy: petdance priority: normal severity: normal status: open title: Const args and remove unused args in Python/compile.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 22:52:39 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 08 Mar 2020 03:52:39 +0000 Subject: [New-bugs-announce] [issue39895] `pathlib.Path.touch()` calls `os.close()` without using accessor Message-ID: <1583639559.65.0.730763804779.issue39895@roundup.psfhosted.org> New submission from Barney Gale : `Path.touch()` does a lot of os-specific /stuff/ that should probably live in the accessor. Perhaps most importantly, is calls `os.close()` on whatever `accessor.open()` returns, which is problematic for those wishing to write their own accessor that doesn't work on a file descriptor level. ---------- components: Library (Lib) messages: 363631 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.touch()` calls `os.close()` without using accessor versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 23:11:08 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 08 Mar 2020 04:11:08 +0000 Subject: [New-bugs-announce] [issue39897] `pathlib.Path.is_mount()` calls `Path(self.parent)` and therefore misbehaves in `Path` subclasses Message-ID: <1583640668.8.0.328631342101.issue39897@roundup.psfhosted.org> New submission from Barney Gale : `pathlib.Path.is_mount()` calls `Path(self.parent)`, which: - Is needless, as `self.parent` is already a Path instance! 
- Prevents effective subclassing, as `self.parent` may be a `Path` subclass with its own `stat()` implementation ---------- components: Library (Lib) messages: 363633 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.is_mount()` calls `Path(self.parent)` and therefore misbehaves in `Path` subclasses versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 7 23:14:26 2020 From: report at bugs.python.org (Andy Lester) Date: Sun, 08 Mar 2020 04:14:26 +0000 Subject: [New-bugs-announce] [issue39898] Remove unused arg from append_formattedvalue in Python/ast_unparse.c Message-ID: <1583640866.93.0.894372784953.issue39898@roundup.psfhosted.org> New submission from Andy Lester : append_formattedvalue() has an unused bool is_format_spec. ---------- components: Interpreter Core messages: 363634 nosy: petdance priority: normal severity: normal status: open title: Remove unused arg from append_formattedvalue in Python/ast_unparse.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 00:06:12 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 08 Mar 2020 05:06:12 +0000 Subject: [New-bugs-announce] [issue39899] `pathlib.Path.expanduser()` does not call `os.path.expanduser()` Message-ID: <1583643972.77.0.25342179601.issue39899@roundup.psfhosted.org> New submission from Barney Gale : `pathlib.Path.expanduser()` does not call `os.path.expanduser()`, but instead re-implements it. The implementations look pretty similar and I can't see a good reason for the duplication. The only difference is that `pathlib.Path.expanduser()` raises `RuntimeError` when a home directory cannot be resolved, whereas `os.path.expanduser()` returns the path unchanged. ---------- components: Library (Lib) messages: 363635 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.expanduser()` does not call `os.path.expanduser()` versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 01:10:31 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 08 Mar 2020 06:10:31 +0000 Subject: [New-bugs-announce] [issue39900] `pathlib.Path.__bytes__()` calls `os.fsencode()` without using accessor Message-ID: <1583647831.98.0.39095900929.issue39900@roundup.psfhosted.org> New submission from Barney Gale : `pathlib.Path.__bytes__()` calls `os.fsencode()` without using path's accessor. To properly isolate Path objects from the underlying local filesystem, this should be routed via the accessor object. ---------- messages: 363638 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.__bytes__()` calls `os.fsencode()` without using accessor _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 05:26:58 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 08 Mar 2020 09:26:58 +0000 Subject: [New-bugs-announce] [issue39901] `pathlib.Path.owner()` and `group()` use `pwd` and `grp` modules directly Message-ID: <1583659618.45.0.720570374465.issue39901@roundup.psfhosted.org> New submission from Barney Gale : The implementations of `Path.owner()` and `Path.group()` directly import and use the `pwd` and `grp` modules. 
Given these modules provide information about the *local* system, I believe these implementations should instead live in `pathlib._NormalAccessor` for consistency with other methods that do "impure" things. ---------- components: Library (Lib) messages: 363643 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.owner()` and `group()` use `pwd` and `grp` modules directly versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 08:15:59 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sun, 08 Mar 2020 12:15:59 +0000 Subject: [New-bugs-announce] [issue39902] dis.Bytecode objects should be comparable Message-ID: <1583669759.63.0.311622974712.issue39902@roundup.psfhosted.org> New submission from Batuhan Taskaya : >>> import dis >>> dis.Bytecode("print(1)") == dis.Bytecode("print(1)") False ---------- components: Library (Lib) messages: 363656 nosy: BTaskaya priority: normal severity: normal status: open title: dis.Bytecode objects should be comparable type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 08:24:04 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 08 Mar 2020 12:24:04 +0000 Subject: [New-bugs-announce] [issue39903] Double decref in _elementtree.Element.__getstate__ Message-ID: <1583670244.74.0.393826922537.issue39903@roundup.psfhosted.org> New submission from Serhiy Storchaka : There is very strange code in _elementtree.Element.__getstate__ which decrements references to elements of a list before decrementing a reference to the list itself. It happens only if creating a dict fails, so it is almost impossible to reproduce, but if it happens it will likely cause a crash. The proposed PR fixes the bug and also simplifies the code. ---------- components: Extension Modules, XML messages: 363657 nosy: eli.bendersky, scoder, serhiy.storchaka priority: normal severity: normal status: open title: Double decref in _elementtree.Element.__getstate__ type: crash versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 10:43:01 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 08 Mar 2020 14:43:01 +0000 Subject: [New-bugs-announce] [issue39904] Move handling of one-argument call of type() from type.__new__() to type.__call__() Message-ID: <1583678581.12.0.621267485575.issue39904@roundup.psfhosted.org> New submission from Serhiy Storchaka : The builtin type() serves two functions: 1. When called with a single positional argument it returns the type of the argument. >>> type(1) <class 'int'> 2. Otherwise it acts like any class when called -- it creates an instance of this class (a type). This includes calling the corresponding __new__ and __init__ methods. >>> type('A', (str,), {'foo': lambda self: len(self)}) <class '__main__.A'> type is a class, and it can be subclassed. Subclasses of type serve only the latter function (the former was forbidden in issue27157). >>> class T(type): pass ... >>> T(1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: type.__new__() takes exactly 3 arguments (1 given) >>> T('A', (str,), {'foo': lambda self: len(self)}) <class '__main__.A'> But surprisingly you can use the __new__ method for getting the type of the object. >>> type.__new__(type, 1) <class 'int'> >>> T.__new__(T, 1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: type.__new__() takes exactly 3 arguments (1 given) The proposed PR moves handling of the special case of one-argument type() from type.__new__ to type.__call__. It does not fix any real bug, it does not add a significant performance boost, it does not remove a lot of code, it just makes the code slightly more straightforward. It changes the behavior of type.__new__(type, obj), which is very unlikely to be called directly in real code. >>> type(1) <class 'int'> >>> type.__new__(type, 1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: type.__new__() takes exactly 3 arguments (1 given) ---------- components: Interpreter Core messages: 363664 nosy: gvanrossum, serhiy.storchaka priority: normal severity: normal status: open title: Move handling of one-argument call of type() from type.__new__() to type.__call__() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 16:01:08 2020 From: report at bugs.python.org (Yon Ar Chall) Date: Sun, 08 Mar 2020 20:01:08 +0000 Subject: [New-bugs-announce] [issue39905] Cannot load sub package having a 3-dot relative import Message-ID: <1583697668.13.0.326902035293.issue39905@roundup.psfhosted.org> New submission from Yon Ar Chall : Hi there! I was trying to use importlib.util.spec_from_file_location() to import a nested package containing a 3-dot relative import. $ tree ~/myproj
/home/yon/myproj
└── mypkg
    ├── __init__.py
    └── subpkg
        ├── __init__.py
        ├── subsubpkg_abs
        │   └── __init__.py
        └── subsubpkg_rel
            └── __init__.py
Relative import here : $ cat ~/myproj/mypkg/subpkg/subsubpkg_rel/__init__.py from ... import subpkg Absolute import here (for comparison purposes) : $ cat ~/myproj/mypkg/subpkg/subsubpkg_abs/__init__.py import mypkg.subpkg $ python3 Python 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from importlib.util import spec_from_file_location >>> spec = spec_from_file_location("subsubpkg_abs", "/home/yon/myproj/mypkg/subpkg/subsubpkg_abs/__init__.py") >>> spec.loader.load_module() <module 'subsubpkg_abs' from '/home/yon/myproj/mypkg/subpkg/subsubpkg_abs/__init__.py'> >>> spec = spec_from_file_location("subsubpkg_rel", "/home/yon/myproj/mypkg/subpkg/subsubpkg_rel/__init__.py") >>> spec.loader.load_module() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "", line 388, in _check_name_wrapper File "", line 809, in load_module File "", line 668, in load_module File "", line 268, in _load_module_shim File "", line 693, in _load File "", line 673, in _load_unlocked File "", line 665, in exec_module File "", line 222, in _call_with_frames_removed File "/home/yon/myproj/mypkg/subpkg/subsubpkg_rel/__init__.py", line 1, in <module> from ...
import subpkg ValueError: attempted relative import beyond top-level package >>> Raw import just works (as importlib.import_module() does) : >>> import mypkg.subpkg.subsubpkg_rel >>> ---------- components: Library (Lib) messages: 363678 nosy: yon priority: normal severity: normal status: open title: Cannot load sub package having a 3-dot relative import versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 16:35:19 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 08 Mar 2020 20:35:19 +0000 Subject: [New-bugs-announce] [issue39906] pathlib.Path: add `follow_symlinks` argument to `stat()` and `chmod()` Message-ID: <1583699719.8.0.302577003742.issue39906@roundup.psfhosted.org> New submission from Barney Gale : As of Python 3.3, `os.lstat()` and `os.lchmod()` are available as `os.stat(follow_symlinks=False)` and `os.chmod(follow_symlinks=False)`, but an equivalent change didn't make it into pathlib. I propose we add the `follow_symlinks` argument to `Path.stat()` and `Path.chmod()` for consistency with `os` and because the new notation is arguable clearer for people who don't know many system calls off by heart :) ---------- components: Library (Lib) messages: 363681 nosy: barneygale priority: normal severity: normal status: open title: pathlib.Path: add `follow_symlinks` argument to `stat()` and `chmod()` type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 20:17:57 2020 From: report at bugs.python.org (Barney Gale) Date: Mon, 09 Mar 2020 00:17:57 +0000 Subject: [New-bugs-announce] [issue39907] `pathlib.Path.iterdir()` wastes memory by using `os.listdir()` rather than `os.scandir()` Message-ID: <1583713077.79.0.39541814988.issue39907@roundup.psfhosted.org> New submission from Barney Gale : `pathlib.Path.iterdir()` uses `os.listdir()` rather than `os.scandir()`. I think this has a small performance cost, per PEP 471: > It returns a generator instead of a list, so that scandir acts as a true iterator instead of returning the full list immediately. As `scandir()` is already available from `_NormalAccessor` it's a simple patch to use `scandir()` instead. 
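A rough sketch of the generator form the report is describing, written as a standalone function rather than the real Path.iterdir() patch:
import os

def iterdir_lazily(path):
    # os.scandir() yields entries one at a time instead of building a full name list
    with os.scandir(path) as entries:
        for entry in entries:
            yield os.path.join(path, entry.name)

for child in iterdir_lazily("."):
    print(child)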
---------- components: Library (Lib) messages: 363689 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.iterdir()` wastes memory by using `os.listdir()` rather than `os.scandir()` type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 8 23:59:59 2020 From: report at bugs.python.org (Andy Lester) Date: Mon, 09 Mar 2020 03:59:59 +0000 Subject: [New-bugs-announce] [issue39908] Remove unused args from init_set_builtins_open and _Py_FatalError_PrintExc in Python/pylifecycle.c Message-ID: <1583726399.05.0.763626668846.issue39908@roundup.psfhosted.org> New submission from Andy Lester : init_set_builtins_open(PyThreadState *tstate) -> unused arg _Py_FatalError_PrintExc(int fd) -> unused arg ---------- components: Interpreter Core messages: 363690 nosy: petdance priority: normal severity: normal status: open title: Remove unused args from init_set_builtins_open and _Py_FatalError_PrintExc in Python/pylifecycle.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 03:26:04 2020 From: report at bugs.python.org (=?utf-8?b?0JzQuNGF0LDQuNC7INCa0YvRiNGC0YvQvNC+0LI=?=) Date: Mon, 09 Mar 2020 07:26:04 +0000 Subject: [New-bugs-announce] [issue39909] Assignment expression in assert causes SyntaxError Message-ID: <1583738764.77.0.124975430682.issue39909@roundup.psfhosted.org> New submission from Михаил Кыштымов : An assignment expression in assert causes a SyntaxError. Minimal case: ``` assert var := None ``` Error: ``` File "<stdin>", line 1 assert var := None ^ SyntaxError: invalid syntax ``` Workaround: ``` assert (var := None) ``` My use case: ``` my_dict = dict() assert value := my_dict.get('key') ``` ---------- components: Interpreter Core messages: 363698 nosy: Михаил Кыштымов priority: normal severity: normal status: open title: Assignment expression in assert causes SyntaxError type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 07:19:49 2020 From: report at bugs.python.org (Mingye Wang) Date: Mon, 09 Mar 2020 11:19:49 +0000 Subject: [New-bugs-announce] [issue39910] os.ftruncate on Windows should be sparse Message-ID: <1583752789.13.0.677978816709.issue39910@roundup.psfhosted.org> New submission from Mingye Wang : Consider this interaction: cmd> echo > 1.txt cmd> python -c "__import__('os').truncate('1.txt', 1024 ** 3)" cmd> fsutil sparse queryFlag 1.txt This not only takes a long time, as is typical for a zero-write, but the file also reports as non-sparse, as an actual write would suggest. This is because internally, _chsize_s and friends enlarge files using a loop.[1] [1]: https://github.com/leelwh/clib/blob/master/c/chsize.c On Unix systems, ftruncate for enlarging is described as "... as if the extra space is zero-filled", but this is not to be taken literally. In practice, sparse files are used whenever available (GNU dd expects that) and people do expect the operation to be very fast without a lot of real writes. A FreeBSD bug exists around how ftruncate is too slow on UFS. The aria2 downloader gives a good example of how to truncate into a sparse file on Windows.[2] First an FSCTL_SET_SPARSE control is issued, and then a seek + SetEndOfFile finishes the job.
Of course, a lseek to the end would be required to first determine the size of the file, so we know whether we are enlarging (sparse) or shrinking (don't sparse). [2]: https://github.com/aria2/aria2/blob/master/src/AbstractDiskWriter.cc#L507 ---------- components: Library (Lib) messages: 363717 nosy: Artoria2e5, steve.dower priority: normal severity: normal status: open title: os.ftruncate on Windows should be sparse versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 09:42:23 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 09 Mar 2020 13:42:23 +0000 Subject: [New-bugs-announce] [issue39911] "AMD64 Windows7 SP1 3.x" buildbot doesn't build anymore Message-ID: <1583761343.44.0.373306188217.issue39911@roundup.psfhosted.org> New submission from STINNER Victor : "AMD64 Windows7 SP1 3.x" buildbot worker fails to build Python. It should either be fixed, or the worker should be removed. ---------- components: Build, Tests messages: 363729 nosy: pablogsal, steve.dower, vstinner priority: normal severity: normal status: open title: "AMD64 Windows7 SP1 3.x" buildbot doesn't build anymore versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 10:05:44 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Mon, 09 Mar 2020 14:05:44 +0000 Subject: [New-bugs-announce] [issue39912] warnings.py should not raise AssertionError when validating arguments Message-ID: <1583762744.97.0.386872091092.issue39912@roundup.psfhosted.org> New submission from R?mi Lapeyre : Currently simplefilter() and filterwarnings() don't validate their arguments when using -O and AssertionError is not the most appropriate exception to raise. I will post a PR shortly. ---------- components: Library (Lib) messages: 363732 nosy: remi.lapeyre priority: normal severity: normal status: open title: warnings.py should not raise AssertionError when validating arguments versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 10:35:53 2020 From: report at bugs.python.org (daniel hahler) Date: Mon, 09 Mar 2020 14:35:53 +0000 Subject: [New-bugs-announce] [issue39913] Document warnings.WarningMessage ? Message-ID: <1583764553.02.0.37062728089.issue39913@roundup.psfhosted.org> New submission from daniel hahler : I've noticed that `warnings.WarningMessage` is not documented, i.e. it does not show up in the intersphinx object list. I'm not sure how to document it best, but maybe just describing its attributes? Ref: https://github.com/blueyed/cpython/blob/598d29c51c7b5a77f71eed0f615eb0b3865a4085/Lib/warnings.py#L398-L417 ---------- assignee: docs at python components: Documentation messages: 363735 nosy: blueyed, docs at python priority: normal severity: normal status: open title: Document warnings.WarningMessage ? _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 11:59:34 2020 From: report at bugs.python.org (Yuta Okamoto) Date: Mon, 09 Mar 2020 15:59:34 +0000 Subject: [New-bugs-announce] [issue39914] logging.config: '.' 
(dot) as a key is not documented Message-ID: <1583769574.27.0.968534931359.issue39914@roundup.psfhosted.org> New submission from Yuta Okamoto : I noticed that Configurators in logging.config module accepts '.' (dot) as a key to fill attributes for filters, formatters, and handlers directly like the following: handlers: syslog: class: logging.handlers.SysLogHandler .: ident: 'app-name: ' https://github.com/python/cpython/blob/46abfc1416ff8e450999611ef8f231ff871ab133/Lib/logging/config.py#L742 But it seems this functionality is not documented in https://docs.python.org/3/library/logging.config.html ---------- assignee: docs at python components: Documentation messages: 363744 nosy: Yuta Okamoto, docs at python priority: normal severity: normal status: open title: logging.config: '.' (dot) as a key is not documented type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 12:29:08 2020 From: report at bugs.python.org (Mads Sejersen) Date: Mon, 09 Mar 2020 16:29:08 +0000 Subject: [New-bugs-announce] [issue39915] AsyncMock doesn't work with asyncio.gather Message-ID: <1583771348.56.0.120635292587.issue39915@roundup.psfhosted.org> New submission from Mads Sejersen : When calling asyncio.gather the await_args_list is not correct. In the attached minimal example it contains only the latest call and not each of the three actual calls. Expected output: [call(0), call(1), call(2)] [call(1), call(2), call(3)] # or any permutation hereof Actual output: [call(0), call(1), call(2)] [call(3), call(3), call(3)] ---------- components: asyncio files: fail.py messages: 363748 nosy: Mads Sejersen, asvetlov, yselivanov priority: normal severity: normal status: open title: AsyncMock doesn't work with asyncio.gather type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48963/fail.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 12:54:33 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 09 Mar 2020 16:54:33 +0000 Subject: [New-bugs-announce] [issue39916] More reliable use of scandir in Path.glob() Message-ID: <1583772873.51.0.392726685418.issue39916@roundup.psfhosted.org> New submission from Serhiy Storchaka : Path.glob() uses os.scandir() in the following code. entries = list(scandir(parent_path)) It properly closes the internal file descriptor opened by scandir() if success because it is automatically closed closed when the iterator is exhausted. But if it was interrupted (by KeyboardInterrupt, MemoryError or OSError), the file descriptor will be closed only when the iterator be collected by the garbage collector. It is unreliable on implementations like PyPy and emits a ResourceWarning. The proposed code uses more reliable code with scandir(parent_path) as scandir_it: entries = list(scandir_it) which is used in other sites (in the shutil module). I have no idea why I did not write it in this form at first place. 
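For context, a small self-contained illustration of the with-form; the descriptor is closed as soon as the block exits, even if building the list is interrupted:
import os

with os.scandir(".") as scandir_it:
    entries = list(scandir_it)
print(len(entries), "entries; scandir iterator already closed")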
---------- components: Library (Lib) messages: 363750 nosy: serhiy.storchaka priority: normal severity: normal status: open title: More reliable use of scandir in Path.glob() type: resource usage versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 16:01:52 2020 From: report at bugs.python.org (Todd Levi) Date: Mon, 09 Mar 2020 20:01:52 +0000 Subject: [New-bugs-announce] [issue39917] new_compiler() called 2nd time causes error Message-ID: <1583784112.25.0.514690253102.issue39917@roundup.psfhosted.org> New submission from Todd Levi : Action: Run the following command in py36, py37, and py3 for package uvloop python setup.py build_ext --inline bdist_wheel Expected behavior: The package is built and bundled as a wheel. Observed behavior: The build fails at the bdist_wheel stage with the following error: error: don't know how to compile C/C++ code on platform 'posix' with '' compiler Additional Notes: If I split the two commands (build_ext and bdist_wheel) into separate invocations (e.g. python setup.py build_ext --inline && python setup.py bdist_wheel) then the wheel is successfully built. It only (and always) fails when build_ext and bdist_wheel are on the same command line. What "seems" to be happening is that bdist_wheel is somehow inheriting the existing CCompiler object that was used by build_ext and is then passing that back to distutils.compiler.new_compiler(). The new_compiler() function simply checks to see if compiler is None and, if not, uses its value as a key to the compiler_class dict. The distutils/command/build_ext build_ext object initially sets self.compiler to None so the first invocation of new_compiler() in build_ext.run() will work as expected. In build_ext.run() at around line 306 (in master), however, it simply does self.compiler = new_compiler(compiler=self.compiler,...) so any subsequent invocation of run() seems like it will fail and produce the error message I'm seeing. new_compiler() is the only place I see that error message being emitted. The package I'm building (uvloop) is being built with Cython but all the object paths I've been able to track come back to distutils.ccompiler.py. That packages setup.py file doesn't seem to do anything weird that I can see (which doesn't mean it isn't doing something weird). It sets the sdist and build_ext cmdclass entries to their own methods (that don't seem to set compiler - just compiler options) and also registers an extension via ext_modules. The setup.py code is here: https://github.com/MagicStack/uvloop/blob/master/setup.py Possible Fix: Two simple possibilities come to mind. 1) In run, see if self.compiler is not None and alter the call to new_compiler() to use self.compiler.compiler_type. 2) In new_compiler(), check the type of compiler and simply return if its a CCompiler object. 
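A rough sketch of fix option 1, assuming the distutils pieces named above (CCompiler, compiler_type, new_compiler); it illustrates the guard, it is not a tested patch:
from distutils.ccompiler import CCompiler, new_compiler

def recreate_compiler(current, verbose=0, dry_run=0, force=0):
    # on a second build_ext.run(), 'current' is already a CCompiler instance,
    # so fall back to its name ('unix', 'msvc', ...) before calling new_compiler()
    if isinstance(current, CCompiler):
        current = current.compiler_type
    return new_compiler(compiler=current, verbose=verbose,
                        dry_run=dry_run, force=force)

c1 = recreate_compiler(None)   # first run: the compiler starts out as None
c2 = recreate_compiler(c1)     # second run: no "don't know how to compile" error
print(type(c1).__name__, type(c2).__name__)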
---------- components: Distutils messages: 363765 nosy: dstufft, eric.araujo, televi priority: normal severity: normal status: open title: new_compiler() called 2nd time causes error type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 16:21:50 2020 From: report at bugs.python.org (Jerry James) Date: Mon, 09 Mar 2020 20:21:50 +0000 Subject: [New-bugs-announce] [issue39918] random.Random(False) weird error Message-ID: <1583785310.79.0.724541627547.issue39918@roundup.psfhosted.org> New submission from Jerry James : Python 3.8: >>> import random >>> r = random.Random(False) >>> r Python 3.9 alpha 4: >>> import random >>> r = random.Random(False) Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python3.9/random.py", line 100, in __init__ self.seed(x) File "/usr/lib64/python3.9/random.py", line 163, in seed super().seed(a) TypeError: descriptor '__abs__' of 'int' object needs an argument This arose in the context of Fedora builds with python 3.9. The networkx project reversed two arguments, resulting in False being passed to random.Random instead of the intended seed value. I'm glad we noticed the problem with 3.9 so the intended value will now be used, but that TypeError message doesn't really indicate the nature of the problem. Could you arrange for a better message? ---------- components: Library (Lib) messages: 363766 nosy: loganjerry priority: normal severity: normal status: open title: random.Random(False) weird error type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 19:15:35 2020 From: report at bugs.python.org (Enji Cooper) Date: Mon, 09 Mar 2020 23:15:35 +0000 Subject: [New-bugs-announce] [issue39919] C extension code reliant on static flags/behavior with PY_DEBUG (Py_SAFE_DOWNCAST, method flags) could potentially leverage _Static_assert Message-ID: <1583795735.01.0.546271118334.issue39919@roundup.psfhosted.org> New submission from Enji Cooper : Looking at Py_SAFE_DOWNCAST, it seems that the code could (in theory) leverage _Static_assert on C11 capable compilers [1]. Looking at some other code APIs, like module initialization with METH_VARARGS, etc, there are ways to determine whether or not the values are valid at compile-time with C11 capable compilers, instead of figuring out the issues on the tail end at runtime and having to play whackamole figuring out which offending methods are triggering issues (see also: bpo-39884). 1. 
https://en.cppreference.com/w/c/language/_Static_assert ---------- components: C API messages: 363785 nosy: ngie priority: normal severity: normal status: open title: C extension code reliant on static flags/behavior with PY_DEBUG (Py_SAFE_DOWNCAST, method flags) could potentially leverage _Static_assert _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 21:04:55 2020 From: report at bugs.python.org (Charles Machalow) Date: Tue, 10 Mar 2020 01:04:55 +0000 Subject: [New-bugs-announce] [issue39920] Pathlib path methods do not work with Window Dos devices Message-ID: <1583802295.48.0.863493828127.issue39920@roundup.psfhosted.org> New submission from Charles Machalow : I ran the following as admin in the Python interpreter (on Windows): >>> d = pathlib.Path(r'\\.\PHYSICALDRIVE0') >>> print(d) \\.\PHYSICALDRIVE0\ >>> d.exists() Traceback (most recent call last): File "", line 1, in File "C:\Python37\lib\pathlib.py", line 1318, in exists self.stat() File "C:\Python37\lib\pathlib.py", line 1140, in stat return self._accessor.stat(self) PermissionError: [WinError 31] A device attached to the system is not functioning: '\\\\.\\PHYSICALDRIVE0\\' >>> d.is_char_device() Traceback (most recent call last): File "", line 1, in File "C:\Python37\lib\pathlib.py", line 1403, in is_char_device return S_ISCHR(self.stat().st_mode) File "C:\Python37\lib\pathlib.py", line 1140, in stat return self._accessor.stat(self) PermissionError: [WinError 31] A device attached to the system is not functioning: '\\\\.\\PHYSICALDRIVE0\\' >>> d.is_block_device() Traceback (most recent call last): File "", line 1, in File "C:\Python37\lib\pathlib.py", line 1390, in is_block_device return S_ISBLK(self.stat().st_mode) File "C:\Python37\lib\pathlib.py", line 1140, in stat return self._accessor.stat(self) PermissionError: [WinError 31] A device attached to the system is not functioning: '\\\\.\\PHYSICALDRIVE0\\' I think that exists(), is_char_device(), and is_block_device() should be able to work on Windows in some form or fashion. At least without a traceback. ---------- messages: 363796 nosy: Charles Machalow priority: normal severity: normal status: open title: Pathlib path methods do not work with Window Dos devices type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 22:53:52 2020 From: report at bugs.python.org (Mageshkumar) Date: Tue, 10 Mar 2020 02:53:52 +0000 Subject: [New-bugs-announce] [issue39921] json module install error i was use windows 10 pro 64 bit, pls give solutions to rectify this issue Message-ID: <1583808832.46.0.617584210611.issue39921@roundup.psfhosted.org> New submission from Mageshkumar : C:\WINDOWS\system32>pip install jsonlib Collecting jsonlib Using cached jsonlib-1.6.1.tar.gz (43 kB) Installing collected packages: jsonlib Running setup.py install for jsonlib ... 
error ERROR: Command errored out with exit status 1: command: 'c:\users\mageshkumar\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Mageshkumar\\AppData\\Local\\Temp\\pip-install-dz8cos59\\jsonlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\Mageshkumar\\AppData\\Local\\Temp\\pip-install-dz8cos59\\jsonlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Mageshkumar\AppData\Local\Temp\pip-record-7a5omup8\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\mageshkumar\appdata\local\programs\python\python38\Include\jsonlib' cwd: C:\Users\Mageshkumar\AppData\Local\Temp\pip-install-dz8cos59\jsonlib\ Complete output (41 lines): running install running build running build_py creating build creating build\lib.win-amd64-3.8 copying jsonlib.py -> build\lib.win-amd64-3.8 running build_ext building '_jsonlib' extension creating build\temp.win-amd64-3.8 creating build\temp.win-amd64-3.8\Release C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\mageshkumar\appdata\local\programs\python\python38\include -Ic:\users\mageshkumar\appdata\local\programs\python\python38\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tc_jsonlib.c /Fobuild\temp.win-amd64-3.8\Release\_jsonlib.obj _jsonlib.c _jsonlib.c(99): warning C4996: 'PyUnicode_GetSize': deprecated in 3.3 c:\users\mageshkumar\appdata\local\programs\python\python38\include\unicodeobject.h(177): note: see declaration of 'PyUnicode_GetSize' _jsonlib.c(450): warning C4996: 'PyLong_FromUnicode': deprecated in 3.3 c:\users\mageshkumar\appdata\local\programs\python\python38\include\longobject.h(106): note: see declaration of 'PyLong_FromUnicode' _jsonlib.c(550): warning C4018: '<': signed/unsigned mismatch _jsonlib.c(643): warning C4020: 'PyFloat_FromString': too many actual parameters _jsonlib.c(655): warning C4996: 'PyLong_FromUnicode': deprecated in 3.3 c:\users\mageshkumar\appdata\local\programs\python\python38\include\longobject.h(106): note: see declaration of 'PyLong_FromUnicode' _jsonlib.c(1186): warning C4013: 'PyString_CheckExact' undefined; assuming extern returning int _jsonlib.c(1188): warning C4013: 'PyString_Check' undefined; assuming extern returning int _jsonlib.c(1208): warning C4013: 'PyObject_Unicode' undefined; assuming extern returning int _jsonlib.c(1208): warning C4047: '=': 'PyObject *' differs in levels of indirection from 'int' _jsonlib.c(1406): warning C4013: 'PyInt_CheckExact' undefined; assuming extern returning int _jsonlib.c(1517): warning C4013: 'PyString_AS_STRING' undefined; assuming extern returning int _jsonlib.c(1517): warning C4047: 'function': 'const char *' differs in levels of indirection from 'int' _jsonlib.c(1517): warning C4024: 'ascii_constant': different types for formal and actual parameter 1 _jsonlib.c(1539): warning C4013: 'PyInt_Check' undefined; assuming extern returning int _jsonlib.c(1931): warning C4047: '=': 'const char *' differs in levels of indirection from 'int' _jsonlib.c(1970): warning C4047: 'function': 
'const char *' differs in levels of indirection from 'int' _jsonlib.c(1970): warning C4024: 'serializer_append_ascii': different types for formal and actual parameter 2 _jsonlib.c(2089): warning C4996: 'PyUnicode_Encode': deprecated in 3.3 c:\users\mageshkumar\appdata\local\programs\python\python38\include\cpython/unicodeobject.h(791): note: see declaration of 'PyUnicode_Encode' _jsonlib.c(2123): warning C4996: 'PyUnicode_Encode': deprecated in 3.3 c:\users\mageshkumar\appdata\local\programs\python\python38\include\cpython/unicodeobject.h(791): note: see declaration of 'PyUnicode_Encode' _jsonlib.c(2161): warning C4013: 'Py_InitModule3' undefined; assuming extern returning int C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\users\mageshkumar\appdata\local\programs\python\python38\libs /LIBPATH:c:\users\mageshkumar\appdata\local\programs\python\python38\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" /EXPORT:PyInit__jsonlib build\temp.win-amd64-3.8\Release\_jsonlib.obj /OUT:build\lib.win-amd64-3.8\_jsonlib.cp38-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.8\Release\_jsonlib.cp38-win_amd64.lib LINK : error LNK2001: unresolved external symbol PyInit__jsonlib build\temp.win-amd64-3.8\Release\_jsonlib.cp38-win_amd64.lib : fatal error LNK1120: 1 unresolved externals error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120 ---------------------------------------- ERROR: Command errored out with exit status 1: 'c:\users\mageshkumar\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Mageshkumar\\AppData\\Local\\Temp\\pip-install-dz8cos59\\jsonlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\Mageshkumar\\AppData\\Local\\Temp\\pip-install-dz8cos59\\jsonlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Mageshkumar\AppData\Local\Temp\pip-record-7a5omup8\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\mageshkumar\appdata\local\programs\python\python38\Include\jsonlib' Check the logs for full command output. 
C:\WINDOWS\system32> ---------- components: Installation files: json error.txt messages: 363797 nosy: magesh priority: normal severity: normal status: open title: json module install error i was use windows 10 pro 64 bit, pls give solutions to rectify this issue type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file48964/json error.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 23:17:44 2020 From: report at bugs.python.org (Andy Lester) Date: Tue, 10 Mar 2020 03:17:44 +0000 Subject: [New-bugs-announce] [issue39922] Remove unused args in Python/compile.c Message-ID: <1583810264.23.0.039012420408.issue39922@roundup.psfhosted.org> New submission from Andy Lester : These functions have unnecessary args that can be removed: * binop * compiler_add_o * compiler_next_instr * inplace_binop ---------- components: Interpreter Core messages: 363799 nosy: petdance priority: normal severity: normal status: open title: Remove unused args in Python/compile.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 9 23:45:41 2020 From: report at bugs.python.org (Mageshkumar) Date: Tue, 10 Mar 2020 03:45:41 +0000 Subject: [New-bugs-announce] [issue39923] Command errored out with exit status 1: while jsonlib Message-ID: New submission from Mageshkumar : hi i have detail issue of while i was install jsonlib, pls kindly provide the solutions *Thanks & Regards* *M.Mageshkumar* ---------- files: json error.txt messages: 363802 nosy: magesh priority: normal severity: normal status: open title: Command errored out with exit status 1: while jsonlib Added file: https://bugs.python.org/file48965/json error.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 10 17:12:47 2020 From: report at bugs.python.org (Barney Gale) Date: Tue, 10 Mar 2020 21:12:47 +0000 Subject: [New-bugs-announce] [issue39924] pathlib handles missing `os.link`, `os.symlink` and `os.readlink` inconsistently Message-ID: <1583874767.79.0.0714252611287.issue39924@roundup.psfhosted.org> New submission from Barney Gale : Small bug report encompassing some related issues in `pathlib._NormalAccessor`: - `link_to()` should be named `link()` for consistency with other methods - `symlink()` doesn't need to guard against `os.symlink()` not accepting `target_is_directory` on non-Windows platforms; this has been fixed since 3.3 - `readlink()` doesn't raise `NotImplementedError` when `os.readlink()` is unavailable Only the last of these has a user impact, and only on exotic systems.
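For the last point, a rough sketch of the kind of guard meant, modelled on how other optional os functions are commonly handled; this is illustrative, not the pathlib source:
import os

if hasattr(os, "readlink"):
    readlink = os.readlink
else:
    def readlink(path):
        raise NotImplementedError("os.readlink() is not available on this system")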
---------- components: Library (Lib) messages: 363845 nosy: barneygale priority: normal severity: normal status: open title: pathlib handles missing `os.link`, `os.symlink` and `os.readlink` inconsistently type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 10 17:31:06 2020 From: report at bugs.python.org (Barney Gale) Date: Tue, 10 Mar 2020 21:31:06 +0000 Subject: [New-bugs-announce] [issue39925] `pathlib.Path.link_to()` has the wrong argument order Message-ID: <1583875866.01.0.524310586519.issue39925@roundup.psfhosted.org> New submission from Barney Gale : `mylink.symlink_to(target)` and `mylink.link_to(target)` should both create a link (soft or hard) at *mylink* that points to *target*. But `link_to()` does the opposite - it creates *target* and points it towards *mylink*. Correct behaviour from `symlink_to()`: barney at acorn ~/projects/cpython $ touch /tmp/target barney at acorn ~/projects/cpython $ ./python Python 3.9.0a3+ (heads/bpo-39659-pathlib-getcwd-dirty:a4ba8a3, Feb 19 2020, 02:22:39) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pathlib >>> p = pathlib.Path('/tmp/link') >>> p.symlink_to('/tmp/target') >>> exit() barney at acorn ~/projects/cpython $ ls -l /tmp/link /tmp/target lrwxrwxrwx 1 barney barney 11 Mar 10 21:20 /tmp/link -> /tmp/target -rw-rw-r-- 1 barney barney 0 Mar 10 21:20 /tmp/target Incorrect behaviour from `link_to()`: barney at acorn ~/projects/cpython $ rm /tmp/link barney at acorn ~/projects/cpython $ ./python Python 3.9.0a3+ (heads/bpo-39659-pathlib-getcwd-dirty:a4ba8a3, Feb 19 2020, 02:22:39) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pathlib >>> p = pathlib.Path('/tmp/link') >>> p.link_to('/tmp/target') Traceback (most recent call last): File "", line 1, in File "/home/barney/projects/cpython/Lib/pathlib.py", line 1370, in link_to self._accessor.link_to(self, target) FileNotFoundError: [Errno 2] No such file or directory: '/tmp/link' -> '/tmp/target' >>> # but... >>> p = pathlib.Path('/tmp/target') >>> p.link_to('/tmp/link') >>> exit() barney at acorn ~/projects/cpython $ ls -l /tmp/link /tmp/target -rw-rw-r-- 2 barney barney 0 Mar 10 21:20 /tmp/link -rw-rw-r-- 2 barney barney 0 Mar 10 21:20 /tmp/target ---------- components: Library (Lib) messages: 363850 nosy: barneygale priority: normal severity: normal status: open title: `pathlib.Path.link_to()` has the wrong argument order type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 10 18:52:02 2020 From: report at bugs.python.org (Zufu Liu) Date: Tue, 10 Mar 2020 22:52:02 +0000 Subject: [New-bugs-announce] [issue39926] unicodedata for Unicode 13.0.0 Message-ID: <1583880722.38.0.0788382119247.issue39926@roundup.psfhosted.org> New submission from Zufu Liu : Unicode 13.0.0 was released on March 10. 
https://www.unicode.org/versions/latest/ http://www.unicode.org/versions/Unicode13.0.0/ ---------- messages: 363859 nosy: zufuliu priority: normal severity: normal status: open title: unicodedata for Unicode 13.0.0 type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 10 19:11:19 2020 From: report at bugs.python.org (dd) Date: Tue, 10 Mar 2020 23:11:19 +0000 Subject: [New-bugs-announce] [issue39927] IDLE 3.8.2 doesn't launch on Catalina 10.15.3- HELP Message-ID: <1583881879.01.0.624890561041.issue39927@roundup.psfhosted.org> New submission from dd : I just upgraded my Mac OS to 10.15.3 and Python 3.8.2 install went fine. But IDLE doesn't launch at all. I have already gone to System Preferences and allowed it to allow apps from anywhere. When I go to the command line and do Python 3 I see the correct version. But clicking on the IDLE app does nothing. And previous python scripts don't run at all and I don't see errors. Super grateful to anyone who may know the answer! ---------- assignee: terry.reedy components: IDLE messages: 363860 nosy: dd789, terry.reedy priority: normal severity: normal status: open title: IDLE 3.8.2 doesn't launch on Catalina 10.15.3- HELP versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 10 20:07:35 2020 From: report at bugs.python.org (Sandeep) Date: Wed, 11 Mar 2020 00:07:35 +0000 Subject: [New-bugs-announce] [issue39928] Pysftp Issue File Upload is not working - put command Message-ID: <1583885255.31.0.428229167076.issue39928@roundup.psfhosted.org> New submission from Sandeep : Hi We have requirement where we need to get file from client path and then upload the same to vendor directory path. I am not able to upload the file to vendor directory path , however when I tried to use the WINSCP it worked fine. So I thought of checking with Gurus what is wrong I am doing in my script. Appreciate your input. I will attach my script. Here is what I am doing. Step1. Clear the client directory path Step2. Make a call to HVAC Vault to get the username and password for client and vendor server Step3. Use the username and password to establish connection using pysftp for client. Step4. Store the file in local path. Step5. Segregate the file into different path based on file type Step6 Establish a connection to vendor and copy the file to vendor. Step7 Close the client and Vendor connection Please see the file attached. 
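A short sketch of the vendor-upload part (Step 6 and Step 7), assuming the third-party pysftp package and made-up host, credential and path names; the FileNotFoundError below typically means the remote directory in the put() target does not exist yet:
import pysftp

with pysftp.Connection("vendor.example.com",
                       username="vendor_user", password="********") as vendor:
    remote_dir = "/custom/OWO"
    if not vendor.isdir(remote_dir):   # create the target folder before uploading
        vendor.makedirs(remote_dir)
    vendor.put("/local/outbound/ECE_OWO_20200303_143895.dat",
               remote_dir + "/ECE_OWO_20200303_143895.dat")
# leaving the with-block closes the vendor connection (Step 7)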
Also below is the error which I am getting --------------------------------------------------------- ERROR:root:Error in getting the file from EBS Outbound Server Traceback (most recent call last): File "FILE_TRANSFER_PROCESS.py", line 191, in file_transfer vendor.put(src_file, dst_file) File "/d01/python3/lib64/python3.6/site-packages/pysftp/__init__.py", line 364, in put confirm=confirm) File "/d01/python3/lib64/python3.6/site-packages/paramiko/sftp_client.py", line 759, in put return self.putfo(fl, remotepath, file_size, callback, confirm) File "/d01/python3/lib64/python3.6/site-packages/paramiko/sftp_client.py", line 720, in putfo s = self.stat(remotepath) File "/d01/python3/lib64/python3.6/site-packages/paramiko/sftp_client.py", line 493, in stat t, msg = self._request(CMD_STAT, path) File "/d01/python3/lib64/python3.6/site-packages/paramiko/sftp_client.py", line 813, in _request return self._read_response(num) File "/d01/python3/lib64/python3.6/site-packages/paramiko/sftp_client.py", line 865, in _read_response self._convert_status(msg) File "/d01/python3/lib64/python3.6/site-packages/paramiko/sftp_client.py", line 894, in _convert_status raise IOError(errno.ENOENT, text) FileNotFoundError: [Errno 2] /custom/OWO/ECE_OWO_20200303_143895.dat ---------- components: Tests files: File_Transfer_Process_Client.py messages: 363870 nosy: Sandeep priority: normal severity: normal status: open title: Pysftp Issue File Upload is not working - put command type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48967/File_Transfer_Process_Client.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 02:13:03 2020 From: report at bugs.python.org (bscarlett) Date: Wed, 11 Mar 2020 06:13:03 +0000 Subject: [New-bugs-announce] [issue39929] dataclasses.asdict will mangle collection.Counter instances Message-ID: <1583907183.73.0.710103063731.issue39929@roundup.psfhosted.org> New submission from bscarlett : I noticed that dataclasses.asdict seems to incorrectly reconstruct collections.Counter objects with the counter values as tuple keys. eg: In [1]: from collections import Counter In [2]: from dataclasses import dataclass, asdict In [3]: c = Counter() In [4]: c['stuff'] += 1 In [5]: @dataclass ...: class Bob: ...: c: Counter ...: In [6]: b = Bob(c) In [7]: c Out[7]: Counter({'stuff': 1}) In [9]: b.c Out[9]: Counter({'stuff': 1}) In [10]: asdict(b) Out[10]: {'c': Counter({('stuff', 1): 1})} In [11]: asdict(b)['c'] Out[11]: Counter({('stuff', 1): 1}) The Counter gets reconstructed with its item tuples as keys. This problem seems to have similar aspects to https://bugs.python.org/issue35540 ---------- components: Library (Lib) messages: 363884 nosy: brad.scarlett at gmail.com priority: normal severity: normal status: open title: dataclasses.asdict will mangle collection.Counter instances type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 05:19:34 2020 From: report at bugs.python.org (Russell Keith-Magee) Date: Wed, 11 Mar 2020 09:19:34 +0000 Subject: [New-bugs-announce] [issue39930] Embedded installer for Python 3.7.7 missing vcruntime140.dll Message-ID: <1583918374.95.0.227432259029.issue39930@roundup.psfhosted.org> New submission from Russell Keith-Magee : The Windows python-3.7.7-embed-amd64.zip installer (released Mar 11 2020) appears to be missing vcruntime140.dll. 
As a result, running the python.exe or pythonw.exe included in that installer fails with a system error notifying you of the missing DLL. The 3.7.6 embedded install included this file. ---------- components: Build, Windows messages: 363891 nosy: freakboy3742, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Embedded installer for Python 3.7.7 missing vcruntime140.dll versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 05:20:52 2020 From: report at bugs.python.org (agmt) Date: Wed, 11 Mar 2020 09:20:52 +0000 Subject: [New-bugs-announce] [issue39931] Global variables are not accessible from child processes (multiprocessing.Pool) Message-ID: <1583918452.53.0.6597480705.issue39931@roundup.psfhosted.org> New submission from agmt : Attached test works correctly on linux (3.7, 3.8) and mac (only 3.7). Mac python3.8 falls with exception: multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "test.py", line 8, in work print(F"Work={arg} args={args}") NameError: name 'args' is not defined """ ---------- components: Library (Lib), macOS files: test.py messages: 363892 nosy: agmt, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Global variables are not accessible from child processes (multiprocessing.Pool) type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48968/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 07:52:10 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 11 Mar 2020 11:52:10 +0000 Subject: [New-bugs-announce] [issue39932] test_multiprocessing_fork leaked [0, 2, 0] file descriptors on aarch64 RHEL8 Refleaks 3.7 buildbot Message-ID: <1583927530.29.0.751848754039.issue39932@roundup.psfhosted.org> New submission from STINNER Victor : aarch64 RHEL8 Refleaks 3.7: https://buildbot.python.org/all/#/builders/620/builds/20 test_multiprocessing_fork leaked [0, 2, 0] file descriptors, sum=2 0:40:22 load avg: 0.93 Re-running failed tests in verbose mode 0:40:22 load avg: 0.93 Re-running test_multiprocessing_fork in verbose mode test_multiprocessing_fork leaked [2, 0, 0] file descriptors, sum=2 ---------- components: Tests messages: 363909 nosy: vstinner priority: normal severity: normal status: open title: test_multiprocessing_fork leaked [0, 2, 0] file descriptors on aarch64 RHEL8 Refleaks 3.7 buildbot versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 07:56:43 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 11 Mar 2020 11:56:43 +0000 Subject: [New-bugs-announce] [issue39933] test_gdb fails on AMD64 FreeBSD Shared 3.x: ptrace: Operation not permitted Message-ID: <1583927803.55.0.643222253401.issue39933@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/152/builds/384 Example: ====================================================================== FAIL: test_pycfunction (test.test_gdb.PyBtTests) [_testcapi.MethClass().meth_fastcall_keywords] Verify that "py-bt" displays invocations of PyCFunction instances 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_gdb.py", line 896, in test_pycfunction self.assertIn(f' _______________________________________ From report at bugs.python.org Wed Mar 11 09:24:52 2020 From: report at bugs.python.org (myzhang1029) Date: Wed, 11 Mar 2020 13:24:52 +0000 Subject: [New-bugs-announce] [issue39934] Fatal Python error "XXX block stack overflow" when exception stacks >10 Message-ID: <1583933092.46.0.675431384436.issue39934@roundup.psfhosted.org> New submission from myzhang1029 : I apologize for describing this issue badly, but I'll try anyway. The code to demonstrate the issue is attached, so it might be better to read that instead. I noticed that when more than 10 exceptions are raised sequentially (i.e. one from another or one during the handling of another), the interpreter crashes saying "Fatal Python error: XXX block stack overflow". This happens in Python 3.7, 3.8 and development (git 39c3493) versions, but not in Python 2.7. Using IPython also avoids this issue. I know this case is rare, but the maximum number of recursions is more than 2000, and the maximum number of statically nested blocks specified in frameobject.c is 20, so I'm pretty sure this isn't intended behavior. ---------- components: Interpreter Core files: exception_nest.py messages: 363914 nosy: myzhang1029 priority: normal severity: normal status: open title: Fatal Python error "XXX block stack overflow" when exception stacks >10 type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48969/exception_nest.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 12:25:58 2020 From: report at bugs.python.org (Nazime Koussaila Lakehal) Date: Wed, 11 Mar 2020 16:25:58 +0000 Subject: [New-bugs-announce] [issue39935] argparse: help parameter not documented in add_subparsers().add_parser Message-ID: <1583943958.07.0.538829278301.issue39935@roundup.psfhosted.org> New submission from Nazime Koussaila Lakehal : The help parameter of the add_parser() method of _SubParsersAction (the object we obtain with ArgumentParser.add_subparsers()) is not documented. In the documentation we can find: The add_subparsers() method is normally called with no arguments and returns a special action object. This object has a single method, add_parser(), which takes a command name and any ArgumentParser constructor arguments, and returns an ArgumentParser object that can be modified as usual. I found the parameter by accident and then checked the source code; it's unfortunate, because the help parameter gives really nice output when there are subcommands. The 'aliases' parameter is also not documented.
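For illustration, both parameters are already accepted by add_parser() and affect the generated help; a minimal example (the command names here are made up):

import argparse

parser = argparse.ArgumentParser(prog="tool")
subparsers = parser.add_subparsers(title="commands")
# 'help' is the text shown next to each command; 'aliases' registers extra names.
subparsers.add_parser("clone", aliases=["co"], help="copy a repository")
subparsers.add_parser("remove", aliases=["rm"], help="delete a repository")
parser.print_help()

With help given, each command gets its own line in the commands section, e.g. "clone (co)  copy a repository", beneath the {clone,co,remove,rm} choices line.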
---------- assignee: docs at python components: Documentation messages: 363932 nosy: Nazime Koussaila Lakehal, docs at python priority: normal severity: normal status: open title: argparse: help parameter not documented in add_subparsers().add_parser type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 13:42:14 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 11 Mar 2020 17:42:14 +0000 Subject: [New-bugs-announce] [issue39936] Python fails to build _asyncio on module on AIX Message-ID: <1583948534.42.0.125038756594.issue39936@roundup.psfhosted.org> New submission from STINNER Victor : The commit 1ec63b62035e73111e204a0e03b83503e1c58f2e of bpo-39763 broke the Python compilation on AIX: https://bugs.python.org/issue39763#msg363749 -- The last successful build was before the commit 1ec63b62035e73111e204a0e03b83503e1c58f2e: https://buildbot.python.org/all/#/builders/119/builds/383 _socket compilation: (...) -o build/lib.aix-7100-9898-32-3.9-pydebug/_socket.so pythoninfo: sys.path: [ '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build', '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/target/lib/python39.zip', '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib', '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/build/lib.aix-7100-9898-32-3.9-pydebug', '/home/buildbot/.local/lib/python3.9/site-packages'] Both steps use "build/lib.aix-7100-9898-32-3.9-pydebug/" directory. -- Recent failure: https://buildbot.python.org/all/#/builders/119/builds/451 _socket compilation: (...) -o build/lib.aix-7104-1806-32-3.9-pydebug/_socket.so pythoninfo: sys.path: [ '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build', '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/target/lib/python39.zip', '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/Lib', '/home/buildbot/buildarea/3.x.aixtools-aix-power6/build/build/lib.aix-7100-9898-32-3.9-pydebug', '/home/buildbot/.local/lib/python3.9/site-packages'] So the compilation uses "build/lib.aix-7104-1806-32-3.9-pydebug/" directory, whereas pythoninfo uses "build/lib.aix-7100-9898-32-3.9-pydebug/" directory. It can explain why setup.py fails to import _socket later: it was built in a different directory. -- I see that _aix_support._aix_bosmp64() has two code paths depending on the subprocess module can be imported: # subprocess is not necessarily available early in the build process # if not available, the config_vars are also definitely not available # supply substitutes to bootstrap the build try: import subprocess _have_subprocess = True _tmp_bd = get_config_var("AIX_BUILDDATE") _bgt = get_config_var("BUILD_GNU_TYPE") except ImportError: # pragma: no cover _have_subprocess = False _tmp_bd = None _bgt = "powerpc-ibm-aix6.1.7.0" def _aix_bosmp64(): # type: () -> Tuple[str, int] """ Return a Tuple[str, int] e.g., ['7.1.4.34', 1806] The fileset bos.mp64 is the AIX kernel. It's VRMF and builddate reflect the current ABI levels of the runtime environment. 
""" if _have_subprocess: # We expect all AIX systems to have lslpp installed in this location out = subprocess.check_output(["/usr/bin/lslpp", "-Lqc", "bos.mp64"]) out = out.decode("utf-8").strip().split(":") # type: ignore # Use str() and int() to help mypy see types return str(out[2]), int(out[-1]) else: from os import uname osname, host, release, version, machine = uname() return "{}.{}.0.0".format(version, release), _MISSING_BD -- _aix_support._aix_bosmp64() is called by _aix_support.aix_platform() which is called by get_host_platform() of distutils.util. Currently, setup.py does: * Inject _bootsubprocess into sys.modules['subprocess'] so "import subprocess" works * Build all C extensions * Remove sys.modules['subprocess'], so the next "import subprocess" may or may not load Lib/subprocess.py which uses the newly built C extensions like _posixsubprocess and select * Attempt to load C extensions: if an import fails, rename the file: it happens for _asyncio on AIX, that's the bug ---------- components: Build messages: 363946 nosy: vstinner priority: normal severity: normal status: open title: Python fails to build _asyncio on module on AIX versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 13:51:38 2020 From: report at bugs.python.org (Rahul Kumaresan) Date: Wed, 11 Mar 2020 17:51:38 +0000 Subject: [New-bugs-announce] [issue39937] Suggest the usage of Element.iter() instead of iter() in whatsnew Message-ID: <1583949098.86.0.806272470528.issue39937@roundup.psfhosted.org> New submission from Rahul Kumaresan : In the whatsnew section, under the point which mentions the deprecation of getchildren() and getiterator() through bpo-36543, it is suggested to use iter() instead. Ideally there should be a suggestion to use Element.iter() instead. ---------- assignee: docs at python components: Documentation messages: 363949 nosy: docs at python, rahul-kumi, xtreak priority: normal pull_requests: 18290 severity: normal status: open title: Suggest the usage of Element.iter() instead of iter() in whatsnew versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 14:54:14 2020 From: report at bugs.python.org (Eric Govatos) Date: Wed, 11 Mar 2020 18:54:14 +0000 Subject: [New-bugs-announce] [issue39938] RotatingFileHandler does not support any other mode than 'a'. Message-ID: <1583952854.31.0.172490606353.issue39938@roundup.psfhosted.org> New submission from Eric Govatos : The RotatingFileHandler class located within lib/logging/handlers.py does not accept any other 'mode' value than its default value of 'a' - even when the parameter is supplied, such as 'ab', to append bytes, this is disregarded and the mode is set back to 'a' if the maxBytes variable is greater than 0. While the reasoning behind this does make sense as the rotating file handler should only be appending data to files, this means that supplying a mode to append bytes is not supported. You can currently get around this by supplying the mode after constructing the handler. Proposed solution would be to remove lines 146 and 147 from the class entirely and let the user decide on the file mode themselves. ---------- components: Library (Lib) messages: 363957 nosy: elgovatos priority: normal severity: normal status: open title: RotatingFileHandler does not support any other mode than 'a'. 
type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 15:11:38 2020 From: report at bugs.python.org (Dennis Sweeney) Date: Wed, 11 Mar 2020 19:11:38 +0000 Subject: [New-bugs-announce] [issue39939] Add str methods to remove prefixes or suffixes Message-ID: <1583953898.19.0.613718077904.issue39939@roundup.psfhosted.org> New submission from Dennis Sweeney : Following discussion here ( https://mail.python.org/archives/list/python-ideas at python.org/thread/RJARZSUKCXRJIP42Z2YBBAEN5XA7KEC3/ ), there is a proposal to add new methods str.cutprefix and str.cutsuffix to alleviate the common misuse of str.lstrip and str.rstrip. I think sticking with the most basic possible behavior

def cutprefix(self: str, prefix: str) -> str:
    if self.startswith(prefix):
        return self[len(prefix):]
    # return a copy to work for bytearrays
    return self[:]

def cutsuffix(self: str, suffix: str) -> str:
    if self.endswith(suffix):
        # handles the "[:-0]" issue
        return self[:len(self)-len(suffix)]
    return self[:]

would be best (refusing to guess in the face of ambiguous multiple arguments). Someone can do, e.g.

>>> 'foo.tar.gz'.cutsuffix('.gz').cutsuffix('.tar')
'foo'

to cut off multiple suffixes. More complicated behavior for multiple arguments could be added later, but it would be easy to make a mistake in prematurely generalizing right now. In bikeshedding method names, I think that avoiding the word "strip" would be nice so users can have a consistent feeling that "'strip' means character sets; 'cut' means substrings". ---------- components: Interpreter Core messages: 363958 nosy: Dennis Sweeney priority: normal severity: normal status: open title: Add str methods to remove prefixes or suffixes type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 17:44:12 2020 From: report at bugs.python.org (Marco Sulla) Date: Wed, 11 Mar 2020 21:44:12 +0000 Subject: [New-bugs-announce] [issue39940] Micro-optimizations to PySequence_Tuple() Message-ID: <1583963052.53.0.534992223538.issue39940@roundup.psfhosted.org> New submission from Marco Sulla : This is a little PR with some micro-optimizations to the PySequence_Tuple() function. Mainly, it simply adds a support variable new_n_tmp_1 instead of reassigning newn multiple times. ---------- components: Interpreter Core messages: 363974 nosy: Marco Sulla priority: normal pull_requests: 18296 severity: normal status: open title: Micro-optimizations to PySequence_Tuple() type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 19:10:56 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 11 Mar 2020 23:10:56 +0000 Subject: [New-bugs-announce] [issue39941] multiprocessing: Process.join() should emit a warning if the process is killed by a signal Message-ID: <1583968256.43.0.607036154625.issue39941@roundup.psfhosted.org> New submission from STINNER Victor : While debugging bpo-39877, I was surprised that the Python crash was only noticed on FreeBSD by a side effect. On FreeBSD, coredump files are created in the current directory. But Python regrtest fails if a test creates a file and doesn't remove it.
I found that multiprocessing.Process.join() deletes the subprocess.Popen object as soon as the process completes, but it doesn't log any warning if the process is killed by a signal. The caller has no way to be notified. I propose to enhance Process to log a warning if such case happens. ---------- components: Library (Lib) messages: 363980 nosy: vstinner priority: normal severity: normal status: open title: multiprocessing: Process.join() should emit a warning if the process is killed by a signal versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 20:11:51 2020 From: report at bugs.python.org (jack1142) Date: Thu, 12 Mar 2020 00:11:51 +0000 Subject: [New-bugs-announce] [issue39942] Making instance of `TypeVar` fails because of missing `__name__` Message-ID: <1583971911.89.0.874740392795.issue39942@roundup.psfhosted.org> New submission from jack1142 : Example code: ``` code = """ import typing T = typing.TypeVar("T") """ exec(code, {}) ``` Traceback: ``` Traceback (most recent call last): File "", line 1, in File "", line 3, in File "C:\Python38\lib\typing.py", line 603, in __init__ def_mod = sys._getframe(1).f_globals['__name__'] # for pickling KeyError: '__name__' ``` If this problem with `__name__` is not something that needs to be fixed, then I also noticed that the same line in typing.py will also raise when platform doesn't have `sys._getframe()` ---------- components: Library (Lib) messages: 363990 nosy: jack1142 priority: normal severity: normal status: open title: Making instance of `TypeVar` fails because of missing `__name__` type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 11 23:23:07 2020 From: report at bugs.python.org (Andy Lester) Date: Thu, 12 Mar 2020 03:23:07 +0000 Subject: [New-bugs-announce] [issue39943] Meta: Clean up various issues Message-ID: <1583983387.49.0.277830283994.issue39943@roundup.psfhosted.org> New submission from Andy Lester : This is a meta-ticket for a number of small PRs that clean up some internals. Issues will include: * Removing unnecessary casts * consting pointers that can be made const * Removing unused function arguments * etc ---------- components: Interpreter Core messages: 363993 nosy: petdance priority: normal severity: normal status: open title: Meta: Clean up various issues _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 02:32:03 2020 From: report at bugs.python.org (Dennis Sweeney) Date: Thu, 12 Mar 2020 06:32:03 +0000 Subject: [New-bugs-announce] [issue39944] UserString.join should return UserString Message-ID: <1583994723.34.0.552353280143.issue39944@roundup.psfhosted.org> New submission from Dennis Sweeney : It seems that `.join` methods typically return the type of the separator on which they are called: >>> bytearray(b" ").join([b"a", b"b"]) bytearray(b'a b') >>> b" ".join([bytearray(b"a"), bytearray(b"b")]) b'a b' This is broken in UserString.join: >>> from collections import UserString as US >>> x = US(" ").join(["a", "b"]) >>> type(x) Furthermore, this method cannot even accept UserStrings from the iterable: >>> US(" ").join([US("a"), US("b")]) Traceback (most recent call last): ... TypeError: sequence item 0: expected str instance, UserString found. I can submit a PR to fix this. 
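For reference, one possible shape of such a fix (a sketch only, not the actual patch) is to join via the underlying .data string, coercing the items to plain str and wrapping the result back into the separator's class, mirroring the bytes/bytearray behaviour shown above:

from collections import UserString

class FixedUserString(UserString):
    def join(self, seq):
        # str.join() rejects UserString items, so convert them to str first,
        # then preserve the type of the separator in the result.
        return self.__class__(self.data.join(str(item) for item in seq))

x = FixedUserString(" ").join([FixedUserString("a"), "b"])
print(type(x).__name__, x)  # FixedUserString a b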
---------- components: Library (Lib) messages: 363995 nosy: Dennis Sweeney priority: normal severity: normal status: open title: UserString.join should return UserString type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 04:54:06 2020 From: report at bugs.python.org (Tomas Hak) Date: Thu, 12 Mar 2020 08:54:06 +0000 Subject: [New-bugs-announce] [issue39945] Wrong example result in docs Message-ID: <1584003246.97.0.382809912153.issue39945@roundup.psfhosted.org> New submission from Tomas Hak : Step to reproduce: Open page: https://docs.python.org/3/library/sched.html Location: Documentation : sched ? Event scheduler Description: In the text is definition of priority: "Events scheduled for the same time will be executed in the order of their priority. A lower number represents a higher priority." I assume it is correct, but example shows different behavior. There are scheduled two events with priority two and one. In print on the end of example, they are in wrong order : Code: s.enter(5, 2, print_time, argument=('positional',)) s.enter(5, 1, print_time, kwargs={'a': 'keyword'}) Current Result: >From print_time 930343695.274 positional >From print_time 930343695.275 keyword Expected result: >From print_time 930343695.274 keyword >From print_time 930343695.275 positional Conclusion : I tested the example code and it gave me expected result in order "keyword","positional" ---------- assignee: docs at python components: Documentation messages: 364004 nosy: docs at python, hook priority: normal severity: normal status: open title: Wrong example result in docs versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 13:40:48 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 12 Mar 2020 17:40:48 +0000 Subject: [New-bugs-announce] [issue39946] Is it time to remove _PyThreadState_GetFrame() hook? Message-ID: <1584034848.74.0.766421748348.issue39946@roundup.psfhosted.org> New submission from STINNER Victor : Python has an internal function to get the frame of the PyThreadState: /* hook for PyEval_GetFrame(), requested for Psyco */ #define _PyThreadState_GetFrame _PyRuntime.gilstate.getframe It is used by the public function PyEval_GetFrame() for example. The indirection was added in 2002 by: commit 019a78e76d3542d4d56a08015e6980f8c8aeaba1 Author: Michael W. Hudson Date: Fri Nov 8 12:53:11 2002 +0000 Assorted patches from Armin Rigo: [ 617309 ] getframe hook (Psyco #1) [ 617311 ] Tiny profiling info (Psyco #2) [ 617312 ] debugger-controlled jumps (Psyco #3) These are forward ports from 2.2.2. ... but psyco is outdated for a very long time (superseded by PyPy which is no longer based on CPython). Is it time to drop _PyThreadState_GetFrame() (which became _PyRuntime.gilstate.getframe in the meanwhile)? Or if we keep it, we should use it rather accessing directly PyThreadState.frame (read or write). See also PEP 523 "Adding a frame evaluation API to CPython" and a recent discussion on this PEP: bpo-38500. ---------- components: Interpreter Core messages: 364031 nosy: vstinner priority: normal severity: normal status: open title: Is it time to remove _PyThreadState_GetFrame() hook? 
versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 13:46:54 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 12 Mar 2020 17:46:54 +0000 Subject: [New-bugs-announce] [issue39947] Move PyThreadState structure to the internal C API Message-ID: <1584035214.47.0.672795796186.issue39947@roundup.psfhosted.org> New submission from STINNER Victor : Python 3.8 moved PyInterpreterState to the internal C API (commit be3b295838547bba267eb08434b418ef0df87ee0 of bpo-35886)... which caused bpo-38500 issue. In Python 3.9, I provided Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() as regular functions for the limited API: commit f4b1e3d7c64985f5d5b00f6cc9a1c146bbbfd613 of bpo-38644. Previously, there were defined as macros, but these macros didn?t compile with the limited C API which cannot access PyThreadState.recursion_depth field (the structure is opaque in the limited C API). That's an enhancement for the limited C API, but PyThreadState is still exposed to the "cpython" C API (Include/cpython/). We should prepare the C API to make the PyThreadState structure opaque. It cannot be done at once, there are different consumers of the PyThreadState structure. In CPython code base, I found: * Py_TRASHCAN_BEGIN_CONDITION and Py_TRASHCAN_END macros access tstate->trash_delete_nesting field. Maybe we can hide these implementation details into new private function calls. * faulthandler.c: faulthandler_py_enable() reads tstate->interp. We should maybe provide a getter function. * _tracemalloc.c: traceback_get_frames() reads tstate->frame. We should maybe provide a getter function. * Private _Py_EnterRecursiveCall() and _Py_LeaveRecursiveCall() access tstate->recursion_depth. One solution is to move these functions to the internal C API. faulthandler and _tracemalloc are low-level debugging C extensions. Maybe it's ok for them to use the internal C API. But they are examples of C extensions accessing directly the PyThreadState structure. See also bpo-39946 "Is it time to remove _PyThreadState_GetFrame() hook?" about PyThreadState.frame. ---------- components: C API messages: 364034 nosy: vstinner priority: normal severity: normal status: open title: Move PyThreadState structure to the internal C API versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 17:44:12 2020 From: report at bugs.python.org (dgelessus) Date: Thu, 12 Mar 2020 21:44:12 +0000 Subject: [New-bugs-announce] [issue39948] Python 3.8 unconditionally uses functions not available on OS X 10.4 and 10.5 Message-ID: <1584049452.89.0.509835184184.issue39948@roundup.psfhosted.org> New submission from dgelessus : In particular, the implementation of posix._fcopyfile uses (available since OS X 10.5), and the implementation of threading.get_native_id uses pthread_threadid_np (available since OS X 10.6). This breaks builds for OS X 10.5 and older. I'm aware that the oldest officially supported OS X version is 10.6, but according to a python-dev post (https://mail.python.org/pipermail/python-dev/2018-May/153725.html), earlier versions are "supported on a best-effort basis". Would patches for these old versions still be accepted? I have the patch for this issue almost completely worked out, and it's not very complicated or intrusive - it mainly just adds standard autoconf checks for the functions/headers in question. 
---------- components: macOS messages: 364051 nosy: dgelessus, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Python 3.8 unconditionally uses functions not available on OS X 10.4 and 10.5 versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 17:49:27 2020 From: report at bugs.python.org (Seth Troisi) Date: Thu, 12 Mar 2020 21:49:27 +0000 Subject: [New-bugs-announce] [issue39949] truncating match in regular expression match objects repr Message-ID: <1584049767.16.0.188415292253.issue39949@roundup.psfhosted.org> New submission from Seth Troisi : Following on https://bugs.python.org/issue17087 Today I was mystified by why a regex wasn't working. >>> import re >>> re.match(r'.{10}', 'A'*49+'B') <_sre.SRE_Match object; span=(0, 10), match='AAAAAAAAAA'> >>> re.match(r'.{49}', 'A'*49+'B') <_sre.SRE_Match object; span=(0, 49), match='AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA> >>> re.match(r'.{50}', 'A'*49+'B') <_sre.SRE_Match object; span=(0, 50), match='AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA> I became confused on why the B wasn't matching in the third example; It is matching just in the interactive debugger it doesn't fit on the line and doesn't show My suggestion would be to truncate match (in the repr) and append '...' when it's right quote wouldn't show with short matches (or exactly enough space) there would be no change >>> re.match(r'.{48}', string.ascii_letters) <_sre.SRE_Match object; span=(0, 48), match='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUV'> when not all of match can be displayed >>> re.match(r'.{49}', string.ascii_letters) <_sre.SRE_Match object; span=(0, 49), match='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVW> <_sre.SRE_Match object; span=(0, 49), match='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRS'...> I'm happy to help out by writing tests or impl if folks thing this is a good idea. I couldn't think of other examples (urllib maybe?) in Python of how this is handled but I could potentially look for some if that would help ---------- components: Library (Lib) messages: 364052 nosy: Seth.Troisi, serhiy.storchaka priority: normal severity: normal status: open title: truncating match in regular expression match objects repr type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 20:50:05 2020 From: report at bugs.python.org (Barney Gale) Date: Fri, 13 Mar 2020 00:50:05 +0000 Subject: [New-bugs-announce] [issue39950] Add pathlib.Path.hardlink_to() Message-ID: <1584060605.38.0.95953512485.issue39950@roundup.psfhosted.org> New submission from Barney Gale : Per bpo-39291, the argument order for `pathlib.Path.link_to()` is inconsistent with `symlink_to()` and its own documentation. This ticket covers adding a new `hardlink_to()` method with the correct argument order, and deprecating `link_to()`. 
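For what it's worth, one existing precedent in the standard library for truncating long strings in a repr is the reprlib module, which elides the middle and keeps the total length bounded:

import reprlib

short = reprlib.Repr()
short.maxstring = 30            # cap the length of string reprs
print(short.repr('A' * 49 + 'B'))
# prints a ~30-character repr with the middle replaced by '...'

A similar middle-elision (or the trailing '...' suggested above) in the match object repr would avoid silently hiding the end of the match.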
Discussion on python-dev: https://mail.python.org/archives/list/python-dev at python.org/thread/7QPLYW36ZK6QTW4SV4FI6C343KYWCPAT/ ---------- components: Library (Lib) messages: 364060 nosy: barneygale priority: normal severity: normal status: open title: Add pathlib.Path.hardlink_to() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 23:40:01 2020 From: report at bugs.python.org (Dima Tisnek) Date: Fri, 13 Mar 2020 03:40:01 +0000 Subject: [New-bugs-announce] [issue39951] Ignore specific errors when closing ssl connections Message-ID: <1584070801.03.0.867475481059.issue39951@roundup.psfhosted.org> New submission from Dima Tisnek : When a connection wrapped in ssl is closed, sometimes the ssl library reports an error, which I believe should be ignored. The error code is `291` and the name of the error is either SSL_R_KRB5_S_INIT (KRB5_S_INIT) or SSL_R_APPLICATION_DATA_AFTER_CLOSE_NOTIFY, depending on the openssl header file. It's only one code; somehow `ssl.h` (depending on version?) has a different symbolic name for the error. TBH, I consider `KRB5_S_INIT` a misnomer, there's no Kerberos here. The explanation for openssl reporting this error is here: https://github.com/openssl/openssl/blob/6d53ad6b5cf726d92860e973d7bc8c1930762086/ssl/record/rec_layer_s3.c#L1657-L1668 > The peer is continuing to send application data, but we have > already sent close_notify. If this was expected we should have > been called via SSL_read() and this would have been handled > above. This situation is easily achieved because of network delays. Just because we sent "close notify" doesn't mean the other end has received it, and even if it did, there could still be return data in flight. Reproducer is here: https://gist.github.com/dimaqq/087c66dd7b4a85a669a00221dc3792ea ---------- components: Extension Modules, Library (Lib) messages: 364071 nosy: Dima.Tisnek priority: normal severity: normal status: open title: Ignore specific errors when closing ssl connections versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 12 23:46:39 2020 From: report at bugs.python.org (Lin Gao) Date: Fri, 13 Mar 2020 03:46:39 +0000 Subject: [New-bugs-announce] [issue39952] Using VS2019 to automatically build Python3 and it failed to build Message-ID: <1584071199.52.0.120678224037.issue39952@roundup.psfhosted.org> New submission from Lin Gao : We (the MSVC++ team) are trying to use VS2019 in place of VS2017 to automatically build Python 3 (branch 3.6) on Windows. The first build failed with error MSB8036: The Windows SDK version 10.0.10586.0 was not found. So we modified the build command line to 'build -e -r -v "/p:PlatformToolset=v142" "/p:WindowsTargetPlatformVersion=10.0.18362.0"'; error MSB8036 disappeared, but another error was triggered: 'C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(173,1): fatal error C1189: #error: "No Target Architecture"'. After investigation, this error is reported whenever the F:\gitP\python\cpython\Modules\_io\_iomodule.c file is compiled, and it disappears when we add #include to the _iomodule.c file. But when I rebuilt again, I encountered many errors. Does Python 3 currently support automated VS2019 builds? If it does, could you please help take a look? Thank you so much! Here are the repro steps: 1.
git clone -b "3.6" -c core.autocrlf=true https://github.com/python/cpython.git F:\gitP\python\cpython 2. Open a VS 2019 16.4.5 x86 command prompt and browse to F:\gitP\python\cpython 3. checkout the revision to f1f9c0c. 4. Add #include to the _iomodule.c. 5. cd F:\gitP\python\cpython\PCBuild 6. devenv /upgrade pcbuild.sln 7. build -e -r -v "/p:PlatformToolset=v142" "/p:WindowsTargetPlatformVersion=10.0.18362.0" Error: 19>LINK : fatal error LNK1181: cannot open input file 'Microsoft.VisualStudio.Setup.Configuration.Native.lib' 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(20779): error C2059: syntax error: 'constant' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(20791): error C2059: syntax error: '}' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(20792): error C2059: syntax error: '}' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(20793): error C2143: syntax error: missing '{' before '*' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(20801): error C2061: syntax error: identifier 'IMAGE_POLICY_ENTRY' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(20802): error C2059: syntax error: '}' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\winnt.h(20803): error C2143: syntax error: missing '{' before '*' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1032): error C2059: syntax error: 'constant' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1034): error C2059: syntax error: '}' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1151): error C2059: syntax error: 'constant' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1153): error C2059: syntax error: '}' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1032): error C2059: syntax error: 'constant' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1034): error C2059: syntax error: '}' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1151): error C2059: syntax error: 'constant' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um\processthreadsapi.h(1153): error C2059: syntax error: '}' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\HostX86\x86\cl.EXE"' : return code '0x2' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] 40>NMAKE : fatal error U1077: 'xcopy' : return code '0x4' [F:\gitP\python\cpython\PCbuild\tk.vcxproj] Stop. 
40>C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: The command "setlocal [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: if not exist "F:\gitP\python\cpython\externals\tcltk\bin\tk86t.dll" goto build [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: if not exist "F:\gitP\python\cpython\externals\tcltk\include\tk.h" goto build [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: if not exist "F:\gitP\python\cpython\externals\tcltk\lib\tk86t.lib" goto build [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: if not exist "F:\gitP\python\cpython\externals\tcltk\lib\tk8.6" goto build [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: goto :eof [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: :build [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: set VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\ [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: cd /D "F:\gitP\python\cpython\externals\tk-8.6.6.0\win" [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: nmake /nologo -f makefile.vc RC=rc MACHINE=IX86 OPTS=msvcrt BUILDDIRTOP="Release" TCLDIR="F:\gitP\python\cpython\externals\tcl-core-8.6.6.0" INSTALLDIR="F:\gitP\python\cpython\externals\tcltk" all [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: nmake /nologo -f makefile.vc RC=rc MACHINE=IX86 OPTS=msvcrt BUILDDIRTOP="Release" TCLDIR="F:\gitP\python\cpython\externals\tcl-core-8.6.6.0" INSTALLDIR="F:\gitP\python\cpython\externals\tcltk" install-binaries install-libraries [F:\gitP\python\cpython\PCbuild\tk.vcxproj] C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.MakeFile.Targets(44,5): error MSB3073: " exited with code 2. 
[F:\gitP\python\cpython\PCbuild\tk.vcxproj] ---------- components: Build files: build.log messages: 364072 nosy: Lin priority: normal severity: normal status: open title: Using VS2019 to automatically build Python3 and it failed to build type: compile error versions: Python 3.6 Added file: https://bugs.python.org/file48973/build.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 00:07:50 2020 From: report at bugs.python.org (Dima Tisnek) Date: Fri, 13 Mar 2020 04:07:50 +0000 Subject: [New-bugs-announce] [issue39953] Let's update ssl error codes Message-ID: <1584072470.83.0.437860249809.issue39953@roundup.psfhosted.org> New submission from Dima Tisnek : Let's consider ssl error `291` (https://bugs.python.org/issue39951): It was introduced into openssl 2 years ago: https://github.com/openssl/openssl/commit/358ffa05cd3a088822c7d06256bc87516d918798 The documentation states: SSL_R_APPLICATION_DATA_AFTER_CLOSE_NOTIFY:291:\ application data after close notify The `ssl.h` header file contains: # define SSL_R_APPLICATION_DATA_AFTER_CLOSE_NOTIFY 291 The master branch of openssl contains this definition too: https://github.com/openssl/openssl/blob/master/include/openssl/sslerr.h # define SSL_R_APPLICATION_DATA_AFTER_CLOSE_NOTIFY 291 But what does Python say? ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2629) What's KRB5? It supposedly stands for Kerberos5, and it too is seemingly present in openssl header file: /usr/local/Cellar/openssl/1.0.2s/include/openssl/ssl.h 2951:# define SSL_R_KRB5_S_INIT 291 Moreover, cpython source code contains a fallback, should this value not be defined: https://github.com/python/cpython/blob/master/Modules/_ssl_data.h #ifdef SSL_R_KRB5_S_INIT {"KRB5_S_INIT", ERR_LIB_SSL, SSL_R_KRB5_S_INIT}, #else {"KRB5_S_INIT", ERR_LIB_SSL, 291}, #endif Thus, today, Python reports an error with wrong *label* but correct *text*: [SSL: KRB5_S_INIT] application data after close notify The label and text don't match each other, because... well... I guess that's why we should fix it :) ---------- components: Extension Modules messages: 364074 nosy: Dima.Tisnek priority: normal severity: normal status: open title: Let's update ssl error codes versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 08:24:25 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 13 Mar 2020 12:24:25 +0000 Subject: [New-bugs-announce] [issue39954] test_subprocess: test_specific_shell() fails on AMD64 FreeBSD Shared 3.x Message-ID: <1584102265.92.0.572809355016.issue39954@roundup.psfhosted.org> New submission from STINNER Victor : It started to at build 385 or 386. 
AMD64 FreeBSD Shared 3.x: https://buildbot.python.org/all/#/builders/152/builds/391 ====================================================================== FAIL: test_specific_shell (test.test_subprocess.POSIXProcessTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_subprocess.py", line 2180, in test_specific_shell self.assertEqual(p.stdout.read().strip(), bytes(sh, 'ascii')) AssertionError: b'' != b'/usr/local/bin/bash' ---------- components: Tests messages: 364089 nosy: vstinner priority: normal severity: normal status: open title: test_subprocess: test_specific_shell() fails on AMD64 FreeBSD Shared 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 09:22:58 2020 From: report at bugs.python.org (Ying Zhang) Date: Fri, 13 Mar 2020 13:22:58 +0000 Subject: [New-bugs-announce] [issue39955] argparse print_help breaks when help is blank space Message-ID: <1584105778.75.0.565516989827.issue39955@roundup.psfhosted.org> New submission from Ying Zhang : Code is attached. Comments in line.

from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter

parser1 = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
parser1.add_argument('--foo', default='default_value_for_foo', required=False)
# this will not print the default value for foo. I think this is not the most natural choice,
# given that the user has asked for ArgumentDefaultsHelpFormatter, but acceptable since the
# user didn't define help here.
parser1.add_argument('--bar', help='', default='default_value_for_bar', required=False)
# this will not print the default value for bar. Again, acceptable but I feel not the most natural.
parser1.add_argument('--baz', help=' ', default='default_value_for_baz', required=False)
# this will print the default value for baz.
parser1.print_help()

parser2 = ArgumentParser()
parser2.add_argument('--baz', help=' ', default='default_value_for_baz', required=False)
# this will break, which surprises me.
parser2.print_help() ---------------- Result: python argparse_help_demo.py usage: argparse_help_demo.py [-h] [--foo FOO] [--bar BAR] [--baz BAZ] optional arguments: -h, --help show this help message and exit --foo FOO --bar BAR --baz BAZ (default: default_value_for_baz) Traceback (most recent call last): File "argparse_help_demo.py", line 21, in parser2.print_help() File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 2474, in print_help self._print_message(self.format_help(), file) File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 2458, in format_help return formatter.format_help() File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 284, in format_help help = self._root_section.format_help() File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 215, in format_help item_help = join([func(*args) for func, args in self.items]) File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 215, in item_help = join([func(*args) for func, args in self.items]) File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 215, in format_help item_help = join([func(*args) for func, args in self.items]) File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 215, in item_help = join([func(*args) for func, args in self.items]) File "/nfs/statbuild/zhangyi/conda_envs/net37_env0/lib/python3.7/argparse.py", line 527, in _format_action parts.append('%*s%s\n' % (indent_first, '', help_lines[0])) IndexError: list index out of range ---------- components: Library (Lib) messages: 364091 nosy: Ying Zhang priority: normal severity: normal status: open title: argparse print_help breaks when help is blank space type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 13:08:02 2020 From: report at bugs.python.org (zd nex) Date: Fri, 13 Mar 2020 17:08:02 +0000 Subject: [New-bugs-announce] [issue39956] Numeric Literals vs string "1_1" on input int() or float() or literal_eval Message-ID: <1584119282.92.0.936171127004.issue39956@roundup.psfhosted.org> New submission from zd nex : So currently if python code contains 1_1 it is handled as number 11. When user uses int("1_1") it also creates 11 and when ast.literal_eval is used it is also created instead of string. How can user get SyntaxError input on int or literal_eval with obviously wrong input (some keyboards have . next to _) like int(input()) in REPL? In python2.7 this was checked, but now even string is handled as number. Is there some reason? I understand reasoning behind PEP515, that int(1_1) can create 11, but why int("1_1") creates it also? Previously users used literal_eval for safe check of values, but now user can put 1_1 and it is transferred as number. Is there some plan to be able control behavior of these functions? I was now with some students, which used python2.7 and they find it also confusing. Most funny thing is that when they do same thing in JavaScript parseInt("1_1") they get 1, in old python this was error and now we give them 11. I would suggest that it would be possible to strictly check strings, as it was in old Python2.7. This way user would be able to use _ in code to arrange numbers, but it would also allow checks on wrong inputs of users which were meant something else, for example if you use it in try/except in console. 
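For what it's worth, until such an option exists, a user-side workaround for the console/input() case is a thin wrapper that rejects underscores before delegating to int(); a sketch (the helper name is made up):

def strict_int(text):
    # Since PEP 515, int("1_1") happily returns 11; treat an underscore in
    # user input as a typo and refuse it instead.
    if "_" in text:
        raise ValueError(f"invalid literal for strict_int(): {text!r}")
    return int(text)

strict_int("11")   # 11
strict_int("1_1")  # raises ValueError instead of returning 11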
---------- messages: 364112 nosy: zd nex priority: normal severity: normal status: open title: Numeric Literals vs string "1_1" on input int() or float() or literal_eval type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 15:24:43 2020 From: report at bugs.python.org (Jens Reidel) Date: Fri, 13 Mar 2020 19:24:43 +0000 Subject: [New-bugs-announce] [issue39957] bpo39775 not fixed - inspect.Signature.parameters still dict/mappingproxy Message-ID: <1584127483.7.0.559336521807.issue39957@roundup.psfhosted.org> New submission from Jens Reidel : Hi guys, compiling CPython from the master branch gives a git history that includes commit https://github.com/python/cpython/commit/211055176157545ce98e6c02b09d624719e6dd30 and its change to Lib/inspect.py; however, the return type is still the same as before and the behaviour has not changed.

Python 3.9.0a4+ (heads/master:be79373a78, Mar 11 2020, 16:36:27) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> inspect.signature(lambda x, y: None).parameters == inspect.signature(lambda y, x: None).parameters
True
>>>

I have been able to confirm this on all builds I've done. To get the expected behaviour and have the above code return False, I need to patch back the commit that changed OrderedDict to dict (https://raw.githubusercontent.com/Gelbpunkt/python-image/master/inspect.patch is the file I am using to patch). I have compiled against the codebase of https://github.com/python/cpython/commit/be79373a78c0d75fc715ab64253c9b757987a848 and believe this is some internal issue in the Lib/inspect.py code, given that the patch file can fix it. ---------- components: Library (Lib) messages: 364118 nosy: gelbpunkt priority: normal severity: normal status: open title: bpo39775 not fixed - inspect.Signature.parameters still dict/mappingproxy type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 15:27:43 2020 From: report at bugs.python.org (Atle Solbakken) Date: Fri, 13 Mar 2020 19:27:43 +0000 Subject: [New-bugs-announce] [issue39958] Deadlock in _PyInterpreterState_DeleteExceptMain with HEAD_LOCK(runtime) Message-ID: <1584127663.81.0.927144728358.issue39958@roundup.psfhosted.org> New submission from Atle Solbakken : _PyInterpreterState_DeleteExceptMain() acquires the lock with HEAD_LOCK(runtime). With the lock still held, if an interpreter state is to be cleared, _PyInterpreterState_Clear() is called, which calls HEAD_LOCK(runtime) again and causes a deadlock. The backtrace is from 3.8.2-1. It looks like this is also present in Python-3.8.2rc2 and Python-3.9.0a4.
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x555555619cc0) at ../sysdeps/unix/sysv/linux/futex-internal.h:208 #1 do_futex_wait (sem=sem at entry=0x555555619cc0, abstime=0x0, clockid=0) at sem_waitcommon.c:112 #2 0x00007ffff7f08488 in __new_sem_wait_slow (sem=sem at entry=0x555555619cc0, abstime=0x0, clockid=0) at sem_waitcommon.c:184 #3 0x00007ffff7f08501 in __new_sem_wait (sem=sem at entry=0x555555619cc0) at sem_wait.c:42 #4 0x00007ffff6ee6cf3 in PyThread_acquire_lock_timed (lock=0x555555619cc0, microseconds=, intr_flag=0) at ../Python/thread_pthread.h:471 #5 0x00007ffff6ef6789 in _PyInterpreterState_Clear (interp=0x555555630310, runtime=, runtime=) at ../Python/pystate.c:261 #6 0x00007ffff6ef8018 in _PyInterpreterState_DeleteExceptMain (runtime=0x7ffff7297a80 <_PyRuntime>) at ../Python/pystate.c:380 #7 0x00007ffff6e49912 in PyOS_AfterFork_Child () at ../Modules/posixmodule.c:472 #8 0x00007ffff7f9681c in __fork_main_tstate_callback (arg=) at python3.c:626 #9 0x00007ffff7f989d1 in rrr_py_with_global_tstate_do (callback=0x7ffff7f96800 <__fork_main_tstate_callback>, arg=0x0) at python3.c:1222 #10 0x00007ffff7f8b563 in rrr_socket_with_lock_do (callback=callback at entry=0x7ffff7f98a00 <__fork_callback>, arg=arg at entry=0x0) at rrr_socket.c:169 #11 0x00007ffff7f96d07 in __rrr_py_fork_intermediate (function=function at entry=0x7ffff632a700, fork_data=fork_data at entry=0x555555639e40, child_method=child_method at entry=0x7ffff7f965f0 <__rrr_py_start_persistent_thread_rw_child>) at python3.c:653 #12 0x00007ffff7f97075 in __rrr_py_start_persistent_rw_thread_intermediate (function=function at entry=0x7ffff632a700, fork=fork at entry=0x555555639e40) at python3.c:749 #13 0x00007ffff7f972bd in __rrr_py_start_thread (result_fork=0x5555555d2dd8, rrr_objects=0x5555555d2de8, module_name=0x5555556a70e0 "testing", function_name=0x5555555fb6d0 "process", start_method=0x7ffff7f97040 <__rrr_py_start_persistent_rw_thread_intermediate>) at python3.c:855 #14 0x00007ffff65dc3a0 in ?? () #15 0x0000000000000000 in ?? () ---------- components: C API messages: 364120 nosy: Atle Solbakken priority: normal severity: normal status: open title: Deadlock in _PyInterpreterState_DeleteExceptMain with HEAD_LOCK(runtime) type: crash versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 15:43:03 2020 From: report at bugs.python.org (Diogo Flores) Date: Fri, 13 Mar 2020 19:43:03 +0000 Subject: [New-bugs-announce] [issue39959] (Possible) bug on multiprocessing.shared_memory Message-ID: <1584128583.89.0.486447704556.issue39959@roundup.psfhosted.org> New submission from Diogo Flores : Hello, I came across with what seems like a bug (or at least a disagreement with the current documentation). Discussion: I expected that after creating a numpy-array (on terminal 1), which is backed by shared memory, I would be able to use it in other terminals until I would call `shm.unlink()` (on terminal 1), at which point, the memory block would be released and no longer accessible. What happened is that after accessing the numpy-array from terminal 2, I called 'close()' on the local 'existing_shm' instance and exited the interpreter, which displayed the `warning` seen below. After, I tried to access the same shared memory block from terminal 3, and a FileNotFoundError was raised. 
(The same error was also raised when I tried to call 'shm.unlink()' on terminal 1, after calling 'close()' on terminal 2.) It seems that calling `close()` on an instance, destroys further access to the shared memory block from any point, while what I expected was to be able to access the array (i.e. on terminal 2), modify it, "close" my access to it, and after be able to access the modified array on i.e. terminal 3. If the error is on my side I apologize for raising this issue and I would appreciate for clarification on what I am doing wrong. Thank you. Diogo Please check below for the commands issued: ## Terminal 1 >>> from multiprocessing import shared_memory >>> import numpy as np >>> >>> a = np.array([x for x in range(10)]) >>> shm = shared_memory.SharedMemory(create=True, size=a.nbytes) >>> b = np.ndarray(a.shape, dtype=a.dtype, buffer=shm.buf) >>> b[:] = a[:] >>> >>> shm.name 'psm_592ec635' ## Terminal 2 >>> from multiprocessing import shared_memory >>> import numpy as np >>> >>> existing_shm = shared_memory.SharedMemory('psm_592ec635') >>> c = np.ndarray((10,), dtype=np.int64, buffer=existing_shm.buf) >>> >>> c array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> >>> del c >>> existing_shm.close() >>> >>> exit() ~: /usr/lib/python3.8/multiprocessing/resource_tracker.py:203: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' ## Finally, on terminal 3 >>> from multiprocessing import shared_memory >>> import numpy as np >>> >>> existing_shm = shared_memory.SharedMemory('psm_592ec635') Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/multiprocessing/shared_memory.py", line 100, in __init__ self._fd = _posixshmem.shm_open( FileNotFoundError: [Errno 2] No such file or directory: '/psm_592ec635' ---------- components: Library (Lib) messages: 364121 nosy: dxflores priority: normal severity: normal status: open title: (Possible) bug on multiprocessing.shared_memory versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 13 18:31:04 2020 From: report at bugs.python.org (Matthias Braun) Date: Fri, 13 Mar 2020 22:31:04 +0000 Subject: [New-bugs-announce] [issue39960] Using typename.__setattr__ in extension type with Py_TPFLAGS_HEAPTYPE is broken (hackcheck too eager?) Message-ID: <1584138664.49.0.0727456636956.issue39960@roundup.psfhosted.org> New submission from Matthias Braun : This is about an extension type created via `PyType_FromSpec` that overrides `tp_setattro` (minimal example attached). In this case cpython does not let me grab and use the `__setattr__` function "manually". Example: ``` >>> import demo >>> mytype_setattr = demo.MyType.__setattr__ >>> i = demo.MyType() >>> mytype_setattr(i, "foo", "bar") Traceback (most recent call last): File "", line 1, in TypeError: can't apply this __setattr__ to object object ``` I suspect this is related to the loop at the beginning of typeobject.c / hackcheck() that skips over types with Py_TPFLAGS_HEAPOBJECT. (Though removing the loop breaks things like the enum module). ---------- components: C API files: demomodule.zip messages: 364123 nosy: Matthias Braun priority: normal severity: normal status: open title: Using typename.__setattr__ in extension type with Py_TPFLAGS_HEAPTYPE is broken (hackcheck too eager?) 
versions: Python 3.7
Added file: https://bugs.python.org/file48974/demomodule.zip

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Mar 13 19:09:18 2020
From: report at bugs.python.org (Clem Wang)
Date: Fri, 13 Mar 2020 23:09:18 +0000
Subject: [New-bugs-announce] [issue39961] warning: this use of "defined" may not be portable (Mac OS)
Message-ID: <1584140958.39.0.802157581006.issue39961@roundup.psfhosted.org>

New submission from Clem Wang :

pyenv install 3.8.2 results in:

BUILD FAILED (OS X 10.15.3 using python-build 20180424)

Inspect or clean up the working tree at /var/folders/jy/10md97xn3mz_x_b42l1r2r8c0000gp/T/python-build.20200313154805.37448
Results logged to /var/folders/jy/10md97xn3mz_x_b42l1r2r8c0000gp/T/python-build.20200313154805.37448.log

Last 10 log lines:
  331 | #if !_PTHREAD_SWIFT_IMPORTER_NULLABILITY_COMPAT
      |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/pthread.h:331:6: warning: this use of "defined" may not be portable [-Wexpansion-to-defined]
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/pthread.h:540:6: warning: this use of "defined" may not be portable [-Wexpansion-to-defined]
  540 | #if !_PTHREAD_SWIFT_IMPORTER_NULLABILITY_COMPAT
      |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/pthread.h:540:6: warning: this use of "defined" may not be portable [-Wexpansion-to-defined]
cc1: some warnings being treated as errors
make: *** [Objects/floatobject.o] Error 1
make: *** Waiting for unfinished jobs....

The real problem is on line 199 of /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/pthread.h

/* */
#define _PTHREAD_SWIFT_IMPORTER_NULLABILITY_COMPAT \
        defined(SWIFT_CLASS_EXTRA) && (!defined(SWIFT_SDK_OVERLAY_PTHREAD_EPOCH) || (SWIFT_SDK_OVERLAY_PTHREAD_EPOCH < 1))

I'm not sure if this is a problem for Apple to fix or whether the Python build needs to be more tolerant of warnings.

----------
components: macOS
messages: 364125
nosy: clem.wang, ned.deily, ronaldoussoren
priority: normal
severity: normal
status: open
title: warning: this use of "defined" may not be portable (Mac OS)
type: compile error
versions: Python 3.7, Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Mar 13 19:39:01 2020
From: report at bugs.python.org (Cezary Wagner)
Date: Fri, 13 Mar 2020 23:39:01 +0000
Subject: [New-bugs-announce] [issue39962] Wrong tell function results.
Message-ID: <1584142741.89.0.229638402821.issue39962@roundup.psfhosted.org>

New submission from Cezary Wagner :

I wrote code which scans a very large PGN file (a chess games database). But I found that the tell() function is buggy; see the results below.
Here is some code:

with open('../s01_parser_eval/data/out-6976.txt') as pgn:
    is_game_parsed = parser.parse_game(visitor=visitor)
    # if processing_statistics.games % 100 == 0:
    print(processing_statistics.games, processing_statistics.positions,
          processing_statistics.moves,
          '%.2f' % processing_statistics.get_games_to_moves(),
          '%.2f' % processing_statistics.get_positions_to_moves(),
          '%.2f' % speed if speed else speed,
          pgn.tell())
    print(pgn.tell())

This code can be simplified to this:

with open('../s01_parser_eval/data/out-6976.txt') as pgn:
    while True:
        pgn.readline()
        print(pgn.tell())

1 1 0 0.00 0.00 318.64 1008917597
1008917597
2 47 46 23.00 1.02 343.64 1008917599
1008917599
3 47 46 15.33 1.02 291.08 1008920549
1008920549
4 107 107 26.75 1.00 292.03 1008920551
1008920551
5 107 107 21.40 1.00 185.41 18446744074718477807 <- ???
18446744074718477807
6 234 235 39.17 1.00 157.63 1008926192
1008926192
7 234 235 33.57 1.00 167.75 1008928371
1008928371
8 276 278 34.75 0.99 180.48 1008928373
1008928373
9 276 278 30.89 0.99 185.30 1008931145
1008931145
10 334 336 33.60 0.99 192.58 1008931147
1008931147
11 334 336 30.55 0.99 164.90 1008937220
1008937220
12 468 472 39.33 0.99 149.00 1008937222
1008937222
13 468 472 36.31 0.99 157.58 1008938833
1008938833
14 495 502 35.86 0.99 165.96 1008938835
1008938835
15 495 502 33.47 0.99 167.89 1008941875
1008941875
16 556 567 35.44 0.98 172.10 1008941877
1008941877
17 556 567 33.35 0.98 177.84 1008943769
1008943769
18 591 604 33.56 0.98 184.09 1008943771
1008943771
19 591 604 31.79 0.98 185.38 1008946692
1008946692
20 653 666 33.30 0.98 188.68 1008946694
1008946694
21 653 666 31.71 0.98 192.90 18446744074718500485 <- ???
18446744074718500485

----------
messages: 364126
nosy: Cezary.Wagner
priority: normal
severity: normal
status: open
title: Wrong tell function results.
type: crash
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Mar 14 01:35:00 2020
From: report at bugs.python.org (Mario de OOurtzborun)
Date: Sat, 14 Mar 2020 05:35:00 +0000
Subject: [New-bugs-announce] [issue39963] Subclassing slice objects
Message-ID: <1584164100.31.0.19477580242.issue39963@roundup.psfhosted.org>

New submission from Mario de OOurtzborun :

Is there any reason why slice objects aren't subclassable? I want a mutable slice object, but there is no way to create one that will work with lists or tuples. And the __index__ method is required to return an int. I want to prepare a git merge request about this issue if there is no specific reason to forbid slice becoming a base class.

----------
messages: 364143
nosy: mariode1
priority: normal
severity: normal
status: open
title: Subclassing slice objects

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Mar 14 23:19:20 2020
From: report at bugs.python.org (Richard King)
Date: Sun, 15 Mar 2020 03:19:20 +0000
Subject: [New-bugs-announce] [issue39964] adding a string to a list works differently with x+='' compared to x=x+''
Message-ID: <1584242360.88.0.483321661554.issue39964@roundup.psfhosted.org>

New submission from Richard King :

x = ['a']

x += ' '

results in ['a', ' ']

x = x + ' '

results in an exception:

Traceback (most recent call last):
  File "", line 1, in
TypeError: can only concatenate list (not "str") to list

It behaves the same in 2.7.15 and 3.7.2.
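
For reference, a minimal sketch of why the two spellings differ: "x += y" calls list.__iadd__, which behaves like list.extend(y) and therefore accepts any iterable (a string is iterated character by character), while "x = x + y" calls list.__add__, which only accepts another list.

    x = ['a']
    x += ' '              # like x.extend(' '): any iterable is accepted
    assert x == ['a', ' ']

    y = ['a']
    try:
        y = y + ' '       # list.__add__ requires another list
    except TypeError as exc:
        print(exc)        # can only concatenate list (not "str") to list
    y = y + list(' ')     # explicit conversion gives the same result as +=
    assert y == ['a', ' ']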
----------
components: Windows
messages: 364213
nosy: paul.moore, rickbking, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: adding a string to a list works differently with x+='' compared to x=x+''
type: behavior
versions: Python 2.7, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sat Mar 14 23:50:53 2020
From: report at bugs.python.org (Pablo Galindo Salgado)
Date: Sun, 15 Mar 2020 03:50:53 +0000
Subject: [New-bugs-announce] [issue39965] await is valid in non async functions if PyCF_ALLOW_TOP_LEVEL_AWAIT is set
Message-ID: <1584244253.08.0.00539812271843.issue39965@roundup.psfhosted.org>

New submission from Pablo Galindo Salgado :

If PyCF_ALLOW_TOP_LEVEL_AWAIT is set, this code is valid:

def f():
    await foo

And this should raise a "SyntaxError: 'await' outside async function". The reason is that PyCF_ALLOW_TOP_LEVEL_AWAIT is global in the compiler and affects everything, without checking whether the code currently being compiled is actually at the top level or not.

----------
components: asyncio
messages: 364216
nosy: asvetlov, pablogsal, yselivanov
priority: normal
severity: normal
status: open
title: await is valid in non async functions if PyCF_ALLOW_TOP_LEVEL_AWAIT is set
type: behavior
versions: Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Mar 15 02:39:44 2020
From: report at bugs.python.org (Avram)
Date: Sun, 15 Mar 2020 06:39:44 +0000
Subject: [New-bugs-announce] [issue39966] mock 3.9 bug: Wrapped objects without __bool__ raise exception
Message-ID: <1584254384.28.0.0187140133308.issue39966@roundup.psfhosted.org>

New submission from Avram :

This bug was introduced with Issue25597.

Here's some code that demonstrates the error:

import sys
from unittest.mock import patch

with patch.object(sys, 'stdout', wraps=sys.stdout) as mockstdout:
    bool(sys.stdout)

This works fine in 3.8 and earlier, but fails in 3.9.

It seems the goal was to be able to access dunder methods for wrapped objects. Before this change __bool__ wasn't actually being checked, but was forced to True, which works for basic existence tests. The new code

    method._mock_wraps = getattr(mock._mock_wraps, name)

has no fallthrough in case the attribute isn't there, as is the case with __bool__ on sys.stdout.

----------
components: Library (Lib)
messages: 364222
nosy: Darragh Bailey, anthonypjshaw, aviso, cjw296, lisroach, mariocj89, michael.foord, pconnell, r.david.murray, rbcollins, xtreak
priority: normal
severity: normal
status: open
title: mock 3.9 bug: Wrapped objects without __bool__ raise exception
type: behavior
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Mar 15 05:27:24 2020
From: report at bugs.python.org (daniel hahler)
Date: Sun, 15 Mar 2020 09:27:24 +0000
Subject: [New-bugs-announce] [issue39967] bdb calls linecache.checkcache, resulting in source being different from code
Message-ID: <1584264444.94.0.414903908366.issue39967@roundup.psfhosted.org>

New submission from daniel hahler :

`Bdb.reset` calls `linecache.checkcache`, which will clear the cache for any updated source files. This however might result in the displayed source code being different from the actual running code, in case you are editing the file currently being debugged.
I think it is better to keep the initially cached version (which might still get invalidated/checked via inspect itself), but that is another issue.

The code is very old already, merged in b6775db241:

commit b6775db241
Author: Guido van Rossum
Date:   Mon Aug 1 11:34:53 1994 +0000

    Merge alpha100 branch back to main trunk

I will try a PR that removes it to see if it causes any test failures.

Code ref: https://github.com/python/cpython/blob/598d29c51c7b5a77f71eed0f615eb0b3865a4085/Lib/bdb.py#L56-L57

----------
components: Library (Lib)
messages: 364224
nosy: blueyed
priority: normal
severity: normal
status: open
title: bdb calls linecache.checkcache, resulting in source being different from code
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Mar 15 09:42:06 2020
From: report at bugs.python.org (hai shi)
Date: Sun, 15 Mar 2020 13:42:06 +0000
Subject: [New-bugs-announce] [issue39968] move extension modules' macros of `get_xx_state()` to inline function.
Message-ID: <1584279726.3.0.17378093364.issue39968@roundup.psfhosted.org>

New submission from hai shi :

As victor and petr said in PR18613, an inline function is better than a macro, so I plan to move all extension modules' `get_xx_state()` macros to inline functions.

Note: some inline get_xx_state() functions can not be used directly in `tp_traverse`, `tp_free` and `tp_clear` (issue39824).

----------
components: Extension Modules
messages: 364231
nosy: petr.viktorin, shihai1991, vstinner
priority: normal
severity: normal
status: open
title: move extension modules' macros of `get_xx_state()` to inline function.
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Mar 15 10:56:22 2020
From: report at bugs.python.org (Batuhan Taskaya)
Date: Sun, 15 Mar 2020 14:56:22 +0000
Subject: [New-bugs-announce] [issue39969] Remove Param expression context from AST
Message-ID: <1584284182.17.0.0536293535661.issue39969@roundup.psfhosted.org>

New submission from Batuhan Taskaya :

Param is an expression context that is no longer in use; we can simply remove it. This node predates the arguments node and, if I am not mistaken, was used inside function signatures.

----------
components: Library (Lib)
messages: 364238
nosy: BTaskaya
priority: normal
severity: normal
status: open
title: Remove Param expression context from AST
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Mar 15 11:01:14 2020
From: report at bugs.python.org (Yi Luan)
Date: Sun, 15 Mar 2020 15:01:14 +0000
Subject: [New-bugs-announce] [issue39970] Combined behavior of datetime.datetime.timestamp() and datetime.datetime.utcnow() on non-UTC timezoned machines
Message-ID: <1584284474.21.0.996854093569.issue39970@roundup.psfhosted.org>

New submission from Yi Luan :

Hello,

Apologies if this is a duplicate issue.

I guess the most concise way of saying this is that when doing:

>>> datetime.datetime.utcnow().timestamp()

on a machine whose local time isn't UTC, the above code will not return the correct timestamp.
Because datetime.datetime.timestamp() and datetime.datetime.fromtimestamp() intrinsically convert the timestamp based on the local time of the running machine, when fed with data that has already been converted to UTC these functions will double-convert it and hence return an incorrect result.

For example:

On a machine that is in CST time:

>>> dt = datetime.datetime.utcnow()
>>> dt
datetime.datetime(2020, 3, 15, 14, 33, 10, 213664)
>>> datetime.datetime.fromtimestamp(dt.timestamp(), datetime.timezone.utc)
datetime.datetime(2020, 3, 15, 6, 33, 10, 213664)

Meanwhile, on a machine that is in UTC time:

>>> dt = datetime.datetime.utcnow()
>>> dt
datetime.datetime(2020, 3, 15, 14, 41, 2, 203275)
>>> datetime.datetime.fromtimestamp(dt.timestamp(), datetime.timezone.utc)
datetime.datetime(2020, 3, 15, 14, 41, 2, 203275)

I understand that one should probably use datetime.datetime.fromtimestamp() to construct the time, but the output of the above code is inconsistent on machines that are set to different timezones. The above code explicitly asked to get the UTC time now, take its timestamp, then convert from a UTC timestamp back to a datetime object. The result should be the same, but on the first machine it wasn't.

From my point of view, timestamp() should not shift the datetime object, since it is called on an object that is naive about tzinfo anyway. Timestamp data generated by Python should be correct, and code should do what the programmer asked it to do. In the above example, datetime.datetime.utcnow().timestamp() should return the timestamp of now in UTC, but in fact on a machine in CST time it returns a timestamp 8 hours before the UTC timestamp of now.

The intrinsic behavior of timestamp() will cause ambiguity in code; therefore I suggest that timestamp(), unless used on a tz-aware object, should not shift the datetime based on the running machine's local time.

----------
components: Library (Lib)
messages: 364240
nosy: Yi Luan
priority: normal
severity: normal
status: open
title: Combined behavior of datetime.datetime.timestamp() and datetime.datetime.utcnow() on non-UTC timezoned machines
type: behavior
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Sun Mar 15 12:39:23 2020
From: report at bugs.python.org (Diogo Flores)
Date: Sun, 15 Mar 2020 16:39:23 +0000
Subject: [New-bugs-announce] [issue39971] Error on documentation - Quick fix.
Message-ID: <1584290363.88.0.212644969816.issue39971@roundup.psfhosted.org>

New submission from Diogo Flores :

Hello,

A code example from the 'functional programming how-to' raises an error. The simplest fix would be to remove the Ellipsis object from the 'line_list'.

Thank you,
Diogo

Please check below for the commands issued:

# https://docs.python.org/3/howto/functional.html#generator-expressions-and-list-comprehensions

>>> line_list = [' line 1\n', 'line 2 \n', ...]
>>>
>>> # Generator expression -- returns iterator
>>> stripped_iter = (line.strip() for line in line_list)
>>>
>>> # List comprehension -- returns list
>>> stripped_list = [line.strip() for line in line_list]
Traceback (most recent call last):
  File "", line 1, in
  File "", line 1, in
AttributeError: 'ellipsis' object has no attribute 'strip'

----------
assignee: docs at python
components: Documentation
messages: 364247
nosy: docs at python, dxflores
priority: normal
severity: normal
status: open
title: Error on documentation - Quick fix.
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 00:42:58 2020
From: report at bugs.python.org (Ion SKALAMERA)
Date: Mon, 16 Mar 2020 04:42:58 +0000
Subject: [New-bugs-announce] [issue39972] Math library Bug Return None for "degrees(0)"
Message-ID: <1584333778.69.0.393057197826.issue39972@roundup.psfhosted.org>

New submission from Ion SKALAMERA :

I tried programming a recursive fractal with Python Turtle on repl.it:

https://repl.it/@IonSKALAMERA/simplefractalrecursive

and when my code came to calculating the angle, degrees(0) returned NoneType, which is a serious bug in the math library. I don't know which Python version it runs.

----------
messages: 364285
nosy: Ion SKALAMERA
priority: normal
severity: normal
status: open
title: Math library Bug Return None for "degrees(0)"
type: behavior

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 01:23:17 2020
From: report at bugs.python.org (Zackery Spytz)
Date: Mon, 16 Mar 2020 05:23:17 +0000
Subject: [New-bugs-announce] [issue39973] The documentation for PyObject_GenericSetDict() is incorrect
Message-ID: <1584336197.86.0.863686228864.issue39973@roundup.psfhosted.org>

New submission from Zackery Spytz :

PyObject_GenericSetDict() takes three arguments, but the documentation states that it takes just two.

----------
assignee: docs at python
components: Documentation
messages: 364287
nosy: ZackerySpytz, docs at python
priority: normal
severity: normal
status: open
title: The documentation for PyObject_GenericSetDict() is incorrect
versions: Python 3.7, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 02:02:00 2020
From: report at bugs.python.org (tzickel)
Date: Mon, 16 Mar 2020 06:02:00 +0000
Subject: [New-bugs-announce] [issue39974] A race condition with GIL releasing exists in stringlib_bytes_join
Message-ID: <1584338520.98.0.250313296967.issue39974@roundup.psfhosted.org>

New submission from tzickel :

bpo-36051 added an optimization that releases the GIL under certain conditions when joining bytes, but it misses a critical path.

If the number of items being joined is less than or equal to NB_STATIC_BUFFERS (10), then static_buffers will be used to hold the buffers:

https://github.com/python/cpython/blob/5b66ec166b81c8a77286da2c0d17be3579c3069a/Objects/stringlib/join.h#L54

But the decision to release the GIL or not (drop_gil) does not take this into consideration, so the GIL might be released while the static buffers are in use; another thread is then free to enter the same code path and hijack the static_buffers for its own use, causing a race condition.

A decision should be made whether it is worth having the optimization skip the static buffers in this case (although that happens early in the code...) or never drop the GIL when the static buffers are used (which might make the optimization not worthwhile, since the GIL decision is based on the length of the data to join, not on the number of items).
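
A hedged sketch of the kind of workload described above: several threads concurrently joining a small number of large bytes objects, so the item count stays at or below the static-buffer limit while the total size is large enough that the GIL-release heuristic may apply. Whether this actually triggers the race depends on the build and on the size threshold used to drop the GIL; it only illustrates the scenario and is not a guaranteed reproducer.

    import threading

    ITEMS = [b"x" * (1 << 20) for _ in range(8)]   # 8 items, i.e. <= NB_STATIC_BUFFERS
    EXPECTED = b"".join(ITEMS)

    def worker(results, i):
        # Repeatedly join and compare against the known-good result.
        results[i] = all(b"".join(ITEMS) == EXPECTED for _ in range(100))

    results = [None] * 4
    threads = [threading.Thread(target=worker, args=(results, i)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)   # any False (or a crash) would indicate corruption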
----------
messages: 364288
nosy: tzickel
priority: normal
severity: normal
status: open
title: A race condition with GIL releasing exists in stringlib_bytes_join
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 03:18:39 2020
From: report at bugs.python.org (Noel del rosario)
Date: Mon, 16 Mar 2020 07:18:39 +0000
Subject: [New-bugs-announce] [issue39975] Group of commands running in Python 3.7.6 Shell, but failing as Script file.
Message-ID: <1584343119.59.0.433664804662.issue39975@roundup.psfhosted.org>

New submission from Noel del rosario :

Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license()" for more information.

from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
2.1.0

BUT IF I RUN THESE COMMANDS as a PYTHON SCRIPT FILE it FAILS.

======================== RESTART: D:\PythonCode-1\tmp.py =======================
Traceback (most recent call last):
  File "D:\PythonCode-1\tmp.py", line 7, in
    import tensorflow as tf
  File "C:\Python37\lib\site-packages\tensorflow\__init__.py", line 101, in
    from tensorflow_core import *
  File "C:\Python37\lib\site-packages\tensorflow_core\__init__.py", line 40, in
    from tensorflow.python.tools import module_util as _module_util
ModuleNotFoundError: No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package

Why is it failing as a script file? Is there something wrong in my procedure?

Hope to receive your reply, and thanks in advance.

----------
messages: 364293
nosy: rosarion
priority: normal
severity: normal
status: open
title: Group of commands running in Python 3.7.6 Shell, but failing as Script file.
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 03:52:43 2020
From: report at bugs.python.org (Nick Coghlan)
Date: Mon, 16 Mar 2020 07:52:43 +0000
Subject: [New-bugs-announce] [issue39976] Add "**other_popen_kwargs" to subprocess API signatures in docs
Message-ID: <1584345163.73.0.77779853177.issue39976@roundup.psfhosted.org>

New submission from Nick Coghlan :

Two of my colleagues missed the "The arguments shown above are merely the most common ones, ..." caveat on the subprocess.run documentation, and assumed that Python 3.5 only supported the "cwd" option in the low-level Popen API, and not in any of the higher-level APIs.

Something we could potentially do is include a "**other_popen_kwargs" placeholder in the affected API signatures (run, call, check_call, check_output) that makes it clear there are more options beyond the explicitly listed ones.
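
For example, the higher-level helpers already forward extra keyword arguments such as cwd straight through to Popen, which is exactly what the placeholder would advertise; a small sketch (the command and the "/tmp" path are only illustrative):

    import subprocess
    import sys

    # cwd is not part of the abbreviated run() signature shown in the docs,
    # but it is forwarded to Popen like the other Popen keyword arguments.
    result = subprocess.run(
        [sys.executable, "-c", "import os; print(os.getcwd())"],
        capture_output=True, text=True, cwd="/tmp",
    )
    print(result.stdout)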
----------
assignee: docs at python
components: Documentation
messages: 364295
nosy: docs at python, ncoghlan
priority: normal
severity: normal
stage: needs patch
status: open
title: Add "**other_popen_kwargs" to subprocess API signatures in docs
type: enhancement
versions: Python 3.7, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 06:26:37 2020
From: report at bugs.python.org (foldr)
Date: Mon, 16 Mar 2020 10:26:37 +0000
Subject: [New-bugs-announce] [issue39977] Python aborts trying to load libcrypto.dylib
Message-ID: <1584354397.7.0.17560175577.issue39977@roundup.psfhosted.org>

New submission from foldr :

Good morning.

I recently updated my system to macOS Catalina and Python crashes if it tries to load libcrypto.dylib. I have attached the crash report generated by macOS.

Steps to reproduce:

Calling a binary that loads the library causes the crash:

$ luigi
[1]    70375 abort      luigi
$ dex2call
[1]    70451 abort      dex2call

Loading the library from the interpreter triggers the crash too:

$ python
Python 3.7.6 (default, Dec 30 2019, 19:38:28)
[Clang 11.0.0 (clang-1100.0.33.16)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dex2call
[1]    70536 abort      python

I have tested with https://pypi.org/project/luigi/ and https://pypi.org/project/dex2call/.

Invoking python without any script, or with (I suppose) a script that does not require libcrypto, works fine:

$ python
Python 3.7.6 (default, Dec 30 2019, 19:38:28)
[Clang 11.0.0 (clang-1100.0.33.16)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
$ python ~/scripts/yt2nb.py
Traceback (most recent call last):
  File "/Users/foldr/scripts/yt2nb.py", line 6, in
    e = xml.etree.ElementTree.parse(sys.argv[1]).getroot()
IndexError: list index out of range

The content of yt2nb.py is:

#!/usr/bin/env python3
import xml.etree.ElementTree
import sys

e = xml.etree.ElementTree.parse(sys.argv[1]).getroot()
for outline in e.iter('outline'):
    if "type" in outline.attrib and outline.attrib["type"] == "rss":
        url = outline.attrib['xmlUrl']
        name = outline.attrib['title']#.encode("utf-8")
        print("%s youtube \"~%s\"" % (url, str(name)))

I have Python installed from brew and the crash report has all the relevant version numbers. Let me know if you need more information for testing or reproducibility.

Thank you.
Daniel.

----------
components: macOS
files: python-crash.txt
messages: 364306
nosy: foldr, ned.deily, ronaldoussoren
priority: normal
severity: normal
status: open
title: Python aborts trying to load libcrypto.dylib
type: crash
versions: Python 3.7
Added file: https://bugs.python.org/file48976/python-crash.txt

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 10:09:28 2020
From: report at bugs.python.org (Mark Shannon)
Date: Mon, 16 Mar 2020 14:09:28 +0000
Subject: [New-bugs-announce] [issue39978] Vectorcall implementation should conform to PEP 590.
Message-ID: <1584367768.43.0.96665023225.issue39978@roundup.psfhosted.org>

New submission from Mark Shannon :

The implementation of `PyObject_Vectorcall` adds unnecessary overhead to PEP 590, which undermines its purpose.

The implementation was changed in https://github.com/python/cpython/pull/17052, which I objected to at the time. The change has a negative impact on performance as it adds calls to both `PyThreadState_GET()` and `_Py_CheckFunctionResult()`.
This is practically an invitation for callers to skip `PyObject_Vectorcall` and access the underlying function pointer directly, making `PyObject_Vectorcall` pointless.

https://github.com/python/cpython/pull/17052 should be reverted.

----------
keywords: 3.9regression
messages: 364325
nosy: Mark.Shannon, petr.viktorin, vstinner
priority: normal
severity: normal
status: open
title: Vectorcall implementation should conform to PEP 590.
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 12:17:28 2020
From: report at bugs.python.org (Gle)
Date: Mon, 16 Mar 2020 16:17:28 +0000
Subject: [New-bugs-announce] [issue39979] Cannot tune scrypt with large enough parameters
Message-ID: <1584375448.39.0.689012483556.issue39979@roundup.psfhosted.org>

New submission from Gle :

I can use the scrypt KDF from the cryptography module
https://cryptography.io/en/latest/hazmat/primitives/key-derivation-functions/#cryptography.hazmat.primitives.kdf.scrypt.Scrypt
with large parameters (n=2**20, r=16, p=1).

On the other hand, using the scrypt KDF from hashlib with the same parameters yields "Invalid combination of n, r, p, maxmem" (I use maxmem=0). Shouldn't they behave the same, as they both seem to be wrappers around OpenSSL?

I've also included a set of functioning parameters, as hashlib's scrypt works fine on small parameter values. Notice that the output from hashlib's scrypt is different from the output of the cryptography module. Shouldn't they be the same? (I'm no cryptography expert.)

I would really like to be able to use scrypt for hardened password hashing using only the standard library's hashlib. Maybe I'm missing something?

Python is great! Thanks for all the good work!

----------
components: Library (Lib)
files: compare.py
messages: 364334
nosy: Gle, christian.heimes, gregory.p.smith
priority: normal
severity: normal
status: open
title: Cannot tune scrypt with large enough parameters
type: crash
versions: Python 3.8
Added file: https://bugs.python.org/file48977/compare.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 14:56:13 2020
From: report at bugs.python.org (Krzysztof Rusek)
Date: Mon, 16 Mar 2020 18:56:13 +0000
Subject: [New-bugs-announce] [issue39980] importlib.resources.path() may return incorrect path when using custom loader
Message-ID: <1584384973.79.0.673963797108.issue39980@roundup.psfhosted.org>

New submission from Krzysztof Rusek :

The importlib.resources.path() function may return a path to a file with different contents than expected. This may happen when using a custom loader implementation that uses fake filenames (like ''). I'm attaching a reproduction test (resources.py).
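
For context, a minimal sketch of how the API is normally used; the package and resource names here are hypothetical, and the report is about what the yielded path ends up pointing to when the package's loader reports a fake filename:

    from importlib import resources

    # "mypkg" and "data.txt" are placeholder names for illustration only.
    with resources.path("mypkg", "data.txt") as p:
        print(p)               # a pathlib.Path
        print(p.read_bytes())  # with a fake loader filename, this file may not
                               # hold the expected data, per the report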
----------
components: Library (Lib)
files: resources.py
messages: 364352
nosy: Krzysztof Rusek
priority: normal
severity: normal
status: open
title: importlib.resources.path() may return incorrect path when using custom loader
versions: Python 3.7, Python 3.8, Python 3.9
Added file: https://bugs.python.org/file48978/resources.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 15:17:20 2020
From: report at bugs.python.org (Batuhan Taskaya)
Date: Mon, 16 Mar 2020 19:17:20 +0000
Subject: [New-bugs-announce] [issue39981] Default values for AST Nodes
Message-ID: <1584386240.94.0.329600152717.issue39981@roundup.psfhosted.org>

New submission from Batuhan Taskaya :

For omitting some defaults, @serhiy.storchaka already added support for initializing some AST nodes with default values (optional fields). An example:

>>> ast.Constant().kind is None
True

This isn't exactly a default value, but some kind of class attribute. I think we can push this one step further and initialize all kinds of default values (both optionals and sequences). An example:

>>> func = ast.FunctionDef("easy_func", ast.arguments(), body=[ast.Pass()])
>>> func = ast.fix_missing_locations(func)
>>> exec(compile(ast.Module(body=[func]), "", "exec"))
>>> easy_func()

compared to this (the other way around, the compiler gives errors, as do most AST-based tools, including the unparser):

>>> func = ast.FunctionDef("easy_func", ast.arguments(posonlyargs=[], args=[], kwonlyargs=[], kw_defaults=[], defaults=[]), decorator_list=[], body=[ast.Pass()])
>>> func = ast.fix_missing_locations(func)
>>> exec(compile(ast.Module(body=[func], type_ignores=[]), "", "exec"))
>>> easy_func()

----------
components: Library (Lib)
messages: 364355
nosy: BTaskaya, benjamin.peterson, pablogsal, serhiy.storchaka
priority: normal
severity: normal
status: open
title: Default values for AST Nodes
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 17:59:26 2020
From: report at bugs.python.org (STINNER Victor)
Date: Mon, 16 Mar 2020 21:59:26 +0000
Subject: [New-bugs-announce] [issue39982] FreeBSD: SCTP tests of test_socket fails on AMD64 FreeBSD Shared 3.x
Message-ID: <1584395966.63.0.545981270163.issue39982@roundup.psfhosted.org>

New submission from STINNER Victor :

AMD64 FreeBSD Shared 3.x:
https://buildbot.python.org/all/#/builders/152/builds/409

======================================================================
ERROR: testSendmsg (test.test_socket.SendmsgSCTPStreamTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 342, in _setUp
    self.__setUp()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 2716, in setUp
    super().setUp()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 2533, in setUp
    super().setUp()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 646, in setUp
    conn, addr = self.serv.accept()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/socket.py", line 293, in accept
    fd, addr = self._accept()
ConnectionAbortedError: [Errno 53] Software caused connection abort
(...)
======================================================================
ERROR: testRecvmsgAfterClose (test.test_socket.RecvmsgSCTPStreamTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 342, in _setUp
    self.__setUp()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 2533, in setUp
    super().setUp()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 646, in setUp
    conn, addr = self.serv.accept()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/socket.py", line 293, in accept
    fd, addr = self._accept()
ConnectionAbortedError: [Errno 53] Software caused connection abort
(...)

======================================================================
ERROR: testRecvmsgIntoGenerator (test.test_socket.RecvmsgIntoSCTPStreamTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 342, in _setUp
    self.__setUp()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 2533, in setUp
    super().setUp()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_socket.py", line 646, in setUp
    conn, addr = self.serv.accept()
  File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/socket.py", line 293, in accept
    fd, addr = self._accept()
ConnectionAbortedError: [Errno 53] Software caused connection abort
(...)

----------
components: Tests
messages: 364362
nosy: koobs, vstinner
priority: normal
severity: normal
status: open
title: FreeBSD: SCTP tests of test_socket fails on AMD64 FreeBSD Shared 3.x
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 18:21:11 2020
From: report at bugs.python.org (STINNER Victor)
Date: Mon, 16 Mar 2020 22:21:11 +0000
Subject: [New-bugs-announce] [issue39983] test.regrtest: test marked as failed (env changed), but no warning: test_multiprocessing_forkserver
Message-ID: <1584397271.26.0.591205335177.issue39983@roundup.psfhosted.org>

New submission from STINNER Victor :

Tests run with --fail-env-changed. test_multiprocessing_forkserver failed with "env changed", but no warning was logged to explain why :-(

PPC64LE RHEL7 3.8:
https://buildbot.python.org/all/#/builders/401/builds/69

./python ./Tools/scripts/run_tests.py -j 1 -u all -W --slowest --fail-env-changed --timeout=900 -j2 --junit-xml test-results.xml -j10

----------
components: Tests
messages: 364365
nosy: vstinner
priority: normal
severity: normal
status: open
title: test.regrtest: test marked as failed (env changed), but no warning: test_multiprocessing_forkserver
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 20:53:21 2020
From: report at bugs.python.org (STINNER Victor)
Date: Tue, 17 Mar 2020 00:53:21 +0000
Subject: [New-bugs-announce] [issue39984] Move some ceval fields from _PyRuntime.ceval to PyInterpreterState.ceval
Message-ID: <1584406401.63.0.604728519645.issue39984@roundup.psfhosted.org>

New submission from STINNER Victor :

The _PyRuntime.ceval structure should be made "per-interpreter".
I don't want to make the GIL per-interpreter: that's out of the scope of this issue. So I propose to only move a few fields, to make more ceval fields "per interpreter".

----------
components: Interpreter Core
messages: 364378
nosy: vstinner
priority: normal
severity: normal
status: open
title: Move some ceval fields from _PyRuntime.ceval to PyInterpreterState.ceval
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 21:42:20 2020
From: report at bugs.python.org (Maxwell Bernstein)
Date: Tue, 17 Mar 2020 01:42:20 +0000
Subject: [New-bugs-announce] [issue39985] str.format and string.Formatter subscript behaviors diverge
Message-ID: <1584409340.15.0.380295025491.issue39985@roundup.psfhosted.org>

New submission from Maxwell Bernstein :

As I understand it, str.format and string.Formatter are supposed to behave the same, with string.Formatter being a pluggable variant. While poking at string.Formatter, I noticed that they do not behave the same when formatting a nameless subscript:

```
import string

str.format("{[0]}", "hello")                 # => "h"
string.Formatter().format("{[0]}", "hello")  # => KeyError("")
```

They seem to work the same in the case where the arg is either indexed by number or by name:

```
import string

str.format("{0[0]}", "hello")                   # => "h"
string.Formatter().format("{0[0]}", "hello")    # => "h"

str.format("{a[0]}", a="hello")                 # => "h"
string.Formatter().format("{a[0]}", a="hello")  # => "h"
```

After some digging, I have come up with a couple of ideas:

* Change _string.formatter_field_name_split to treat an empty string field name as 0, so that string.Formatter.get_value looks up the arg in args, instead of kwargs
* Change string.Formatter.get_value to treat an empty string key as 0, and look up the arg in args, instead of kwargs

I'm happy to submit a PR if people find one of these two solutions palatable or have some solutions of their own.

(Note: this may appear in other versions, but I don't have them on my machine to test.)

----------
components: Library (Lib)
messages: 364382
nosy: tekknolagi
priority: normal
severity: normal
status: open
title: str.format and string.Formatter subscript behaviors diverge
type: behavior
versions: Python 2.7, Python 3.6, Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Mon Mar 16 22:22:20 2020
From: report at bugs.python.org (Matthias Braun)
Date: Tue, 17 Mar 2020 02:22:20 +0000
Subject: [New-bugs-announce] [issue39986] test_os / test_listdir failed as root-directory changed during test
Message-ID: <1584411740.74.0.439695982402.issue39986@roundup.psfhosted.org>

New submission from Matthias Braun :

The test_listdir test from Lib/test/test_os.py uses os.listdir() twice in the root directory, with and without parameters, and compares the results. I just had the test fail for me because an unrelated process happened to create a file in the root directory between the two invocations of os.listdir. In my case it was rsyslog creating '/imjournal.state.tmp', but the problem is a general one.
The test failed with:

```
..test test_os failed -- Traceback (most recent call last):
  File "/home/matthiasb/dev/fbcpython/Lib/test/test_os.py", line 1914, in test_listdir
    self.assertEqual(set(os.listdir()), set(os.listdir(os.sep)))
AssertionError: Items in the first set but not the second:
'imjournal.state.tmp'
```

----------
components: Tests
messages: 364383
nosy: Matthias Braun
priority: normal
severity: normal
status: open
title: test_os / test_listdir failed as root-directory changed during test
type: behavior
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 02:54:30 2020
From: report at bugs.python.org (Serhiy Storchaka)
Date: Tue, 17 Mar 2020 06:54:30 +0000
Subject: [New-bugs-announce] [issue39987] Simplify setting line numbers in the compiler
Message-ID: <1584428070.24.0.802483669477.issue39987@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

Currently the compiler can set the line number on an instruction or keep it unset (zero). Then, when creating the line number table, it adds entries only for set line numbers. But in rare cases (a docstring or the first instruction in the module, definition of a function with default arguments, maybe more) it may add multiple entries for the same lineno even if the bytecode offset is less than 255. It seems the only effect of this is a suboptimal line number table.

The proposed PR simplifies setting the line number in the compiler (that was the primary goal) and also fixes the above minor issue.

The simplification is needed for several future changes in the compiler.

----------
components: Interpreter Core
messages: 364388
nosy: benjamin.peterson, brett.cannon, pablogsal, serhiy.storchaka, yselivanov
priority: normal
severity: normal
status: open
title: Simplify setting line numbers in the compiler
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 03:29:51 2020
From: report at bugs.python.org (Serhiy Storchaka)
Date: Tue, 17 Mar 2020 07:29:51 +0000
Subject: [New-bugs-announce] [issue39988] Remove AugLoad and AugStore expression context from AST
Message-ID: <1584430191.22.0.892292018718.issue39988@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

AugLoad and AugStore are never exposed to the user. They are not generated by the parser and are not accepted by the compiler.

>>> from ast import *
>>> tree = Module(body=[AugAssign(target=Name(id='x', ctx=AugStore()), op=Add(), value=Constant(value=1))], type_ignores=[])
>>> compile(fix_missing_locations(tree), 'sample', 'exec')
Traceback (most recent call last):
  File "", line 1, in
ValueError: expression must have Store context but has AugStore instead

They are only used in temporary nodes created by the compiler. But the support for AugLoad and AugStore is spread across many sites in the compiler, adding special cases which have very little in common with the "normal" cases of Load, Store and Del.

The proposed PR removes AugLoad and AugStore. It moves support of the augmented assignment into a separate function and cleans up the rest of the code. This saves around 70 lines of handwritten code and around 60 lines of generated code.

The PR depends on issue39987. See also the similar issue39969.
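
For comparison, a quick check showing that the parser already uses the ordinary Store context for augmented-assignment targets, which is why AugLoad/AugStore are never seen from Python code (output from 3.8; the exact dump format may differ between versions):

>>> import ast
>>> ast.dump(ast.parse('x += 1'))
"Module(body=[AugAssign(target=Name(id='x', ctx=Store()), op=Add(), value=Constant(value=1, kind=None))], type_ignores=[])"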
----------
components: Interpreter Core
messages: 364390
nosy: BTaskaya, benjamin.peterson, brett.cannon, pablogsal, serhiy.storchaka, yselivanov
priority: normal
severity: normal
status: open
title: Remove AugLoad and AugStore expression context from AST
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 04:06:12 2020
From: report at bugs.python.org (Serhiy Storchaka)
Date: Tue, 17 Mar 2020 08:06:12 +0000
Subject: [New-bugs-announce] [issue39989] Output closing parenthesis in ast.dump() on separate line
Message-ID: <1584432372.22.0.904187572723.issue39989@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

Currently ast.dump() in multiline mode (see issue37995) appends the closing parentheses to the end of the line:

>>> import ast
>>> node = ast.parse('spam(eggs, "and cheese")')
>>> print(ast.dump(node, indent=3))
Module(
   body=[
      Expr(
         value=Call(
            func=Name(id='spam', ctx=Load()),
            args=[
               Name(id='eggs', ctx=Load()),
               Constant(value='and cheese')],
            keywords=[]))],
   type_ignores=[])

It uses vertical space more efficiently (which is especially important on the Windows console). But I got feedback asking for the closing parentheses to be output on separate lines (msg363783):

Module(
   body=[
      Expr(
         value=Call(
            func=Name(id='spam', ctx=Load()),
            args=[
               Name(id='eggs', ctx=Load()),
               Constant(value='and cheese')
            ],
            keywords=[]
         )
      )
   ],
   type_ignores=[]
)

It looks more "balanced", but is less vertical-space efficient. It adds almost 300 lines to the 57 examples in Doc/library/ast.rst. And after omitting optional list arguments like keywords=[] and type_ignores=[] (see issue39981), the stairs of parentheses will look even longer.

The proposed PR changes the output of ast.dump() by moving closing parentheses to separate lines. I am still not sure which output is better.

----------
components: Library (Lib)
messages: 364391
nosy: benjamin.peterson, pablogsal, rhettinger, serhiy.storchaka, terry.reedy
priority: normal
severity: normal
status: open
title: Output closing parenthesis in ast.dump() on separate line
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 05:21:34 2020
From: report at bugs.python.org (=?utf-8?q?Nguy=E1=BB=85n_Gia_Phong?=)
Date: Tue, 17 Mar 2020 09:21:34 +0000
Subject: [New-bugs-announce] [issue39990] help output should make use of typing.get_type_hints
Message-ID: <1584436894.78.0.585396462695.issue39990@roundup.psfhosted.org>

New submission from Nguyễn Gia Phong :

With PEP 563, it is legal to annotate a function as follows:

def foo(bar: 'int') -> 'bool': pass

Currently, help(foo) would print the exact signature in foo.__annotations__, and it's not really pretty. My proposal is to use the type hints from typing.get_type_hints to make the documentation more readable from the user's perspective. I might not be aware of all the use cases and disadvantages of this proposal, however.
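
For illustration, typing.get_type_hints() already resolves the string annotations that such code produces, so help() could build on it; a small sketch:

>>> import typing
>>> def foo(bar: 'int') -> 'bool': pass
...
>>> foo.__annotations__
{'bar': 'int', 'return': 'bool'}
>>> typing.get_type_hints(foo)
{'bar': <class 'int'>, 'return': <class 'bool'>}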
----------
assignee: docs at python
components: Documentation
messages: 364399
nosy: McSinyx, docs at python
priority: normal
severity: normal
status: open
title: help output should make use of typing.get_type_hints

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 07:27:33 2020
From: report at bugs.python.org (STINNER Victor)
Date: Tue, 17 Mar 2020 11:27:33 +0000
Subject: [New-bugs-announce] [issue39991] test_uuid.test_netstat_getnode() fails on FreeBSD VM: uuid._netstat_getnode() uses IPv6 address as MAC address
Message-ID: <1584444453.93.0.776725863736.issue39991@roundup.psfhosted.org>

New submission from STINNER Victor :

My FreeBSD VM has a NIC with a link-local IPv6 address (shown as fe80::5054:ff:fe9 by netstat). It's used by uuid._netstat_getnode() as a MAC address, but this IPv6 address doesn't respect RFC 4122 and so should be skipped. _find_mac_under_heading() should reject IPv6 addresses and only use MAC addresses.

vstinner at freebsd$ uname -a
FreeBSD freebsd 12.1-RELEASE-p2 FreeBSD 12.1-RELEASE-p2 GENERIC amd64

======================================================================
FAIL: test_netstat_getnode (test.test_uuid.TestInternalsWithExtModule)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/vstinner/python/master/Lib/test/test_uuid.py", line 767, in test_netstat_getnode
    self.check_node(node, 'netstat')
  File "/usr/home/vstinner/python/master/Lib/test/test_uuid.py", line 736, in check_node
    self.assertTrue(0 < node < (1 << 48),
AssertionError: False is not true : fe805054fffe9 is not an RFC 4122 node ID

======================================================================
FAIL: test_netstat_getnode (test.test_uuid.TestInternalsWithoutExtModule)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/vstinner/python/master/Lib/test/test_uuid.py", line 767, in test_netstat_getnode
    self.check_node(node, 'netstat')
  File "/usr/home/vstinner/python/master/Lib/test/test_uuid.py", line 736, in check_node
    self.assertTrue(0 < node < (1 << 48),
AssertionError: False is not true : fe805054fffe9 is not an RFC 4122 node ID

It's using a qemu VM run by virt-manager.
fe805054fffe9 seems to be the MAC address of my vtnet network interface:

vstinner at freebsd$ netstat -ian
Name   Mtu   Network        Address             Ipkts Ierrs Idrop   Opkts Oerrs  Coll
vtnet  1500                 52:54:00:9d:0e:67   10017     0     0    8174     0     0
vtnet  -     fe80::%vtnet0  fe80::5054:ff:fe9       0     -     -       4     -     -
vtnet  -     192.168.122.0  192.168.122.45       8844     -     -    8171     -     -
lo0    16384                lo0                260148     0     0  260148     0     0
lo0    -     ::1/128        ::1                   193     -     -     193     -     -
                            ff01::1%lo0
                            ff02::2:2eb7:74fa
                            ff02::2:ff2e:b774
                            ff02::1%lo0
                            ff02::1:ff00:1%lo
lo0    -     fe80::%lo0/64  fe80::1%lo0             0     -     -       0     -     -
                            ff01::1%lo0
                            ff02::2:2eb7:74fa
                            ff02::2:ff2e:b774
                            ff02::1%lo0
                            ff02::1:ff00:1%lo
lo0    -     127.0.0.0/8    127.0.0.1          259955     -     -  259955     -     -
                            224.0.0.1

----------
components: Tests
messages: 364412
nosy: barry, vstinner
priority: normal
severity: normal
status: open
title: test_uuid.test_netstat_getnode() fails on FreeBSD VM: uuid._netstat_getnode() uses IPv6 address as MAC address
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 08:09:46 2020
From: report at bugs.python.org (Vladimir)
Date: Tue, 17 Mar 2020 12:09:46 +0000
Subject: [New-bugs-announce] [issue39992] Windows line endings of pyc file detected on Ubuntu
Message-ID: <1584446986.58.0.898200280545.issue39992@roundup.psfhosted.org>

New submission from Vladimir :

I have a problem running a pyc file on one machine with Ubuntu Server 18.04.4 LTS.

This is the source code of the file:

#!/root/PycharmProjects/Project/venv/bin/python3.7
print("Hi")

When I compile it in the Python console with the commands:

import py_compile
py_compile.compile('test2.py')

I get a test2.cpython-37.pyc file. Then I add execution access by

chmod +x test2.cpython-37.pyc

If I run ./test2.cpython-37.pyc on the first machine (Ubuntu Server 18.04.4 LTS) I get a simple "Hi".
---------- assignee: docs at python components: Documentation messages: 364425 nosy: Michael S2, docs at python priority: normal severity: normal status: open title: Language Reference - Function definition parameter_list item definition not equivalent to implementation. type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 17 12:19:33 2020 From: report at bugs.python.org (Palak Kumar Jha) Date: Tue, 17 Mar 2020 16:19:33 +0000 Subject: [New-bugs-announce] [issue39994] Redundant code in pprint module. Message-ID: <1584461973.52.0.394481201775.issue39994@roundup.psfhosted.org> New submission from Palak Kumar Jha : In the PrettyPrinter._format method, since self._dispatch has dict.__repr__ [key] mapped to self._pprint_dict [value] the elif block is not needed. Its work is already being done by the if block above, which searches self._dispatch to fetch the appropriate value. ---------- messages: 364442 nosy: palakjha priority: normal severity: normal status: open title: Redundant code in pprint module. type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 17 13:01:54 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 17 Mar 2020 17:01:54 +0000 Subject: [New-bugs-announce] [issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with Message-ID: <1584464514.72.0.0566985296328.issue39995@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Ubuntu Shared 3.x: https://buildbot.python.org/all/#/builders/101/builds/532 test_crash (test.test_concurrent_futures.ProcessPoolSpawnExecutorDeadlockTest) ... Stdout: 15.51s Stderr: Warning -- threading_cleanup() failed to cleanup 0 threads (count: 0, dangling: 3) Dangling thread: Dangling thread: <_MainThread(MainThread, started 140540525950528)> Dangling thread: <_ExecutorManagerThread(Thread-111, stopped daemon 140540401628928)> (...) 
======================================================================
ERROR: test_crash (test.test_concurrent_futures.ProcessPoolSpawnExecutorDeadlockTest) [exit at task unpickle]
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/test_concurrent_futures.py", line 1119, in test_crash
    executor.shutdown(wait=True)
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/concurrent/futures/process.py", line 721, in shutdown
    self._executor_manager_thread_wakeup.wakeup()
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/concurrent/futures/process.py", line 93, in wakeup
    self._writer.send_bytes(b"")
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/multiprocessing/connection.py", line 205, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/multiprocessing/connection.py", line 416, in _send_bytes
    self._send(header + buf)
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/multiprocessing/connection.py", line 373, in _send
    n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
Stdout:
15.51s
Stderr:
Warning -- threading_cleanup() failed to cleanup 0 threads (count: 0, dangling: 3)
Dangling thread:
Dangling thread: <_MainThread(MainThread, started 140540525950528)>
Dangling thread: <_ExecutorManagerThread(Thread-111, stopped daemon 140540401628928)>

--

On the same build, test_concurrent_futures timed out after 15 min, while running test_ressources_gced_in_workers():

0:29:08 load avg: 1.46 Re-running test_concurrent_futures in verbose mode
(...)
test_map_exception (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... ok
test_map_timeout (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... ok
test_max_workers_negative (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... ok
test_max_workers_too_large (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... skipped 'Windows-only process limit'
test_no_stale_references (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ... ok
Timeout (0:15:00)!
Thread 0x00007f38bf766700 (most recent call first):
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/threading.py", line 303 in wait
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/multiprocessing/queues.py", line 227 in _feed
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/threading.py", line 882 in run
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/threading.py", line 944 in _bootstrap_inner
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/threading.py", line 902 in _bootstrap

Thread 0x00007f38bff67700 (most recent call first):
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/selectors.py", line 415 in select
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/multiprocessing/connection.py", line 936 in wait
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/concurrent/futures/process.py", line 372 in wait_result_broken_or_wakeup
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/concurrent/futures/process.py", line 319 in run
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/threading.py", line 944 in _bootstrap_inner
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/threading.py", line 902 in _bootstrap

Thread 0x00007f38c7128640 (most recent call first):
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/threading.py", line 303 in wait
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/concurrent/futures/_base.py", line 434 in result
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/test_concurrent_futures.py", line 955 in test_ressources_gced_in_workers
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/case.py", line 616 in _callTestMethod
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/case.py", line 659 in run
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/case.py", line 719 in __call__
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/suite.py", line 122 in run
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/suite.py", line 84 in __call__
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/suite.py", line 122 in run
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/suite.py", line 84 in __call__
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/suite.py", line 122 in run
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/suite.py", line 84 in __call__
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/unittest/runner.py", line 176 in run
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/support/__init__.py", line 2079 in _run_suite
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/support/__init__.py", line 2201 in run_unittest
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/runtest.py", line 209 in _test_module
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/runtest.py", line 153 in _runtest
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/runtest.py", line 193 in runtest
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/main.py", line 318 in rerun_failed_tests
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/main.py", line 691 in _main
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/main.py", line 634 in main
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/libregrtest/main.py", line 712 in main
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/__main__.py", line 2 in
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/runpy.py", line 87 in _run_code
  File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/runpy.py", line 194 in _run_module_as_main
test_ressources_gced_in_workers (test.test_concurrent_futures.ProcessPoolForkProcessPoolExecutorTest) ...
Makefile:1171: recipe for target 'buildbottest' failed
make: *** [buildbottest] Error 1

command timed out: 1200 seconds without output running [b'make', b'buildbottest', b'TESTOPTS=-j2 --junit-xml test-results.xml ${BUILDBOT_TESTOPTS}', b'TESTPYTHONOPTS=', b'TESTTIMEOUT=900'], attempting to kill
program finished with exit code 2
elapsedTime=3849.667593

----------
components: Tests
messages: 364451
nosy: vstinner
priority: normal
severity: normal
status: open
title: test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 13:50:48 2020
From: report at bugs.python.org (STINNER Victor)
Date: Tue, 17 Mar 2020 17:50:48 +0000
Subject: [New-bugs-announce] [issue39996] test_multiprocessing_fork hangs on AMD64 FreeBSD Shared 3.x
Message-ID: <1584467448.23.0.447277683737.issue39996@roundup.psfhosted.org>

New submission from STINNER Victor :

Sadly, faulthandler failed to dump the Python traceback and kill the process. Instead, the main libregrtest process had to kill the child process (process group) :-(

https://buildbot.python.org/all/#/builders/152/builds/420

...
0:56:25 load avg: 0.22 running: test_multiprocessing_fork (35 min 42 sec)
0:56:55 load avg: 0.40 running: test_multiprocessing_fork (36 min 12 sec)
0:57:25 load avg: 0.24 running: test_multiprocessing_fork (36 min 42 sec)
0:57:55 load avg: 0.25 running: test_multiprocessing_fork (37 min 12 sec)
Kill process group
0:58:13 load avg: 0.24 [420/420/3] test_multiprocessing_fork timed out (37 min 30 sec) (37 min 30 sec)

See also bpo-39995: test_concurrent_futures.test_ressources_gced_in_workers() timed out after 15 minutes on AMD64 Ubuntu Shared 3.x.

----------
components: Tests
messages: 364464
nosy: koobs, vstinner
priority: normal
severity: normal
status: open
title: test_multiprocessing_fork hangs on AMD64 FreeBSD Shared 3.x
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Tue Mar 17 16:08:08 2020
From: report at bugs.python.org (Yurii)
Date: Tue, 17 Mar 2020 20:08:08 +0000
Subject: [New-bugs-announce] [issue39997] "is" operator doesn't work on method returned from method descriptor
Message-ID: <1584475688.33.0.103660819792.issue39997@roundup.psfhosted.org>

New submission from Yurii :

I reproduced this in Python 3.8 and Python 3.6. The last line displays the bug itself; all other lines do the setup and pretty much explain WHY I think this is a bug.

class Class:
    def method(self):
        ...
instance = Class()

# expected: ids match
assert id(Class.method.__get__(None, Class)) == id(Class.method)
# expected: __eq__ returns True
assert Class.method.__get__(None, Class) == Class.method
# expected: is returns True
assert Class.method.__get__(None, Class) is Class.method

# expected: ids match
assert id(Class.method.__get__(instance, Class)) == id(instance.method)
# expected: __eq__ returns True
assert Class.method.__get__(instance, Class) == instance.method
# UNEXPECTED: is returns False, why?..
assert Class.method.__get__(instance, Class) is not instance.method  # why?

---------- messages: 364474 nosy: yandrieiev priority: normal severity: normal status: open title: "is" operator doesn't work on method returned from method descriptor type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 17 19:39:02 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 17 Mar 2020 23:39:02 +0000 Subject: [New-bugs-announce] [issue39998] [C API] Remove PyEval_AcquireLock() and PyEval_ReleaseLock() functions Message-ID: <1584488342.57.0.981206411156.issue39998@roundup.psfhosted.org> New submission from STINNER Victor : The PyEval_AcquireLock() and PyEval_ReleaseLock() functions are misleading and have been deprecated since Python 3.2. bpo-10913 deprecated them: commit 5ace8e98da6401827f607292a066da05df3ec5c1 Author: Antoine Pitrou Date: Sat Jan 15 13:11:48 2011 +0000 Issue #10913: Deprecate misleading functions PyEval_AcquireLock() and PyEval_ReleaseLock(). The thread-state aware APIs should be used instead. It's now time to remove them! I *discovered* these functions while working on bpo-39984. Previously, I had never used them nor really looked at them; I had only refactored the code around them, without paying attention to them. ---------- components: C API messages: 364487 nosy: vstinner priority: normal severity: normal status: open title: [C API] Remove PyEval_AcquireLock() and PyEval_ReleaseLock() functions versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 05:38:50 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 18 Mar 2020 09:38:50 +0000 Subject: [New-bugs-announce] [issue39999] Fix some issues with AST node classes Message-ID: <1584524330.37.0.654792714903.issue39999@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR fixes some issues related to recent changes in the AST node classes. 1. Re-add the removed classes Suite, Param, AugLoad and AugStore. They are not used in Python 3, are not created by the parser and are not accepted by the compiler. Param was used in 2.7; the other classes have not been used for a long time. But some third-party projects (e.g. pyflakes) use them for isinstance checks. 2. Add docstrings for all dummy AST classes (Constant subclasses, Index, ExtSlice and the above four classes). Otherwise they inherit docstrings from the parent class. 3. Add docstrings for all attribute aliases. 4. Set __module__ = "ast" instead of "_ast" for all classes defined in the _ast module. Otherwise the help for the ast module would show only dummy classes, not actual AST node classes. It also makes pickles more compatible between versions.
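For reference, the __module__ change in point 4 is directly observable; a minimal sketch of the current behaviour (assuming a build without the proposed patch):

import ast, pickle

print(ast.Constant.__module__)        # currently '_ast'; the patch would make it 'ast'
node = ast.Constant(value=1)
# pickle records the defining module name, so __module__ determines which module
# another interpreter has to import in order to rebuild the node
print(b'_ast' in pickle.dumps(node))  # True today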
---------- components: Library (Lib) messages: 364504 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Fix some issues with AST node classes type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 05:46:11 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Wed, 18 Mar 2020 09:46:11 +0000 Subject: [New-bugs-announce] [issue40000] Improve AST validation for Constant nodes Message-ID: <1584524771.97.0.157412190348.issue40000@roundup.psfhosted.org> New submission from Batuhan Taskaya : When something that isn't constant found in a ast.Constant node's body, python reports errors like this >>> e = ast.Expression(body=ast.Constant(value=type)) >>> ast.fix_missing_locations(e) <_ast.Expression object at 0x7fc2c23981c0> >>> compile(e, "", "eval") Traceback (most recent call last): File "", line 1, in TypeError: got an invalid type in Constant: type But if something is part of constant tuple and frozenset isn't constant, the error reporting is wrong >>> e = ast.Expression(body=ast.Constant(value=(1,2,type))) >>> compile(e, "", "eval") Traceback (most recent call last): File "", line 1, in TypeError: got an invalid type in Constant: tuple This should've been TypeError: got an invalid type in Constant: type ---------- messages: 364505 nosy: BTaskaya priority: normal severity: normal status: open title: Improve AST validation for Constant nodes _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 08:06:49 2020 From: report at bugs.python.org (Aviram) Date: Wed, 18 Mar 2020 12:06:49 +0000 Subject: [New-bugs-announce] [issue40001] ignore errors in SimpleCookie Message-ID: <1584533209.73.0.816552573912.issue40001@roundup.psfhosted.org> New submission from Aviram : SimpleCookie (http/cookies.py) load method fails if one of the has an issue. In real life scenarios, we want to be tolerant toward faulty cookies, and just ignore those. My suggestion is to add ignore_errors keyword argument to the load method of SimpleCookie, skipping invalid Morsels. ---------- components: Library (Lib) messages: 364514 nosy: aviramha priority: normal severity: normal status: open title: ignore errors in SimpleCookie type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 09:26:23 2020 From: report at bugs.python.org (Bar Harel) Date: Wed, 18 Mar 2020 13:26:23 +0000 Subject: [New-bugs-announce] [issue40002] Cookie load error inconsistency Message-ID: <1584537983.88.0.845939425655.issue40002@roundup.psfhosted.org> New submission from Bar Harel : ATM loading cookies is inconsistent. 
If you encounter an invalid cookie, BaseCookie.load will sometimes raise CookieError and sometimes silently ignore the load: from http.cookies import SimpleCookie s = SimpleCookie() s.load("invalid\x00=cookie") # Silently ignored s.load("invalid/=cookie") # Raises CookieError ---------- components: Library (Lib) messages: 364519 nosy: bar.harel priority: normal severity: normal status: open title: Cookie load error inconsistency versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 11:48:15 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 18 Mar 2020 15:48:15 +0000 Subject: [New-bugs-announce] [issue40003] test.regrtest: add an option to run test.bisect_cmd on failed tests, use it on Refleaks buildbots Message-ID: <1584546495.72.0.913336015216.issue40003@roundup.psfhosted.org> New submission from STINNER Victor : There are some tests which fail randomly in general, but fail in a deterministic way on some specific buildbot workers. bpo-39932 is a good example: test_multiprocessing_fork fails with "test_multiprocessing_fork leaked [0, 2, 0] file descriptors". The test fails while run in paralle, but it also fails when re-run sequentially. Except that when I connect to the buildbot worker, it does not fail anymore. test_multiprocessing_fork contains 356 test methods, the test file (Lib/test/_test_multiprocessing.py) has 5741 lines of Python code, and the multiprocessing is made of 8149 lines of Python code and 1133 lines of C code. It's hard to audit such code. The multiprocessing uses multiple proceses, pipes, signals, etc. It's really hard to debug. I propose to add an --bisect-failed option to test.regrtest to run test.bisect_cmd on failed tests. We can start to experiment it on Refleaks buildbots. Regular tests (not Refleaks tests) are easier to reproduce in general. It should speedup analysis of reference leak and "altered environment" test failures. Having less test methods to audit is way simpler. The implement should be that at the end of regrtest, after tests are re-run, run each failed test in test.bisect_cmd with the same command line arguments than test.regrtest. test.bisect_cmd uses 100 iterations by default. It's ok if the bisection fails to reduce the number of test methods. At least, it should reduce the list in some cases. ---------- components: Tests messages: 364528 nosy: vstinner priority: normal severity: normal status: open title: test.regrtest: add an option to run test.bisect_cmd on failed tests, use it on Refleaks buildbots versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 13:25:19 2020 From: report at bugs.python.org (=?utf-8?q?Bo=C5=A1tjan_Mejak?=) Date: Wed, 18 Mar 2020 17:25:19 +0000 Subject: [New-bugs-announce] [issue40004] String comparison with dotted numerical values wrong Message-ID: <1584552319.24.0.0522121424801.issue40004@roundup.psfhosted.org> New submission from Bo?tjan Mejak : I stumbled upon a possible bug in the Python interpreter while doing some Python version comparisons. Look at this: "3.10.2" < "3.8.2" True # This is not true as a version number comparison Now look at this: "3.10.2" < "3.08.2" False # Adding a leading 0 compares those two version numbers correctly Is it possible Python is fixed to correctly compare such numbers? 
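For what it's worth, string comparison is lexicographic character by character (so '1' < '8' decides the first case); comparing the dotted components as integers gives the version ordering the report is after. A minimal sketch, where version_tuple is a hypothetical helper name:

def version_tuple(version):
    # "3.10.2" -> (3, 10, 2)
    return tuple(int(part) for part in version.split("."))

print("3.10.2" < "3.8.2")                                 # True (character comparison)
print(version_tuple("3.10.2") < version_tuple("3.8.2"))   # False (numeric comparison)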
That would make comparing Python version numbers possible in the future.

import platform

if platform.python_version() < "3.8.2":
    # Do something

This is currently possible and correct, but this will break when the Python version number becomes 3.10 and you want to compare this version number to, say, 3.9. ---------- components: Interpreter Core messages: 364535 nosy: PedanticHacker, gvanrossum priority: normal severity: normal status: open title: String comparison with dotted numerical values wrong type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 15:50:43 2020 From: report at bugs.python.org (Bharat Solanki) Date: Wed, 18 Mar 2020 19:50:43 +0000 Subject: [New-bugs-announce] [issue40005] Getting different result in python 2.7 and 3.7. Message-ID: <1584561043.46.0.995570290808.issue40005@roundup.psfhosted.org> New submission from Bharat Solanki : Hi Team, The code below gives different results in Python 2.7 and 3.7. It runs fine on 2.7, but on 3.7 it raises an error.

from multiprocessing import Pool
import traceback

class Utils:
    def __init__(self):
        self.count = 10

def function():
    global u1
    u1 = Utils()
    l1 = range(3)
    process_pool = Pool(1)
    try:
        process_pool.map(add, l1, 1)
        process_pool.close()
        process_pool.join()
    except Exception as e:
        process_pool.terminate()
        process_pool.join()
        print(traceback.format_exc())
        print(e)

def add(num):
    total = num + u1.count
    print(total)

if __name__ == "__main__":
    function()

Could you please help me understand how it can be made to run on 3.7? Thanks, Bharat ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 364559 nosy: Bharatsolanki priority: normal severity: normal status: open title: Getting different result in python 2.7 and 3.7. type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 16:16:49 2020 From: report at bugs.python.org (Ram Rachum) Date: Wed, 18 Mar 2020 20:16:49 +0000 Subject: [New-bugs-announce] [issue40006] enum: Add documentation for _create_pseudo_member_ and composite members Message-ID: <1584562609.71.0.431349886951.issue40006@roundup.psfhosted.org> New submission from Ram Rachum : Looking at the enum source code, there's a method `_create_pseudo_member_` that's used in a bunch of places. Its docstring says "Create a composite member iff value contains only members", which would have been useful if I had any idea what "composite member" meant. It would be good if the documentation for the enum module would include more information about these two concepts.
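For context, a "composite member" here is a Flag value made by OR'ing named members together; in the 3.8 enum source, Flag builds such values through the private _create_pseudo_member_ hook. A minimal sketch of one:

from enum import Flag, auto

class Perm(Flag):
    R = auto()
    W = auto()
    X = auto()

combined = Perm.R | Perm.W     # a composite member, not one of the declared members
print(combined.value)          # 3
print(combined in list(Perm))  # False: iterating the class only yields the named members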
---------- assignee: docs at python components: Documentation messages: 364561 nosy: cool-RR, docs at python priority: normal severity: normal status: open title: enum: Add documentation for _create_pseudo_member_ and composite members type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 17:00:01 2020 From: report at bugs.python.org (tzickel) Date: Wed, 18 Mar 2020 21:00:01 +0000 Subject: [New-bugs-announce] [issue40007] An attempt to make asyncio.transport.writelines (selector) use Scatter I/O Message-ID: <1584565201.4.0.349669786045.issue40007@roundup.psfhosted.org> New submission from tzickel : I have a code that tries to be smart and prepare data to be chunked efficiently before sending, so I was happy to read about: https://docs.python.org/3/library/asyncio-protocol.html#asyncio.WriteTransport.writelines Only to see that it simply does: self.write(b''.join(lines)) So I've attempted to write an version that uses sendmsg (scatter I/O) instead (will be attached in PR). What I've learnt is: 1. It's hard to benchmark (If someone has an good example on checking if it's worth it, feel free to add such). 2. sendmsg has an OS limit on how many items can be done in one call. If the user does not call writer.drain() it might have too many items in the buffer, in that case I concat them (might be an expensive operation ? but that should not be th enormal case). 3. socket.socket.sendmsg can accept any bytes like iterable, but os.writev can only accept sequences, is that a bug ? 4. This is for the selector stream socket for now. ---------- components: asyncio messages: 364565 nosy: asvetlov, tzickel, yselivanov priority: normal pull_requests: 18416 severity: normal status: open title: An attempt to make asyncio.transport.writelines (selector) use Scatter I/O type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 17:57:07 2020 From: report at bugs.python.org (Jack Reigns) Date: Wed, 18 Mar 2020 21:57:07 +0000 Subject: [New-bugs-announce] [issue40008] Best Mac Cleaner Software and Optimization Utilities Message-ID: <1584568627.94.0.447674354664.issue40008@roundup.psfhosted.org> New submission from Jack Reigns : Mac Optimizer Pro a bunch of tools that can be used to perform various actions on your Mac. A number of the tools on offer can be used to clean your Mac. ---------- files: MacOptimizerPro.jpg messages: 364570 nosy: Jack Reigns priority: normal severity: normal status: open title: Best Mac Cleaner Software and Optimization Utilities type: performance Added file: https://bugs.python.org/file48980/MacOptimizerPro.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 18:02:32 2020 From: report at bugs.python.org (Jack Reigns) Date: Wed, 18 Mar 2020 22:02:32 +0000 Subject: [New-bugs-announce] [issue40009] Best Mac Cleaner Apps to Optimize and Speed up your Mac Message-ID: <1584568952.13.0.436638052986.issue40009@roundup.psfhosted.org> New submission from Jack Reigns : Mac Optimizer Pro's simple drag and drop functionality makes it the best program to clean up Mac. Apart from this, you can also select an app and view its detailed information so that you know what is hogging up your device?s space. 
Moreover, it lets you clean the cache and rebuild the database regardless of the OS version you are using. Other than this, it also has some premium features such as software uninstaller and updater to optimize and give your Mac?s hard disk the much-needed breathing space. Call at +1 866-252-2104 for more info or visit:- https://www.macoptimizerpro.com/ ---------- files: MacOptimizerPro.jpg messages: 364572 nosy: Jack Reigns priority: normal severity: normal status: open title: Best Mac Cleaner Apps to Optimize and Speed up your Mac type: performance Added file: https://bugs.python.org/file48981/MacOptimizerPro.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 20:57:19 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 19 Mar 2020 00:57:19 +0000 Subject: [New-bugs-announce] [issue40010] Inefficient sigal handling in multithreaded applications Message-ID: <1584579439.15.0.43767534151.issue40010@roundup.psfhosted.org> New submission from STINNER Victor : When a thread gets a signal, SIGNAL_PENDING_SIGNALS() sets signals_pending to 1 and eval_breaker to 1. In this case, _PyEval_EvalFrameDefault() calls handle_signals(), but since it's not the main thread, it does nothing and signals_pending value remains 1. Moreover, eval_breaker value remains 1 which means that the following code will be called before executing *each* bytecode instruction. if (_Py_atomic_load_relaxed(eval_breaker)) { (...) opcode = _Py_OPCODE(*next_instr); if (opcode == SETUP_FINALLY || ...) { ... } if (_Py_atomic_load_relaxed(&ceval->signals_pending)) { if (handle_signals(tstate) != 0) { goto error; } } if (_Py_atomic_load_relaxed(&ceval->pending.calls_to_do)) { ... } if (_Py_atomic_load_relaxed(&ceval->gil_drop_request)) { ... } if (tstate->async_exc != NULL) { ... } } This is inefficient. I'm working on a PR modifying SIGNAL_PENDING_SIGNALS() to not set eval_breaker to 1 if the current thread is not the main thread. ---------- components: Interpreter Core messages: 364580 nosy: vstinner priority: normal severity: normal status: open title: Inefficient sigal handling in multithreaded applications versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 18 21:45:57 2020 From: report at bugs.python.org (=?utf-8?q?H=C3=AAnio_Tierra_Sampaio?=) Date: Thu, 19 Mar 2020 01:45:57 +0000 Subject: [New-bugs-announce] [issue40011] Widget events are of type Tuple Message-ID: <1584582357.44.0.253053056041.issue40011@roundup.psfhosted.org> New submission from H?nio Tierra Sampaio : I'm maintainer for a project for the RaspberryPi, a GUI for OMXPlayer called TBOPlayer. This GUI was originally made for Python 2, but with Python 2 deprecation, I decided to convert it to Python 3. After (supposedly) converting all of it I had a surprise to see that the events that worked in the original code with Python 2 didn't work anymore in Python 3 with errors like File "/home/henio/Projetos/tboplayer/lib/tboplayer.py", line 1566, in select_track sel = event.widget.curselection() AttributeError: 'tuple' object has no attribute 'widget' And upon investigation, I noticed all the widget events (originally of type tkinter.Event) were now of type Tuple. WTF. 
Ok, I tried to circumvent this by using the tuple "attributes", like, in the event (('17685', '1', '??', '??', '??', '256', '59467466', '??', '212', '11', '??', '0', '??', '??', '.!listbox', '5', '1030', '344', '??'),) I can access the x position of the cursor in relation to the widget by doing: event[0][8] Which I did. However, I obviously cannot use any of the Event methods, and this way I cannot get the current selection from a Listbox, for example, and trying to gives me the exact error mentioned above. This renders TBOPlayer useless as the user cannot even select a track for playing. I could circumvent this specific problem by keeping track of the previous clicked position over the Listbox and do a simple math calculation to get the current list item, but that's is very annoying, and not the right way to do it, meaning it would make the code more complex than it should be, and making maintaing more difficult. And unfortunately, I was unable to reproduce this issue with a minimum code, so I have no idea what's going on. This issue comment describes how to reproduce the bug inside of TBOPlyaer: https://github.com/KenT2/tboplayer/issues/175#issuecomment-600861514 ---------- components: Tkinter files: tboplayer.py hgrepos: 387 messages: 364586 nosy: H?nio Tierra Sampaio priority: normal severity: normal status: open title: Widget events are of type Tuple type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48982/tboplayer.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 05:35:59 2020 From: report at bugs.python.org (Peter Bittner) Date: Thu, 19 Mar 2020 09:35:59 +0000 Subject: [New-bugs-announce] [issue40012] Avoid Python 2 documentation to appear in Web search results Message-ID: <1584610559.01.0.880804484155.issue40012@roundup.psfhosted.org> New submission from Peter Bittner : Currently, when you do a Web search (e.g. using Google, Bing, Yahoo!, DuckDuckGo, et al.) for a Python module or function call you'll find a link to the related Python 2 documentation first. How to reproduce: 1. Search for simply "os.environ" in your favorite search engine. 2. Find a link to the Python documentation in the first 3 results. Typically, this will point to the Python 2 docs first. (Side note: Google seems to now actively manipulate the results ranking Python 3 results higher. Apparently, this is the only popular search engine behaving like that.) Expected result: - When searching for Python modules, functions, builtins, etc. on the Web, no search results for Python 2 should pop up at all if the same content exists for Python 3 Possible implementation: - Add a "noindex" meta tag to the header of the generated HTML documentation - see https://support.google.com/webmasters/answer/93710 ---------- messages: 364597 nosy: bittner priority: normal severity: normal status: open title: Avoid Python 2 documentation to appear in Web search results type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 06:37:30 2020 From: report at bugs.python.org (Moshe Sambol) Date: Thu, 19 Mar 2020 10:37:30 +0000 Subject: [New-bugs-announce] [issue40013] CSV DictReader parameter documentation Message-ID: <1584614250.73.0.590316001908.issue40013@roundup.psfhosted.org> New submission from Moshe Sambol : The csv.DictReader constructor takes two optional parameters, restkey and restval. 
restkey is documented well, but restval is not: "If a row has more fields than fieldnames, the remaining data is put in a list and stored with the fieldname specified by restkey (which defaults to None). If a non-blank row has fewer fields than fieldnames, the missing values are filled-in with None." Since restval is not mentioned here, the reader may assume that the next sentence applies to it: "All other optional or keyword arguments are passed to the underlying reader instance." But this is not the case for restval. I suggest that the text be amended to "If a non-blank row has fewer fields than fieldnames, the missing values are filled-in with the value of restval (which defaults to None)." ---------- assignee: docs at python components: Documentation messages: 364598 nosy: Moshe Sambol, docs at python priority: normal severity: normal status: open title: CSV DictReader parameter documentation versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 12:04:54 2020 From: report at bugs.python.org (Dong-hee Na) Date: Thu, 19 Mar 2020 16:04:54 +0000 Subject: [New-bugs-announce] [issue40014] os.getgrouplist can raise OSError during the Display build info Message-ID: <1584633894.99.0.197171599241.issue40014@roundup.psfhosted.org> New submission from Dong-hee Na : example: https://github.com/python/cpython/pull/19073/checks?check_run_id=519539592 I suggest to not to add information for os.getgrouplist if the OSError is raised. ---------- components: Tests messages: 364607 nosy: corona10, vstinner priority: normal severity: normal status: open title: os.getgrouplist can raise OSError during the Display build info type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 12:48:06 2020 From: report at bugs.python.org (Venkatesh-Prasad Ranganath) Date: Thu, 19 Mar 2020 16:48:06 +0000 Subject: [New-bugs-announce] [issue40015] logging.Logger.disabled field is redundant Message-ID: <1584636486.61.0.64483912043.issue40015@roundup.psfhosted.org> New submission from Venkatesh-Prasad Ranganath : `logging.Logger.disabled` field is assigned `False` while initializing `logging.Logger` instance and never updated. However, this field is also involved in two checks: https://github.com/python/cpython/blob/da1fe768e582387212201ab8737a1a5f26110664/Lib/logging/__init__.py#L1586 and https://github.com/python/cpython/blob/da1fe768e582387212201ab8737a1a5f26110664/Lib/logging/__init__.py#L1681 that are executed in the context of every logging method. So, these checks are likely to contribute to unnecessary computation while logging. Further, since the library documentation does not mention this field, the field is probably not part of the public API of the logging library. So, the field seems to be redundant. Given the checks on the hot paths are redundant as the field never changes value and fields is not part of the public API, removing it will help improve logging performance and simplify the code base. 
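For reference, the guards in question are `self.disabled` checks on the logging hot path (Logger.handle() and Logger.isEnabledFor() both consult the attribute in 3.8), and the attribute can be set from outside the module, e.g. logging.config.fileConfig() disables existing loggers this way. A quick sketch of the observable effect:

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

log.info("this record is emitted")
log.disabled = True                 # the field the hot-path checks look at
log.info("this record is dropped")  # short-circuited by the disabled guard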
---------- components: Library (Lib) messages: 364612 nosy: rvprasad priority: normal severity: normal status: open title: logging.Logger.disabled field is redundant type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 13:40:50 2020 From: report at bugs.python.org (Ram Rachum) Date: Thu, 19 Mar 2020 17:40:50 +0000 Subject: [New-bugs-announce] [issue40016] Clarify flag case in `re` module Message-ID: <1584639650.79.0.177965379054.issue40016@roundup.psfhosted.org> New submission from Ram Rachum : Today I was tripped up by an inconsistency in the `re` docstring. I wanted to use DOTALL as a flag inside my regex, rather than as an argument to the `compile` function. Here are two lines from the docstring: (?aiLmsux) Set the A, I, L, M, S, U, or X flag for the RE (see below). ... S DOTALL "." matches any character at all, including the newline. The DOTALL flag appears as an uppercase S in 2 places, and as a lowercase s in one place. This is confusing, and I initially tried using the uppercase S only to get an error. I'm attaching a PR to this ticket. ---------- components: Library (Lib) messages: 364617 nosy: cool-RR priority: normal severity: normal status: open title: Clarify flag case in `re` module type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 15:14:59 2020 From: report at bugs.python.org (Russell Owen) Date: Thu, 19 Mar 2020 19:14:59 +0000 Subject: [New-bugs-announce] [issue40017] Please support CLOCK_TAI in the time module. Message-ID: <1584645299.09.0.45620556685.issue40017@roundup.psfhosted.org> New submission from Russell Owen : It is becoming common (at least in astronomy) to want to use TAI as a time standard because it is a uniform time with no leap seconds, and differs from UTC (standard computer clock time) by an integer number of seconds that occasionally changes. Linux offers a clock for TAI time: CLOCK_TAI. It would be very helpful to have this constant in the time module, e.g. for calling time.clock_gettime Caveat: linux CLOCK_TAI will return UTC time if the leap second table has not been set up. Both ntp and ptp can be configured to maintain this table. So this is a caveat worth mentioning in the docs. But I hope it is not sufficient reason to deny the request. ---------- components: Library (Lib) messages: 364633 nosy: r3owen priority: normal severity: normal status: open title: Please support CLOCK_TAI in the time module. 
type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 16:55:46 2020 From: report at bugs.python.org (Benjamin Peterson) Date: Thu, 19 Mar 2020 20:55:46 +0000 Subject: [New-bugs-announce] [issue40018] test_ssl fails with OpenSSL 1.1.1e Message-ID: <1584651346.13.0.299383361269.issue40018@roundup.psfhosted.org> New submission from Benjamin Peterson : ====================================================================== ERROR: test_ciphers (test.test_ssl.SimpleBackgroundTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 2120, in test_ciphers s.connect(self.server_addr) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1342, in connect self._real_connect(addr, False) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1333, in _real_connect self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ====================================================================== ERROR: test_connect (test.test_ssl.SimpleBackgroundTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 1944, in test_connect s.connect(self.server_addr) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1342, in connect self._real_connect(addr, False) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1333, in _real_connect self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ====================================================================== ERROR: test_connect_cadata (test.test_ssl.SimpleBackgroundTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 2062, in test_connect_cadata s.connect(self.server_addr) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1342, in connect self._real_connect(addr, False) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1333, in _real_connect self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ====================================================================== ERROR: test_connect_capath (test.test_ssl.SimpleBackgroundTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 2041, in test_connect_capath s.connect(self.server_addr) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1342, in connect self._real_connect(addr, False) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1333, in _real_connect self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ====================================================================== ERROR: test_connect_with_context (test.test_ssl.SimpleBackgroundTests) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 2002, in test_connect_with_context s.connect(self.server_addr) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1342, in connect self._real_connect(addr, False) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1333, in _real_connect self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ====================================================================== ERROR: test_get_server_certificate (test.test_ssl.SimpleBackgroundTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 2107, in test_get_server_certificate _test_get_server_certificate(self, *self.server_addr, cert=SIGNING_CA) File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 2272, in _test_get_server_certificate pem = ssl.get_server_certificate((host, port), ca_certs=cert) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1484, in get_server_certificate with context.wrap_socket(sock) as sslsock: File "/home/benjamin/repos/cpython/Lib/ssl.py", line 500, in wrap_socket return self.sslsocket_class._create( File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1040, in _create self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ====================================================================== ERROR: test_session_handling (test.test_ssl.ThreadedTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 4346, in test_session_handling s.connect((HOST, server.port)) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1342, in connect self._real_connect(addr, False) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1333, in _real_connect self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ====================================================================== ERROR: test_tls_unique_channel_binding (test.test_ssl.ThreadedTests) Test tls-unique channel binding. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/home/benjamin/repos/cpython/Lib/test/test_ssl.py", line 3922, in test_tls_unique_channel_binding s.connect((HOST, server.port)) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1342, in connect self._real_connect(addr, False) File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1333, in _real_connect self.do_handshake() File "/home/benjamin/repos/cpython/Lib/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer ---------------------------------------------------------------------- ---------- assignee: christian.heimes components: SSL messages: 364638 nosy: benjamin.peterson, christian.heimes priority: normal severity: normal status: open title: test_ssl fails with OpenSSL 1.1.1e versions: Python 2.7, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 19:20:34 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 19 Mar 2020 23:20:34 +0000 Subject: [New-bugs-announce] [issue40019] test_gdb should better detect when Python is optimized Message-ID: <1584660034.18.0.372884225596.issue40019@roundup.psfhosted.org> New submission from STINNER Victor : On my PR 19077 which changes Python/ceval.c, test_gdb fails on Travis CI with Python compiled with clang -Og. The -Og optimization level is a compromise between performance and the ability to debug Python. The problem is that gdb fails to retrieve some information and so test_gdb fails. I proposed bpo-38350 "./configure --with-pydebug should use -O0 rather than -Og", but the status quo is to continue to use -Og by default. See examples of test_gdb failures from PR 19077 below. I propose to skip a test if one of the follow pattern is found in gdb output: * '', * '(frame information optimized out)', * 'Unable to read information on python frame', ====================================================================== FAIL: test_basic_command (test.test_gdb.PyListTests) Verify that the "py-list" command works ---------------------------------------------------------------------- (...) AssertionError: (...) 'Breakpoint 1 at 0x5aabf1: file Python/bltinmodule.c, line 1173. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=42) at Python/bltinmodule.c:1173 1173\t PyObject *id = PyLong_FromVoidPtr(v); Unable to read information on python frame ' did not end with ' 5 6 def bar(a, b, c): 7 baz(a, b, c) 8 9 def baz(*args): >10 id(42) 11 12 foo(1, 2, 3) ' ====================================================================== FAIL: test_bt (test.test_gdb.PyBtTests) Verify that the "py-bt" command works ---------------------------------------------------------------------- (...) AssertionError: 'Breakpoint 1 at 0x5aabf1: file Python/bltinmodule.c, line 1173. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 
Breakpoint 1, builtin_id (self=, v=42) at Python/bltinmodule.c:1173 1173\t PyObject *id = PyLong_FromVoidPtr(v); Traceback (most recent call first): (frame information optimized out) File "/home/travis/build/python/cpython/Lib/test/gdb_sample.py", line 7, in bar baz(a, b, c) File "/home/travis/build/python/cpython/Lib/test/gdb_sample.py", line 4, in foo bar(a, b, c) (frame information optimized out) ' did not match '^.* Traceback \\(most recent call first\\): File ".*gdb_sample.py", line 10, in baz id\\(42\\) File ".*gdb_sample.py", line 7, in bar baz\\(a, b, c\\) File ".*gdb_sample.py", line 4, in foo bar\\(a, b, c\\) File ".*gdb_sample.py", line 12, in foo\\(1, 2, 3\\) ' ---------- components: Tests messages: 364640 nosy: vstinner priority: normal severity: normal status: open title: test_gdb should better detect when Python is optimized versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 20:58:27 2020 From: report at bugs.python.org (Alexander Riccio) Date: Fri, 20 Mar 2020 00:58:27 +0000 Subject: [New-bugs-announce] [issue40020] growable_comment_array_add leaks, causes crash Message-ID: <1584665907.06.0.965824419022.issue40020@roundup.psfhosted.org> New submission from Alexander Riccio : growable_comment_array_add in parsetok.c incorrectly uses realloc, which leaks the array when allocation fails, and then causes a null pointer deref crash later when the array is freed in growable_comment_array_deallocate (the array pointer is dereferenced, passing null to free is fine). It's unlikely that this codepath is reached in normal use, since type comments need to be turned on (via the PyCF_TYPE_COMMENTS compiler flag), but I've managed to replicate the issue by injecting faults with Application Verifier. It's easiest to cause it to fail with a very large number of type comments, but presumably this could also happen with some form of heap fragmentation. 
The buggy code is:

static int
growable_comment_array_add(growable_comment_array *arr, int lineno, char *comment)
{
    if (arr->num_items >= arr->size) {
        arr->size *= 2;
        arr->items = realloc(arr->items, arr->size * sizeof(*arr->items));
        if (!arr->items) {
            return 0;
        }
    }
    arr->items[arr->num_items].lineno = lineno;
    arr->items[arr->num_items].comment = comment;
    arr->num_items++;
    return 1;
}

and the correct code would be something like:

static int
growable_comment_array_add(growable_comment_array *arr, int lineno, char *comment)
{
    if (arr->num_items >= arr->size) {
        arr->size *= 2;
        void *new_items_array = realloc(arr->items, arr->size * sizeof(*arr->items));
        if (!new_items_array) {
            return 0;
        }
        arr->items = new_items_array;
    }
    arr->items[arr->num_items].lineno = lineno;
    arr->items[arr->num_items].comment = comment;
    arr->num_items++;
    return 1;
}

---------- components: Interpreter Core messages: 364644 nosy: Alexander Riccio, benjamin.peterson priority: normal severity: normal status: open title: growable_comment_array_add leaks, causes crash type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 22:25:19 2020 From: report at bugs.python.org (ThePokestarFan) Date: Fri, 20 Mar 2020 02:25:19 +0000 Subject: [New-bugs-announce] [issue40021] Throwing an Exception results in stack overflow Message-ID: <1584671119.97.0.0188719299435.issue40021@roundup.psfhosted.org> New submission from ThePokestarFan : If I set up a simple recursive function that calls itself every time an error is raised, Python aborts with SIGABRT and crashes due to a stack overflow.

def x():
    try:
        raise Exception()
    except Exception:
        x()

Oddly enough, my system installation of Python 2.7 threw a RuntimeError instead of aborting, which is what I expected. ---------- components: Interpreter Core, macOS files: error.log messages: 364646 nosy: ThePokestarFan, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Throwing an Exception results in stack overflow type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48984/error.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 19 22:45:45 2020 From: report at bugs.python.org (yepan Li) Date: Fri, 20 Mar 2020 02:45:45 +0000 Subject: [New-bugs-announce] [issue40022] A basic algorithm question about lists Message-ID: <1584672345.36.0.799930749688.issue40022@roundup.psfhosted.org> New submission from yepan Li : My English is very poor, so I use Chinese, ?????????

lis1 = [1,2,3,4,5,6,7,8,9,10]  #??????
lislen = len(lis1)  #?????? ??lislen(??1)???10
lis2 = []*lislen  #????????????????????????
print(lis1)  #?????????????
for i in lis1:  #?i??????1
    lis2.insert(lislen,i)  #???2??10(lislen)??????i(i???1)
    lislen = lislen-lislen  #?lislen(10) = 10(lislen) - 10(lislen) == 0
    #????????lis2.insert(0,i)???????????????
print(lis2)

---------- components: Windows files: Test01.py messages: 364648 nosy: paul.moore, steve.dower, tim.golden, yepan Li, zach.ware priority: normal severity: normal status: open title: A basic algorithm question about lists
versions: Python 3.8 Added file: https://bugs.python.org/file48988/Test01.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 20 03:39:00 2020 From: report at bugs.python.org (tzickel) Date: Fri, 20 Mar 2020 07:39:00 +0000 Subject: [New-bugs-announce] [issue40023] os.writev and socket.sendmsg return value are not ideal Message-ID: <1584689940.47.0.494767367712.issue40023@roundup.psfhosted.org> New submission from tzickel : os.writev and socket.sendmsg accept an iterable but the return value is number of bytes sent. That is not helpful as the user will have to write manual code to figure out which part of the iterable was not sent. I propose to make a version of the functions where: 1. The return value is an iterable of the leftovers (including a maybe one-time memoryview into an item who has been partly-sent). 2. There is a small quirk where writev accepts only sequences but sendmsg accepts any iterable, which causes them not to behave the same for no good reason. 3. Do we want an sendmsgall like sendall in socket, where it doesn't give up until everything is sent ? 4. Today trying to use writev / sendmsg to be fully complaint requires checking the number of input items in the iterable to not go over IOV_MAX, maybe the python version of the functions should handle this automatically (and if it overflows, return the extra in leftovers) ? Should the functions be the current one with an optional argument (return_leftovers) or a new function altogether. ---------- components: Library (Lib) messages: 364651 nosy: larry, tzickel priority: normal severity: normal status: open title: os.writev and socket.sendmsg return value are not ideal type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 20 05:39:49 2020 From: report at bugs.python.org (Dong-hee Na) Date: Fri, 20 Mar 2020 09:39:49 +0000 Subject: [New-bugs-announce] [issue40024] Add _PyModule_AddType private helper function Message-ID: <1584697189.32.0.684240851627.issue40024@roundup.psfhosted.org> New submission from Dong-hee Na : See: https://github.com/python/cpython/pull/19084#discussion_r395486583 ---------- assignee: corona10 components: C API messages: 364661 nosy: corona10, vstinner priority: normal severity: normal status: open title: Add _PyModule_AddType private helper function type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 20 07:25:04 2020 From: report at bugs.python.org (Luis E.) Date: Fri, 20 Mar 2020 11:25:04 +0000 Subject: [New-bugs-announce] [issue40025] enum: _generate_next_value_ is not called if its definition occurs after calls to auto() Message-ID: <1584703504.0.0.0724934497115.issue40025@roundup.psfhosted.org> New submission from Luis E. : I ran into this issue when attempting to add a custom _generate_next_value_ method to an existing Enum. 
Adding the method definition to the bottom of the class causes it to not be called at all:

from enum import Enum, auto

class E(Enum):
    A = auto()
    B = auto()

    def _generate_next_value_(name, *args):
        return name

E.B.value  # Returns 2, E._generate_next_value_ is not called

class F(Enum):
    def _generate_next_value_(name, *args):
        return name

    A = auto()
    B = auto()

F.B.value  # Returns 'B', as intended

I do not believe that the order of method/attribute definition should affect the behavior of the class, or at least it should be mentioned in the documentation. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 364665 nosy: docs at python, edd07 priority: normal severity: normal status: open title: enum: _generate_next_value_ is not called if its definition occurs after calls to auto() type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 20 10:19:02 2020 From: report at bugs.python.org (Daniel) Date: Fri, 20 Mar 2020 14:19:02 +0000 Subject: [New-bugs-announce] [issue40026] Create render_*_diff variants to the *_diff functions in difflib Message-ID: <1584713942.67.0.900708769232.issue40026@roundup.psfhosted.org> New submission from Daniel : Currently difflib offers no way to synthesize a diff output without having to assemble the original and modified strings and then asking difflib to calculate the diff. It would be nice if I could just call a `render_unified_diff(a, b, grouped_opcodes)` and get a diff output. This is useful when I'm synthesizing a patch dynamically and I don't necessarily want to load the entire original file and apply the changes. One example usage would be something like:
```
def make_patch(self):
    # simplified input for synthesizing the diff
    a = []
    b = []
    include_lines = []
    for header, _ in self.missing.items():
        include_lines.append(f"#include <{header}>\n")
    while len(b) < self.line:
        b.append(None)
    b.extend(include_lines)
    opcodes = [
        [('insert', self.line, self.line, self.line, self.line + len(include_lines))]
    ]
    diff = render_unified_diff(
        a, b, opcodes,
        fromfile=os.path.join('a', self.filename),
        tofile=os.path.join('b', self.filename),
    )
    return ''.join(diff)
```
---------- components: Library (Lib) messages: 364669 nosy: pablogsal, ruoso priority: normal severity: normal status: open title: Create render_*_diff variants to the *_diff functions in difflib _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 20 13:21:13 2020 From: report at bugs.python.org (Wayne Davison) Date: Fri, 20 Mar 2020 17:21:13 +0000 Subject: [New-bugs-announce] [issue40027] re.sub inconsistency beginning with 3.7 Message-ID: <1584724873.49.0.294512015302.issue40027@roundup.psfhosted.org> New submission from Wayne Davison : There is an inconsistency in re.sub() when substituting at the end of a string using a prior match with a '*' qualifier: the substitution now occurs twice. For example:

txt = re.sub(r'\s*\Z', "\n", txt)

This should work like txt.rstrip() + "\n", but beginning in 3.7, the re.sub version now matches twice and changes any non-empty whitespace into "\n\n" instead of "\n". (If there is no trailing whitespace it only matches once.) The bug is the same if '$' is used instead of '\Z', but it does not happen if an actual character is specified (e.g. a substitution of r'\s*x' does not substitute twice if x has preceding whitespace).
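A minimal reproduction of the change (since 3.7, an empty match adjacent to a previous non-empty match is also replaced, so the pattern matches the trailing whitespace and then matches again, zero-width, at the very end of the string):

import re

txt = "abc \n"
print(repr(re.sub(r'\s*\Z', "\n", txt)))   # 'abc\n'   on 3.6 and earlier
                                           # 'abc\n\n' on 3.7 and later
print(repr(txt.rstrip() + "\n"))           # 'abc\n', the result the report expects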
I tested 2.7.17, 3.6.9, 3.7.7, 3.8.2, and 3.9.0a4, and it starts to fail in 3.7.7 and beyond. Attached is a test program. ---------- components: Regular Expressions files: sub-bug.py messages: 364688 nosy: Wayne Davison, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: re.sub inconsistency beginning with 3.7 type: behavior versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48990/sub-bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 20 15:52:54 2020 From: report at bugs.python.org (Ross Rhodes) Date: Fri, 20 Mar 2020 19:52:54 +0000 Subject: [New-bugs-announce] [issue40028] Math module method to find prime factors for non-negative int n Message-ID: <1584733974.83.0.0417466712916.issue40028@roundup.psfhosted.org> New submission from Ross Rhodes : Hello, Thoughts on a new function in the math module to find prime factors for non-negative integer, n? After a brief search, I haven't found previous enhancement tickets raised for this proposal, and I am not aware of any built-in method within either Python's math module or numpy, but happy to be corrected on that front. If there's no objection and the method does not already exist, I'm happy to implement it and open for review. Ross ---------- messages: 364711 nosy: trrhodes priority: normal severity: normal status: open title: Math module method to find prime factors for non-negative int n versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 03:27:13 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 21 Mar 2020 07:27:13 +0000 Subject: [New-bugs-announce] [issue40029] test_importlib.test_zip requires zlib but not marked Message-ID: <1584775633.1.0.206140912196.issue40029@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I was trying to compile and run tests on a fresh ubuntu 18.03 machine that didn't zlib. test_importlb.test_zip seems to have some tests that depend on zlib but not marked as such causing test errors. Like other tests these could be skipped zlib is not available using test.support.requires_zlib . Tests are as below : test_zip_version test_zip_entry_points test_case_insensitive test_files ---------- components: Tests messages: 364727 nosy: brett.cannon, xtreak priority: normal severity: normal status: open title: test_importlib.test_zip requires zlib but not marked type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 04:43:29 2020 From: report at bugs.python.org (Mathias Talbo) Date: Sat, 21 Mar 2020 08:43:29 +0000 Subject: [New-bugs-announce] [issue40030] Error with math.fsum() regarding float-point error Message-ID: <1584780209.99.0.70577499312.issue40030@roundup.psfhosted.org> New submission from Mathias Talbo : An issue occurs when running the following code. import math math.fsum([0.1, 0.2]), math.fsum([0.1, 0.7]) This should output 0.3, 0.8 respectively. Instead, it output 0.30000000000000004, 0.7999999999999999 The very floating-point error it is trying to stop from occurring. Thank you for your time. 
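For reference, fsum() removes the error that accumulates over many additions, but the inputs and the result are still ordinary binary doubles, and neither 0.3 nor 0.8 has an exact binary representation. A short check of the underlying values (assuming standard IEEE-754 doubles):

import math
from decimal import Decimal

print(Decimal(0.1) + Decimal(0.2))         # 0.3000000000000000166533453694 -- the inputs are not exactly 0.1 and 0.2
print(math.fsum([0.1, 0.2]))               # 0.30000000000000004, the closest double to that exact sum
print(math.fsum([0.1, 0.2]) == 0.1 + 0.2)  # True: with only two terms there is no accumulated error to remove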
---------- components: Extension Modules messages: 364730 nosy: Mathias Talbo priority: normal severity: normal status: open title: Error with math.fsum() regarding float-point error type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 05:42:34 2020 From: report at bugs.python.org (Saaheer Purav) Date: Sat, 21 Mar 2020 09:42:34 +0000 Subject: [New-bugs-announce] [issue40031] Python Configure IDLE 'Ok' and 'Apply' buttons do not seem to work. Message-ID: <1584783754.51.0.582471420619.issue40031@roundup.psfhosted.org> New submission from Saaheer Purav : In Python 3.8.2 IDLE, when I try to select a new theme or change the font and font size in the Configure IDLE section, and click on 'Ok' or 'Apply', nothing happens. The buttons have no action. Even when I tried to press F5 to run module, nothing happened. In the 'config-keys.def' file, I edited the run module to 'F6' and tried to run module, but still nothing happened. However, when I edited the file back to 'F5' for run module, it worked, strangely. But still I am unable to select a new theme or change the font and font size as the buttons do not work. ---------- assignee: terry.reedy components: IDLE messages: 364735 nosy: Vader27, terry.reedy priority: normal severity: normal status: open title: Python Configure IDLE 'Ok' and 'Apply' buttons do not seem to work. type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 08:40:37 2020 From: report at bugs.python.org (Julin) Date: Sat, 21 Mar 2020 12:40:37 +0000 Subject: [New-bugs-announce] [issue40032] Remove explicit inheriting of object in class definitions Message-ID: <1584794437.59.0.496209203348.issue40032@roundup.psfhosted.org> New submission from Julin : In the source, many class definitions still explicitly inherit from `object` though it is no longer necessary in Python3. Can't we change it? ---------- components: Library (Lib) messages: 364739 nosy: ju-sh priority: normal severity: normal status: open title: Remove explicit inheriting of object in class definitions versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 12:49:09 2020 From: report at bugs.python.org (Damian Yurzola) Date: Sat, 21 Mar 2020 16:49:09 +0000 Subject: [New-bugs-announce] [issue40033] Just defined class missing from scope Message-ID: <1584809349.61.0.7412480872.issue40033@roundup.psfhosted.org> New submission from Damian Yurzola : In the following example the last line throws as 'NameError: name 'Level1A' is not defined' for both 3.7 and 3.8 I assumed that Level1A should already be in scope while defining the insides of Level1B. But it isn't. Is this a bug, or am I missing something? 
from typing import List, Union class Level0A: pass class Level0B: class Level1A: subs: List[Level0A] class Level1B: subs: List[Level1A] ---------- components: Interpreter Core messages: 364759 nosy: yurzo priority: normal severity: normal status: open title: Just defined class missing from scope versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 13:23:48 2020 From: report at bugs.python.org (San) Date: Sat, 21 Mar 2020 17:23:48 +0000 Subject: [New-bugs-announce] [issue40034] cgi.parse() does not work with multipart POST requests. Message-ID: <1584811428.88.0.0218295989168.issue40034@roundup.psfhosted.org> New submission from San : The cgi.parse stdlib function works in most cases but never works when given a multipart/form-data POST request because it does not set up pdict in a way cgi.parse_multipart() likes (boundary as bytes (not str) and including content length). $ pwd /tmp $ $ /tmp/cpython/python --version Python 3.9.0a4+ $ $ cat cgi-bin/example.cgi #!/tmp/cpython/python import sys, cgi query_dict = cgi.parse() write = sys.stdout.buffer.write write("Content-Type: text/plain; charset=utf-8\r\n\r\n".encode("ascii")) write(f"Worked, query dict is {query_dict}.\n".encode()) $ $ /tmp/cpython/python -m http.server --cgi & sleep 1 [1] 30201 Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... $ $ # GET (url-encoded) requests work: $ curl localhost:8000/cgi-bin/example.cgi?example_key=example_value 127.0.0.1 - - [20/Mar/2020 23:33:48] "GET /cgi-bin/example.cgi?example_key=example_value HTTP/1.1" 200 - Worked, query dict is {'example_key': ['example_value']}. $ $ # POST (multipart) requests do not: $ curl localhost:8000/cgi-bin/example.cgi -F example_key=example_value 127.0.0.1 - - [20/Mar/2020 23:34:15] "POST /cgi-bin/example.cgi HTTP/1.1" 200 - Traceback (most recent call last): File "/tmp/cgi-bin/example.cgi", line 3, in query_dict = cgi.parse() File "/tmp/cpython/Lib/cgi.py", line 159, in parse return parse_multipart(fp, pdict) File "/tmp/cpython/Lib/cgi.py", line 201, in parse_multipart boundary = pdict['boundary'].decode('ascii') AttributeError: 'str' object has no attribute 'decode' 127.0.0.1 - - [20/Mar/2020 23:34:16] CGI script exit status 0x100 $ $ $EDITOR /tmp/cpython/Lib/cgi.py $ $ # After changing cgi.parse POST (multipart) requests work: $ curl localhost:8000/cgi-bin/example.cgi -F example_key=example_value 127.0.0.1 - - [20/Mar/2020 23:35:10] "POST /cgi-bin/example.cgi HTTP/1.1" 200 - Worked, query dict is {'example_key': ['example_value']}. $ ---------- components: Library (Lib) messages: 364762 nosy: sangh priority: normal severity: normal status: open title: cgi.parse() does not work with multipart POST requests. 
type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 13:32:59 2020 From: report at bugs.python.org (Hamed Elahi) Date: Sat, 21 Mar 2020 17:32:59 +0000 Subject: [New-bugs-announce] [issue40035] ilteration of the .txt file Message-ID: <1584811979.62.0.778885388555.issue40035@roundup.psfhosted.org> New submission from Hamed Elahi : Hello, I saw a problem when this line of code is used in Python 3: settings = [line for line in settings if (line!='' and line[0] != '#')] Before updating windows, this line of code filtered the texts from the beginning of .txt file, so only the first lines remained after filtration, but now, after windows update, it filters the texts so that the last rows of .txt file will remain after filtration. Shouldn't a line of code acts the same in different updates? ---------- components: Windows messages: 364763 nosy: Hamed, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ilteration of the .txt file type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 21 18:12:47 2020 From: report at bugs.python.org (AlphaHot) Date: Sat, 21 Mar 2020 22:12:47 +0000 Subject: [New-bugs-announce] [issue40036] Deleting duplicates in itertoolsmodule.c Message-ID: <1584828767.94.0.279667413693.issue40036@roundup.psfhosted.org> Change by AlphaHot : ---------- nosy: AlphaHot priority: normal pull_requests: 18466 severity: normal status: open title: Deleting duplicates in itertoolsmodule.c versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 00:34:22 2020 From: report at bugs.python.org (Vikash Balasubramanian) Date: Sun, 22 Mar 2020 04:34:22 +0000 Subject: [New-bugs-announce] [issue40037] py_compile.py quiet undefined in main function Message-ID: <1584851662.52.0.134936419172.issue40037@roundup.psfhosted.org> New submission from Vikash Balasubramanian : I just had a random crash while using the autocomplete feature of Emacs. The error message was basically thrown by py_compile.py:213 quiet is undefined. I tracked down the code and sure enough, there is a comparison if quiet < 2: .... Please confirm this. ---------- components: Library (Lib) messages: 364780 nosy: Vikash Balasubramanian priority: normal severity: normal status: open title: py_compile.py quiet undefined in main function type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 01:15:30 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 22 Mar 2020 05:15:30 +0000 Subject: [New-bugs-announce] [issue40038] pathlib: remove partial support for preserving accessor when modifying a path Message-ID: <1584854130.01.0.225016964026.issue40038@roundup.psfhosted.org> New submission from Barney Gale : `pathlib.Path._init()` accepts a 'template' argument that pathlib uses - in some cases - to preserve the current accessor object when creating modified path objects. This works for `resolve()`, `absolute()` and `readlink()`, *but no other cases*! As customizing the accessor is not something we support (yet! 
see https://discuss.python.org/t/make-pathlib-extensible/3428), and the majority of path methods do not call `_init()` with a 'template' argument (and so do not preserve the accessor), I suggest this internal functionality be removed. Together with bpo-39682 / gh-18846, I believe this would allow us to remove `Path._init()` entirely, which would be a small performance win and a simplification of the code. Demo: ``` import pathlib class CustomAccessor(pathlib._NormalAccessor): pass def print_accessor(path): if isinstance(path._accessor, CustomAccessor): print(" %r: custom" % path) else: print(" %r: normal" % path) print("Here's a path with a custom accessor:") p = pathlib.Path("/tmp") p._accessor = CustomAccessor() print_accessor(p) print("Our accessor type is retained in resolve(), absolute() and readlink():") print_accessor(p.absolute()) print_accessor(p.resolve()) #print_accessor(p.readlink()) print("But not in any other path-creating methods!") print_accessor(p.with_name("foo")) print_accessor(p.with_suffix(".foo")) print_accessor(p.relative_to("/")) print_accessor(p / "foo") print_accessor(p.joinpath("foo")) print_accessor(p.parent) print_accessor(p.parents[0]) print_accessor(list(p.iterdir())[0]) print_accessor(list(p.glob("*"))[0]) print_accessor(list(p.rglob("*"))[0]) #print_accessor(p.expanduser()) ``` ---------- components: Library (Lib) messages: 364783 nosy: barneygale priority: normal severity: normal status: open title: pathlib: remove partial support for preserving accessor when modifying a path type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 01:49:22 2020 From: report at bugs.python.org (Junyu Zhang) Date: Sun, 22 Mar 2020 05:49:22 +0000 Subject: [New-bugs-announce] [issue40039] [CVE-2020-10796] Python multiprocessing Remote Code Execution vulnerability Message-ID: <1584856162.09.0.166116541026.issue40039@roundup.psfhosted.org> Change by Junyu Zhang : ---------- components: Library (Lib) files: Python-multiprocessing-RCE-vulnerability.pdf nosy: Junyu Zhang priority: normal severity: normal status: open title: [CVE-2020-10796] Python multiprocessing Remote Code Execution vulnerability type: security versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48994/Python-multiprocessing-RCE-vulnerability.pdf _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 05:59:42 2020 From: report at bugs.python.org (Kjell Braden) Date: Sun, 22 Mar 2020 09:59:42 +0000 Subject: [New-bugs-announce] [issue40040] ProactorEventLoop fails on recvfrom with IPv6 sockets Message-ID: <1584871182.68.0.304558218275.issue40040@roundup.psfhosted.org> New submission from Kjell Braden : on Windows 10 with Python 3.8.2 and Python 3.9.0a4, the ProactorEventLoop raises "OSError: [WinError 87] The parameter is incorrect" when recvfrom on an AF_INET6 socket returns data: DEBUG:asyncio:Using proactor: IocpProactor INFO:asyncio:Datagram endpoint local_addr=('::', 11111) remote_addr=None created: (<_ProactorDatagramTransport fd=288>, <__main__.Prot object at 0x0000028739A09580>) ERROR:root:error_received Traceback (most recent call last): File "...\Python\Python39\lib\asyncio\proactor_events.py", line 548, in _loop_reading res = fut.result() File "...\Python\Python39\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, 
ov) File "...\Python\Python39\lib\asyncio\windows_events.py", line 496, in finish_recv return ov.getresult() OSError: [WinError 87] The parameter is incorrect The same code works without issues on python 3.7 or when using WindowsSelectorEventLoopPolicy. ---------- components: asyncio files: udp_ipv6_server.py messages: 364794 nosy: asvetlov, kbr, yselivanov priority: normal severity: normal status: open title: ProactorEventLoop fails on recvfrom with IPv6 sockets type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48995/udp_ipv6_server.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 07:40:31 2020 From: report at bugs.python.org (Yasunobu Imamura) Date: Sun, 22 Mar 2020 11:40:31 +0000 Subject: [New-bugs-announce] [issue40041] Typo in argparse ja-document wrong:'append', correct:'extend' Message-ID: <1584877231.67.0.450516943499.issue40041@roundup.psfhosted.org> New submission from Yasunobu Imamura : In Japanese document of argparse, https://docs.python.org/ja/3/library/argparse.html In explain of action, ENGLISH DOCUMENT: ..., 'append', ..., 'extend' JAPANESE DOCUMENT: ..., 'append', ..., 'append' So, Japanese document is wrong. ---------- assignee: docs at python components: Documentation messages: 364797 nosy: Yasunobu Imamura, docs at python priority: normal severity: normal status: open title: Typo in argparse ja-document wrong:'append', correct:'extend' type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 08:45:27 2020 From: report at bugs.python.org (Ethan Furman) Date: Sun, 22 Mar 2020 12:45:27 +0000 Subject: [New-bugs-announce] [issue40042] Enum Flag: psuedo-members have None for name attribute Message-ID: <1584881127.96.0.708352464709.issue40042@roundup.psfhosted.org> Change by Ethan Furman : ---------- assignee: ethan.furman nosy: ethan.furman priority: normal severity: normal stage: needs patch status: open title: Enum Flag: psuedo-members have None for name attribute versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 13:44:39 2020 From: report at bugs.python.org (Leon Hampton) Date: Sun, 22 Mar 2020 17:44:39 +0000 Subject: [New-bugs-announce] [issue40043] Poor RegEx example for (?(id/name)yes-pattern|no-pattern) Message-ID: <1584899079.21.0.403850143741.issue40043@roundup.psfhosted.org> New submission from Leon Hampton : Hello, In the 3.7.7 documentation on Regular Expression, the Conditional Construct, (?(id/name)yes-pattern|no-pattern), is discussed. (This is a very thorough document, by the way. Good job!) One example given for the Conditional Construct does not work as described. Specifically, the example gives this matching pattern '(<)?(\w+@\w+(?:\.\w+)+)(?(1)>|$)' and states that it will NOT MATCH the string ' _______________________________________ From report at bugs.python.org Sun Mar 22 15:36:36 2020 From: report at bugs.python.org (Charalampos Stratakis) Date: Sun, 22 Mar 2020 19:36:36 +0000 Subject: [New-bugs-announce] [issue40044] Tests failing with the latest update of openssl to version 1.1.1e Message-ID: <1584905796.83.0.576027833923.issue40044@roundup.psfhosted.org> New submission from Charalampos Stratakis : The fedora rawhide buildbots started failing due to the latest update of openssl to version 1.1.1e. 
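On the conditional-pattern report above (issue40043): the documented example pattern can be exercised directly. A short sketch, assuming the strings used in the `re` documentation example ('<user@host.com>' and variants); one easy way to misread the example is that re.search() may still find a shorter match inside a string that re.match() rejects:

```python
import re

pattern = r'(<)?(\w+@\w+(?:\.\w+)+)(?(1)>|$)'

print(re.fullmatch(pattern, '<user@host.com>'))  # match: '<' seen, so '>' is required
print(re.fullmatch(pattern, 'user@host.com'))    # match: no '<', so end of string is required
print(re.match(pattern, '<user@host.com'))       # None: opening '<' without a closing '>'
print(re.match(pattern, 'user@host.com>'))       # None: trailing '>' without an opening '<'

# search() is free to start later in the string, so it still finds the bare
# address inside '<user@host.com' even though match() rejects the whole string.
print(re.search(pattern, '<user@host.com'))      # matches 'user@host.com' at offset 1
```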
e.g. https://buildbot.python.org/all/#/builders/607/builds/137 Changelog: https://www.openssl.org/news/cl111.txt The relevant info which seems to make the tests fail: Properly detect EOF while reading in libssl. Previously if we hit an EOF while reading in libssl then we would report an error back to the application (SSL_ERROR_SYSCALL) but errno would be 0. We now add an error to the stack (which means we instead return SSL_ERROR_SSL) and therefore give a hint as to what went wrong. Upstream PR: https://github.com/openssl/openssl/pull/10882 urllib3 issue: https://github.com/urllib3/urllib3/issues/1825 ---------- assignee: christian.heimes components: Library (Lib), SSL, Tests messages: 364818 nosy: christian.heimes, cstratak priority: normal severity: normal status: open title: Tests failing with the latest update of openssl to version 1.1.1e versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 22 20:50:33 2020 From: report at bugs.python.org (Kyle Stanley) Date: Mon, 23 Mar 2020 00:50:33 +0000 Subject: [New-bugs-announce] [issue40045] Make "dunder" method documentation easier to locate Message-ID: <1584924633.15.0.655954011008.issue40045@roundup.psfhosted.org> New submission from Kyle Stanley : In a recent python-ideas thread, the rule of dunder methods being reserved for Python internal usage only was brought up (https://mail.python.org/archives/list/python-ideas at python.org/message/GMRPSSQW3SXNCP4WU7SYDINL67M2WLQI/), due to an author of a third party library using them without knowing better. Steven D'Aprano linked the following section of the docs that defines the rule: https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers. When I had attempted to search for the rule in the documentation (prior to the above discussion), I noticed that it was rather difficult to discover because it was written just as "System-defined names" with no mention of "dunder" (which is what the dev community typically refers to them as, at least in more recent history). To make it easier for the average user and library maintainer to locate this section, I propose changing the first line to one of the following: 1) System-defined names, also known as "dunder" names. 2) System-defined names, informally known as "dunder" names. I'm personally in favor of (1), but I could also see a reasonable argument for (2). If we can decide on the wording, it would make for a good first-time PR to the CPython docs. ---------- assignee: docs at python components: Documentation keywords: easy, newcomer friendly messages: 364832 nosy: aeros, docs at python priority: normal severity: normal status: open title: Make "dunder" method documentation easier to locate type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 23 05:01:11 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 23 Mar 2020 09:01:11 +0000 Subject: [New-bugs-announce] [issue40046] Increase test coverage of the random module Message-ID: <1584954071.53.0.240057665768.issue40046@roundup.psfhosted.org> New submission from Serhiy Storchaka : The propose test adds several tests for random module. Mainly tests for integer, sequence and iterable arguments. It also documents that randrange() accepts non-integers. 
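On issue40046 above: "randrange() accepts non-integers" refers to arguments such as 10.0 that compare equal to an integer, which the pure-Python implementation currently accepts, while non-integral values are rejected. A small sketch of that distinction (treat it as an implementation detail that the new tests would pin down):

```python
import random

random.seed(12345)

print(random.randrange(10))    # the usual integer case
print(random.randrange(10.0))  # a float with an integral value is accepted

try:
    random.randrange(10.5)     # a non-integral value raises
except ValueError as exc:
    print("rejected:", exc)
```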
---------- assignee: docs at python components: Documentation, Tests messages: 364840 nosy: docs at python, mark.dickinson, rhettinger, serhiy.storchaka, tim.peters priority: normal severity: normal status: open title: Increase test coverage of the random module type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 23 10:34:33 2020 From: report at bugs.python.org (=?utf-8?q?Peter_W=C3=BCrtz?=) Date: Mon, 23 Mar 2020 14:34:33 +0000 Subject: [New-bugs-announce] [issue40047] itertools.tee does not release resources during iteration? Message-ID: <1584974073.92.0.811600975239.issue40047@roundup.psfhosted.org> New submission from Peter Würtz : Itertools `tee` does not seem to de-reference yielded items, even after consumption of all items from all tee-iterators. According to the documentation (to my understanding), there shouldn't be any extra memory requirement as long as the tee-iterators are consumed in a balanced way. I.e. after an item was pulled from all iterators there shouldn't be any residual reference to it. This is true for the example implementation mentioned in the documentation, but `itertools.tee` doesn't de-reference items until the tee-iterator itself is deleted: https://pastebin.com/r3JUkH41 Is this a bug or am I missing something? ---------- components: Library (Lib) messages: 364849 nosy: pwuertz priority: normal severity: normal status: open title: itertools.tee does not release resources during iteration? type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 23 10:36:53 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 23 Mar 2020 14:36:53 +0000 Subject: [New-bugs-announce] [issue40048] _PyEval_EvalFrameDefault() doesn't reset tstate->frame if _PyCode_InitOpcache() fails Message-ID: <1584974213.54.0.484285807208.issue40048@roundup.psfhosted.org> New submission from STINNER Victor : tstate->frame is a borrowed reference to the current frame object. It is set to the frame when _PyEval_EvalFrameDefault() is entered and reset to frame->f_back when _PyEval_EvalFrameDefault() exits. Problem: when _PyCode_InitOpcache() fails, tstate->frame is not reset to frame->f_back.
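On the itertools.tee report above (issue40047): the claim can be probed with weak references, checking whether items already consumed from both tee iterators are still kept alive. A minimal sketch of such a probe; it only reports what it observes and does not presume which outcome is intended:

```python
import itertools
import weakref

class Item:
    pass

items = [Item() for _ in range(3)]
refs = [weakref.ref(item) for item in items]

a, b = itertools.tee(iter(items))
del items                  # drop our own strong references to the objects

list(a)                    # consume every item from both tee iterators
list(b)
print("alive after consuming both:", [r() is not None for r in refs])

del a, b                   # drop the tee iterators themselves
print("alive after deleting tees:  ", [r() is not None for r in refs])
```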
---------- components: Interpreter Core messages: 364850 nosy: vstinner priority: normal severity: normal status: open title: _PyEval_EvalFrameDefault() doesn't reset tstate->frame if _PyCode_InitOpcache() fails versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 23 11:54:18 2020 From: report at bugs.python.org (Danijel) Date: Mon, 23 Mar 2020 15:54:18 +0000 Subject: [New-bugs-announce] [issue40049] tarfile cannot extract from stdin Message-ID: <1584978858.02.0.829307540722.issue40049@roundup.psfhosted.org> New submission from Danijel : Hi, I have the following code: ``` import tarfile import sys tar = tarfile.open(fileobj=sys.stdin.buffer, mode='r|*') tar.extractall("tarout") tar.close() ``` then doing the following on a debian 10 system: ``` $ python -m tarfile -c git.tar /usr/share/doc/git $ python -V Python 3.8.1 $ cat git.tar | python foo.py $ cat git.tar | python foo.py Traceback (most recent call last): File "foo.py", line 5, in tar.extractall("tarout") File "/home/danielt/miniconda3/lib/python3.8/tarfile.py", line 2026, in extractall self.extract(tarinfo, path, set_attrs=not tarinfo.isdir(), File "/home/danielt/miniconda3/lib/python3.8/tarfile.py", line 2067, in extract self._extract_member(tarinfo, os.path.join(path, tarinfo.name), File "/home/danielt/miniconda3/lib/python3.8/tarfile.py", line 2139, in _extract_member self.makefile(tarinfo, targetpath) File "/home/danielt/miniconda3/lib/python3.8/tarfile.py", line 2178, in makefile source.seek(tarinfo.offset_data) File "/home/danielt/miniconda3/lib/python3.8/tarfile.py", line 513, in seek raise StreamError("seeking backwards is not allowed") tarfile.StreamError: seeking backwards is not allowed ``` The second extraction trys to seek, although the mode is 'r|*'. 
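On the tarfile report above (issue40049): in stream mode ('r|*') the archive must be processed strictly sequentially. The sketch below shows the member-by-member streaming pattern that avoids any seeking; it illustrates the intended API and makes no claim about the reported extractall() behavior (the printed names are simply whatever the archive contains):

```python
import sys
import tarfile

# Read a tar archive from stdin without seeking: walk the members in order
# and read each one's data while the stream is positioned at it.
with tarfile.open(fileobj=sys.stdin.buffer, mode="r|*") as tar:
    for member in tar:
        if member.isfile():
            data = tar.extractfile(member).read()  # read before advancing
            print(member.name, len(data), file=sys.stderr)
```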
For reference if I remove ".buffer" from the code above, I can run it with python2 without problems: ``` $ cat foo2.py import tarfile import sys tar = tarfile.open(fileobj=sys.stdin, mode='r|*') tar.extractall("tarout") tar.close() $ cat git.tar | python2 foo2.py $ cat git.tar | python2 foo2.py $ cat git.tar | python2 foo2.py $ cat git.tar | python2 foo2.py $ cat git.tar | python2 foo2.py ``` ---------- components: Library (Lib) messages: 364860 nosy: dtamuc priority: normal severity: normal status: open title: tarfile cannot extract from stdin versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Mar 23 18:41:03 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 23 Mar 2020 22:41:03 +0000 Subject: [New-bugs-announce] [issue40050] test_importlib leaked [6303, 6299, 6303] references Message-ID: <1585003263.45.0.397695046047.issue40050@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/206/builds/119 test_importlib leaked [6303, 6299, 6303] references, sum=18905 test_importlib leaked [2022, 2020, 2022] memory blocks, sum=6064 Issue reported at: https://bugs.python.org/issue1635741#msg364845 It seems like the regression was introduced by the following change: commit 8334f30a74abcf7e469b901afc307887aa85a888 (HEAD) Author: Hai Shi Date: Fri Mar 20 16:16:45 2020 +0800 bpo-1635741: Port _weakref extension module to multiphase initialization (PEP 489) (GH-19084) ---------- components: Tests messages: 364905 nosy: vstinner priority: normal severity: normal status: open title: test_importlib leaked [6303, 6299, 6303] references versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 01:06:26 2020 From: report at bugs.python.org (wyz23x2) Date: Tue, 24 Mar 2020 05:06:26 +0000 Subject: [New-bugs-announce] [issue40051] Dead link in help(lib2to3) Message-ID: <1585026386.04.0.352572849659.issue40051@roundup.psfhosted.org> New submission from wyz23x2 : When typing this in shell: >>> import lib2to3 >>> help(lib2to3) The output contains this link: --snip-- MODULE REFERENCE https://docs.python.org/3.8/library/lib2to3 <-- The following documentation is automatically generated from the Python --snip-- But when you access it, 404! This works: https://docs.python.org/3.8/library/2to3.html#module-lib2to3 Please change it. Thanks! 
---------- assignee: docs at python components: 2to3 (2.x to 3.x conversion tool), Documentation, Library (Lib) messages: 364917 nosy: docs at python, wyz23x2 priority: normal severity: normal status: open title: Dead link in help(lib2to3) type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 02:58:12 2020 From: report at bugs.python.org (Andreas Schneider) Date: Tue, 24 Mar 2020 06:58:12 +0000 Subject: [New-bugs-announce] [issue40052] Incorrect pointer alignment in _PyVectorcall_Function() of cpython/abstract.h Message-ID: <1585033092.63.0.928625583781.issue40052@roundup.psfhosted.org> New submission from Andreas Schneider : In file included from /builds/cryptomilk/pam_wrapper/src/python/pypamtest.c:21: In file included from /usr/include/python3.8/Python.h:147: In file included from /usr/include/python3.8/abstract.h:837: /usr/include/python3.8/cpython/abstract.h:91:11: error: cast from 'char *' to 'vectorcallfunc *' (aka 'struct _object *(**)(struct _object *, struct _object *const *, unsigned long, struct _object *)') increases required alignment from 1 to 8 [-Werror,-Wcast-align] ptr = (vectorcallfunc*)(((char *)callable) + offset); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1 error generated. The correct way to do it would be: union { char *data; vectorcallfunc *ptr; } vc; vc.data = (char *)callable + offset; return *vc.ptr; ---------- components: C API messages: 364919 nosy: asn priority: normal severity: normal status: open title: Incorrect pointer alignment in _PyVectorcall_Function() of cpython/abstract.h _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 03:21:17 2020 From: report at bugs.python.org (Nan Hua) Date: Tue, 24 Mar 2020 07:21:17 +0000 Subject: [New-bugs-announce] [issue40053] Document the behavior that no interplotation is applied when no *args are passed in for logging statements Message-ID: <1585034477.86.0.546029969231.issue40053@roundup.psfhosted.org> New submission from Nan Hua : As I see, Python's logging module's implementation has a nice property that, when no additional args are passed in, the msg (first argument) will be directly printed. For example, logging.error('abc %s') can be handled peacefully with printing "ERROR:root:abc %s" in the log. However, the logging's documentation only said the followings: "The msg is the message format string, and the args are the arguments which are merged into msg using the string formatting operator." >From what I see, this implementation (seems the case for both Python2 and Python3) has many benefits: saving CPU resources, safe handling pre-formated string, etc. More importantly, it also de-facto allows using the convenient f-string in logging statement, e.g. logging.error(f'Started at {start_time}, finished at {finish_time}' f' by user {user}') can run correctly and smoothly even with user containing %s inside. In summary, I hope this de-facto actual behavior can be officially endorsed, with wordings like, "When *args is empty, i.e. no additional positional arguments passed in, the msg be of any string (no need to be a format string) and will be directly used as is without interpolation." What do you think? Thank you a lot! 
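On the logging report above (issue40053): the behavior in question is that LogRecord.getMessage() only applies %-formatting when positional args were actually passed, so a message with no args is emitted verbatim. A short sketch of the difference:

```python
import logging

logging.basicConfig(level=logging.INFO)

logging.info("progress: 50% done")       # no args: the string is logged as-is
logging.info("progress: %d%% done", 50)  # args given: %-formatting is applied

user = "alice%s"                         # data that happens to contain '%s'
logging.info(f"started by {user}")       # pre-formatted f-string with no args,
                                         # so the embedded '%s' stays untouched
```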
---------- components: Library (Lib) messages: 364920 nosy: nhua priority: normal severity: normal status: open title: Document the behavior that no interplotation is applied when no *args are passed in for logging statements type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 11:29:16 2020 From: report at bugs.python.org (Richard Neumann) Date: Tue, 24 Mar 2020 15:29:16 +0000 Subject: [New-bugs-announce] [issue40054] Allow formatted strings as docstrings Message-ID: <1585063756.05.0.0379809062913.issue40054@roundup.psfhosted.org> New submission from Richard Neumann : Currently only plain strings can be used as docstrings, such as: class Foo: """Spamm eggs.""" For dynamic class generation, it would be useful to allow format strings as docstrings as well: doc = 'eggs' class Foo: """Spamm {}.""".format(doc) or: doc = 'eggs' class Foo: f"""Spamm {doc}.""" A current use case in which I realized that this feature was missing is: class OAuth2ClientMixin(Model, ClientMixin): # pylint: disable=R0904 """An OAuth 2.0 client mixin for peewee models.""" @classmethod def get_related_models(cls, model=Model): """Yields related models.""" for mixin, backref in CLIENT_RELATED_MIXINS: yield cls._get_related_model(model, mixin, backref) @classmethod def _get_related_model(cls, model, mixin, backref): """Returns an implementation of the related model.""" class ClientRelatedModel(model, mixin): f"""Implementation of {mixin.__name__}.""" client = ForeignKeyField( cls, column_name='client', backref=backref, on_delete='CASCADE', on_update='CASCADE') return ClientRelatedModel It actually *is* possible to dynamically set the docstring via the __doc__ attribute: doc = 'eggs' class Foo: pass Foo.__doc__ = doc Allowing format strings would imho be more obvious when reading the code as it is set, where a docstring is expected i.e. below the class / function definition. 
---------- messages: 364934 nosy: conqp priority: normal severity: normal status: open title: Allow formatted strings as docstrings type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 12:51:30 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 24 Mar 2020 16:51:30 +0000 Subject: [New-bugs-announce] [issue40055] test___all__ and test_distutils alters the enviroinment: pkg_resources.PEP440Warning Message-ID: <1585068690.81.0.102478946521.issue40055@roundup.psfhosted.org> New submission from STINNER Victor : Even when no test is run, test_distutils alters the environment: $ ./python -m test -v --fail-env-changed test_distutils -m DONTEXISTS == CPython 3.7.7+ (heads/3.7:1cdc61c767, Mar 24 2020, 17:25:30) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] == Linux-5.5.9-200.fc31.x86_64-x86_64-with-fedora-31-Thirty_One little-endian == cwd: /home/vstinner/python/3.7/build/test_python_157151 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 0.37 Run tests sequentially 0:00:00 load avg: 0.37 [1/1] test_distutils ---------------------------------------------------------------------- Ran 0 tests in 0.001s OK Warning -- warnings.filters was modified by test_distutils Before: (140048876788832, [], []) After: (140048876788832, [], [('ignore', None, , None, 0)]) test_distutils run no tests == Tests result: NO TEST RUN == 1 test run no tests: test_distutils Total duration: 655 ms Tests result: NO TEST RUN The problem comes from Lib/distutils/tests/test_check.py: "from distutils.command.check import check, HAS_DOCUTILS" imports indirectly the docutils module which imports pkg_resources. pkg_resources changes warnings filters. docutils is installed by python3-docutils-0.15.2-1.fc31.noarch package and pkg_resources comes from python3-setuptools-41.6.0-1.fc31.noarch package. Attached PR disables docutils to avoid side effects of "import docutils" like pkg_resources modifying warnings filters. ---------- components: Distutils, Tests messages: 364941 nosy: dstufft, eric.araujo, vstinner priority: normal severity: normal status: open title: test___all__ and test_distutils alters the enviroinment: pkg_resources.PEP440Warning versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 16:15:15 2020 From: report at bugs.python.org (Evin Liang) Date: Tue, 24 Mar 2020 20:15:15 +0000 Subject: [New-bugs-announce] [issue40056] more user-friendly turtledemo Message-ID: <1585080915.97.0.255153547178.issue40056@roundup.psfhosted.org> New submission from Evin Liang : [minor] 1. Display underscores as spaces in menu bar 2. 
Allow user to run custom code ---------- components: Library (Lib) messages: 364961 nosy: Evin Liang priority: normal pull_requests: 18506 severity: normal status: open title: more user-friendly turtledemo type: enhancement versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 17:04:04 2020 From: report at bugs.python.org (OverMighty) Date: Tue, 24 Mar 2020 21:04:04 +0000 Subject: [New-bugs-announce] [issue40057] Missing mention of some class attributes in socketserver documentation Message-ID: <1585083844.09.0.775894389563.issue40057@roundup.psfhosted.org> New submission from OverMighty : The documentation of the `socketserver` module of the Python standard library, available here: https://docs.python.org/3/library/socketserver.html, doesn't mention the existence of the following class attributes: StreamRequestHandler.connection (defined at line 764 of socketserver.py) DatagramRequestHandler.packet (defined at line 812 of socketserver.py) DatagramRequestHandler.socket (defined at line 812 of socketserver.py) ---------- assignee: docs at python components: Documentation messages: 364962 nosy: docs at python, overmighty priority: normal severity: normal status: open title: Missing mention of some class attributes in socketserver documentation type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Mar 24 21:54:40 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 25 Mar 2020 01:54:40 +0000 Subject: [New-bugs-announce] [issue40058] Running test_datetime twice fails with: module 'datetime' has no attribute '_divide_and_round' Message-ID: <1585101280.02.0.657415835319.issue40058@roundup.psfhosted.org> New submission from STINNER Victor : vstinner at apu$ ./python -m test -v test_datetime test_datetime -m test_divide_and_round == CPython 3.9.0a5+ (heads/pr/19122:0ac3031a80, Mar 25 2020, 02:25:19) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] == Linux-5.5.9-200.fc31.x86_64-x86_64-with-glibc2.30 little-endian == cwd: /home/vstinner/python/master/build/test_python_233006 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 0.82 Run tests sequentially 0:00:00 load avg: 0.82 [1/2] test_datetime test_divide_and_round (test.datetimetester.TestModule_Pure) ... ok test_divide_and_round (test.datetimetester.TestModule_Fast) ... skipped 'Only run for Pure Python implementation' ---------------------------------------------------------------------- Ran 2 tests in 0.002s OK (skipped=1) 0:00:00 load avg: 0.82 [2/2] test_datetime test_divide_and_round (test.datetimetester.TestModule_Pure) ... ERROR test_divide_and_round (test.datetimetester.TestModule_Fast) ... 
skipped 'Only run for Pure Python implementation' ====================================================================== ERROR: test_divide_and_round (test.datetimetester.TestModule_Pure) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/python/master/Lib/test/datetimetester.py", line 87, in test_divide_and_round dar = datetime_module._divide_and_round AttributeError: module 'datetime' has no attribute '_divide_and_round' ---------------------------------------------------------------------- Ran 2 tests in 0.006s FAILED (errors=1, skipped=1) test test_datetime failed test_datetime failed == Tests result: FAILURE == 1 test OK. 1 test failed: test_datetime Total duration: 448 ms Tests result: FAILURE ---------- components: Tests messages: 364970 nosy: vstinner priority: normal severity: normal status: open title: Running test_datetime twice fails with: module 'datetime' has no attribute '_divide_and_round' versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 02:54:21 2020 From: report at bugs.python.org (=?utf-8?b?TWljaGHFgiBHw7Nybnk=?=) Date: Wed, 25 Mar 2020 06:54:21 +0000 Subject: [New-bugs-announce] [issue40059] Provide a toml module in the standard library Message-ID: <1585119261.47.0.818238682424.issue40059@roundup.psfhosted.org> New submission from Micha? G?rny : PEP 518 uses the TOML format to specify build system requirements. AFAIU this means that all new build systems will require a TOML parser. Could you consider adding one to the standard library to reduce the number of chicken-egg problems? The referenced PEP states that 'pytoml TOML parser is ~300 lines of pure Python code', so I don't think integrating it would be a large maintenance cost. [1] https://www.python.org/dev/peps/pep-0518/ ---------- components: Library (Lib) messages: 364979 nosy: brett.cannon, dstufft, mgorny, njs priority: normal severity: normal status: open title: Provide a toml module in the standard library type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 03:20:26 2020 From: report at bugs.python.org (Dima Tisnek) Date: Wed, 25 Mar 2020 07:20:26 +0000 Subject: [New-bugs-announce] [issue40060] socket.TCP_NOTSENT_LOWAT is missing in official macOS builds Message-ID: <1585120826.63.0.863724436073.issue40060@roundup.psfhosted.org> New submission from Dima Tisnek : Somehow, it turns out that `TCP_NOTSENT_LOWAT` that's available since 3.7.x is not available in the official macOS builds ?: > python3.7 Python 3.7.4 (v3.7.4:e09359112e, Jul 8 2019, 14:54:52) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.TCP_NOTSENT_LOWAT Traceback (most recent call last): File "", line 1, in AttributeError: module 'socket' has no attribute 'TCP_NOTSENT_LOWAT' > python3.8 Python 3.8.2 (v3.8.2:7b3ab5921f, Feb 24 2020, 17:52:18) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import socket >>> socket.TCP_NOTSENT_LOWAT Traceback (most recent call last): File "", line 1, in AttributeError: module 'socket' has no attribute 'TCP_NOTSENT_LOWAT' > python3.9 Python 3.9.0a4 (v3.9.0a4:6e02691f30, Feb 25 2020, 18:14:13) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.TCP_NOTSENT_LOWAT Traceback (most recent call last): File "", line 1, in AttributeError: module 'socket' has no attribute 'TCP_NOTSENT_LOWAT' And my local build has it ?: > ~/cpython/python.exe Python 3.9.0a4+ (heads/master:be501ca241, Mar 4 2020, 15:16:49) [Clang 10.0.1 (clang-1001.0.46.4)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.TCP_NOTSENT_LOWAT 513 So... my guess is official builds are using old SDK or header files. ? My system has it e.g. here: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Kernel.framework/Versions/A/Headers/netinet/tcp.h 230:#define TCP_NOTSENT_LOWAT 0x201 /* Low water mark for TCP unsent data */ And in fact it's present in every `netinet/tcp.h` on my system: CommandLineTools 10.14 and 10.5 sdks; MacOSX dev sdk, {AppleTV,Watch,iPhone}{OS,Simulator} sdks. ---------- components: Extension Modules messages: 364984 nosy: Dima.Tisnek, Mariatta, njs priority: normal severity: normal status: open title: socket.TCP_NOTSENT_LOWAT is missing in official macOS builds versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 05:36:53 2020 From: report at bugs.python.org (Kyle Stanley) Date: Wed, 25 Mar 2020 09:36:53 +0000 Subject: [New-bugs-announce] [issue40061] Possible refleak in _asynciomodule.c future_add_done_callback() Message-ID: <1585129013.48.0.185605805596.issue40061@roundup.psfhosted.org> New submission from Kyle Stanley : When using the `test-with-buildbots` label in GH-19149 (which involved no C changes), a failure occurred in test_asyncio for several of the refleak buildbots. Here's the output of a few: AMD64 Fedora Stable Refleaks PR: test_asyncio leaked [3, 3, 27] references, sum=33 test_asyncio leaked [3, 3, 28] memory blocks, sum=34 2 tests failed again: test__xxsubinterpreters test_asyncio == Tests result: FAILURE then FAILURE == AMD64 RHEL8 Refleaks PR: test_asyncio leaked [3, 3, 3] references, sum=9 test_asyncio leaked [3, 3, 3] memory blocks, sum=9 2 tests failed again: test__xxsubinterpreters test_asyncio == Tests result: FAILURE then FAILURE == RHEL7 Refleaks PR: test_asyncio leaked [3, 3, 3] references, sum=9 test_asyncio leaked [3, 3, 3] memory blocks, sum=9 2 tests failed again: test__xxsubinterpreters test_asyncio == Tests result: FAILURE then FAILURE == I'm unable to replicate it locally, but I think I may have located a subtle, uncommon refleak in `future_add_done_callback()`, within _asynciomodule.c. Specifically: ``` PyObject *tup = PyTuple_New(2); if (tup == NULL) { return NULL; } Py_INCREF(arg); PyTuple_SET_ITEM(tup, 0, arg); Py_INCREF(ctx); PyTuple_SET_ITEM(tup, 1, (PyObject *)ctx); if (fut->fut_callbacks != NULL) { int err = PyList_Append(fut->fut_callbacks, tup); if (err) { Py_DECREF(tup); return NULL; } Py_DECREF(tup); } else { fut->fut_callbacks = PyList_New(1); if (fut->fut_callbacks == NULL) { // Missing ``Py_DECREF(tup);`` ? 
return NULL; } ``` (The above code is located at: https://github.com/python/cpython/blob/7668a8bc93c2bd573716d1bea0f52ea520502b28/Modules/_asynciomodule.c#L664-L685) In the above conditional for "if (fut->fut_callbacks == NULL)", it appears that `tup` is pointing to a non-NULL new reference at this point, and thus should be decref'd prior to returning NULL. Otherwise, it seems like it could be leaked. But, I would appreciate it if someone could double check this (the C API isn't an area I'm experienced in); particularly since this code has been in place for a decent while (since 3.7). I _suspect_ it's gone undetected and only failed intermittently because this specific ``return NULL`` path is rather uncommon. I'd be glad to open a PR to address the issue, assuming I'm not missing something with the above refleak. Otherwise, feel free to correct me. ---------- assignee: aeros components: C API, Extension Modules messages: 364985 nosy: aeros, asvetlov, yselivanov priority: high severity: normal status: open title: Possible refleak in _asynciomodule.c future_add_done_callback() type: resource usage versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 10:09:34 2020 From: report at bugs.python.org (shaw_koike) Date: Wed, 25 Mar 2020 14:09:34 +0000 Subject: [New-bugs-announce] [issue40062] islapha method returns True when the word is japanese Message-ID: <1585145374.17.0.909964771668.issue40062@roundup.psfhosted.org> New submission from shaw_koike : When I use the isalpha method with Japanese text, it always returns True. For example:

```
>>> "???".isalpha()
True
```

Is this the correct behavior? Thanks for reading. ---------- components: Unicode messages: 364988 nosy: ezio.melotti, shaw_koike, vstinner priority: normal severity: normal status: open title: islapha method returns True when the word is japanese type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 12:57:23 2020 From: report at bugs.python.org (Amit Moghe) Date: Wed, 25 Mar 2020 16:57:23 +0000 Subject: [New-bugs-announce] [issue40063] Fatal python error:PyEval_RestoreThread NULL tstate Message-ID: <1585155443.15.0.196815788014.issue40063@roundup.psfhosted.org> New submission from Amit Moghe : Hi Team, I am writing an application in Python Flask. I see this when the Flask application is establishing a Sybase DB connection through the normal Python SybaseConnector. In this process I am getting the error below; could you please advise? Error: Fatal python error: PyEval_RestoreThread NULL tstate flask/1.0.2/lib/flask/app.py line 1815 in full_dispatch_request Memory fault (coredump) Could you please suggest how to proceed?
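On the isalpha() report above (issue40062): this is documented behavior, since str.isalpha() is defined over Unicode alphabetic characters rather than ASCII letters only. A small sketch, using an arbitrary Japanese word as a stand-in for the reporter's example, plus one way to restrict a check to ASCII letters:

```python
word = "\u65e5\u672c\u8a9e"   # "nihongo" (Japanese), an arbitrary stand-in

print(word.isalpha())         # True: every character is a Unicode letter
print(word.isascii())         # False (str.isascii() exists since 3.7)
print("abc123".isalpha())     # False: digits are not letters

# Restricting the test to ASCII letters only:
print(word.isascii() and word.isalpha())    # False
print("abc".isascii() and "abc".isalpha())  # True
```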
---------- components: Tests messages: 365001 nosy: amitrutvij priority: normal severity: normal status: open title: Fatal python error:PyEval_RestoreThread NULL tstate type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 15:07:25 2020 From: report at bugs.python.org (Fred Drake) Date: Wed, 25 Mar 2020 19:07:25 +0000 Subject: [New-bugs-announce] [issue40064] py38: document xml.etree.cElementTree will be removed in 3.9 Message-ID: <1585163245.27.0.977723603934.issue40064@roundup.psfhosted.org> Change by Fred Drake : ---------- assignee: docs at python components: Documentation nosy: docs at python, fdrake priority: normal severity: normal status: open title: py38: document xml.etree.cElementTree will be removed in 3.9 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 15:09:39 2020 From: report at bugs.python.org (Fred Drake) Date: Wed, 25 Mar 2020 19:09:39 +0000 Subject: [New-bugs-announce] [issue40065] py39: remove deprecation note for xml.etree.cElementTree Message-ID: <1585163379.04.0.795433622529.issue40065@roundup.psfhosted.org> New submission from Fred Drake : Since xml.etree.cElementTree does not exist in Python 3.9, the statement that it's deprecated should be removed from the documentation. ---------- assignee: docs at python components: Documentation keywords: easy messages: 365016 nosy: docs at python, fdrake priority: normal severity: normal status: open title: py39: remove deprecation note for xml.etree.cElementTree versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 15:48:58 2020 From: report at bugs.python.org (Ethan Furman) Date: Wed, 25 Mar 2020 19:48:58 +0000 Subject: [New-bugs-announce] [issue40066] Enum._convert should change __repr__ and/or __str__ to use module name instead of class name Message-ID: <1585165738.23.0.618463548627.issue40066@roundup.psfhosted.org> New submission from Ethan Furman : Serhiy had the idea of having Enum._convert also modify the __str__ and __repr__ of newly created enumerations to display the module name instead of the enumeration name (https://bugs.python.org/msg325007): --> socket.AF_UNIX ==> --> print(socket.AF_UNIX) AddressFamily.AF_UNIX ==> socket.AF_UNIX Thoughts? ---------- assignee: ethan.furman messages: 365019 nosy: barry, eli.bendersky, ethan.furman priority: normal severity: normal status: open title: Enum._convert should change __repr__ and/or __str__ to use module name instead of class name type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 16:26:27 2020 From: report at bugs.python.org (=?utf-8?q?Furkan_=C3=96nder?=) Date: Wed, 25 Mar 2020 20:26:27 +0000 Subject: [New-bugs-announce] [issue40067] Improve error messages for multiple star expressions in assignment Message-ID: <1585167987.52.0.347925597304.issue40067@roundup.psfhosted.org> New submission from Furkan ?nder : Hello everyone, >>> a,*b,*c,*d = range(4) File "", line 1 SyntaxError: two starred expressions in assignment >>> >>> a,*b,*c,*d,*e = range(5) File "", line 1 SyntaxError: two starred expressions in assignment >>> I think this error message is incomplete. 
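On the two cElementTree documentation items above (issue40064 and issue40065): code that still imports the accelerated alias can use the long-standing fallback idiom, since xml.etree.cElementTree is removed in 3.9 and xml.etree.ElementTree already uses the C accelerator automatically when it is available. A minimal sketch:

```python
try:
    from xml.etree import cElementTree as ElementTree  # deprecated alias, removed in 3.9
except ImportError:
    from xml.etree import ElementTree

root = ElementTree.fromstring("<root><item>1</item></root>")
print(root.find("item").text)  # 1
```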
It states that there are two starred assignments but there are more. It might be better if we change it to something less vague like "SyntaxError: more than one starred expressions in assignment." ---------- components: Interpreter Core messages: 365023 nosy: BTaskaya, furkanonder priority: normal severity: normal status: open title: Improve error messages for multiple star expressions in assignment type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 18:34:11 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 25 Mar 2020 22:34:11 +0000 Subject: [New-bugs-announce] [issue40068] test_threading: ThreadJoinOnShutdown.test_reinit_tls_after_fork() crash with Python 3.8 on AIX Message-ID: <1585175651.22.0.348763219831.issue40068@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/73/builds/208 --- Warning -- files was modified by test_threading Before: [] After: ['core'] 1 test altered the execution environment: test_threading --- The bug can be reproduced with: $ ./configure --with-pydebug CC=gcc CFLAGS=-O0 $ make (... $ ./python -m test test_threading --fail-env-changed -m test.test_threading.ThreadJoinOnShutdown.test_reinit_tls_after_fork -F -j 20 0:00:00 Run tests in parallel using 20 child processes 0:00:01 [ 1] test_threading passed 0:00:01 [ 2] test_threading passed (...) 0:00:03 [ 17] test_threading passed 0:00:03 [ 18/1] test_threading failed (env changed) Warning -- files was modified by test_threading Before: [] After: ['core'] Kill (...) Tests result: ENV CHANGED ---------- components: Tests messages: 365025 nosy: vstinner priority: normal severity: normal status: open title: test_threading: ThreadJoinOnShutdown.test_reinit_tls_after_fork() crash with Python 3.8 on AIX versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 18:51:43 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Wed, 25 Mar 2020 22:51:43 +0000 Subject: [New-bugs-announce] [issue40069] Clear .lst files for AIX Message-ID: <1585176703.22.0.6311195608.issue40069@roundup.psfhosted.org> New submission from Batuhan Taskaya : AIX files stay even if we run make clean on the directory, I think they should be cleared ---------- components: Build messages: 365026 nosy: BTaskaya, David.Edelsohn priority: normal severity: normal status: open title: Clear .lst files for AIX type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 19:47:42 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 25 Mar 2020 23:47:42 +0000 Subject: [New-bugs-announce] [issue40070] GCC crashed on AMD64 RHEL7 LTO + PGO 3.7 (compiler bug) Message-ID: <1585180062.41.0.685376297211.issue40070@roundup.psfhosted.org> New submission from STINNER Victor : GCC crashed on AMD64 RHEL7 LTO + PGO 3.7: https://buildbot.python.org/all/#/builders/190/builds/131 It crashed at the second stage of the PGO compilation (-fprofile-use). --- (...) 
gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. -I./Include -DPy_BUILD_CORE -o Python/asdl.o Python/asdl.c gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. -I./Include -DPy_BUILD_CORE -o Python/ast.o Python/ast.c gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. -I./Include -DPy_BUILD_CORE -o Python/ast_opt.o Python/ast_opt.c gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. -I./Include -DPy_BUILD_CORE -o Python/ast_unparse.o Python/ast_unparse.c gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. -I./Include -DPy_BUILD_CORE -o Python/bltinmodule.o Python/bltinmodule.c In file included from ./Include/pytime.h:6:0, from ./Include/Python.h:87, from Python/bltinmodule.c:3: Python/bltinmodule.c: In function ?builtin_any?: ./Include/object.h:790:31: note: correcting inconsistent value profile: ic profiler overall count (78) does not match BB count (78) (*Py_TYPE(op)->tp_dealloc)((PyObject *)(op))) ^ Python/bltinmodule.c:411:1: note: Inconsistent profile: indirect call target (0) does not exist builtin_any(PyObject *module, PyObject *iterable) ^ In file included from ./Include/pytime.h:6:0, from ./Include/Python.h:87, from Python/bltinmodule.c:3: ./Include/object.h:790:31: note: correcting inconsistent value profile: ic profiler overall count (0) does not match BB count (0) (*Py_TYPE(op)->tp_dealloc)((PyObject *)(op))) ^ ./Include/object.h:790:31: note: correcting inconsistent value profile: ic profiler overall count (0) does not match BB count (0) Python/bltinmodule.c:2868:1: internal compiler error: Segmentation fault } ^ Please submit a full bug report, with preprocessed source if appropriate. See for instructions. gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. 
-I./Include -DPy_BUILD_CORE -o Python/ceval.o Python/ceval.c gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. -I./Include -DPy_BUILD_CORE -o Python/compile.o Python/compile.c gcc -pthread -c -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fprofile-use -fprofile-correction -I. -I./Include -DPy_BUILD_CORE -o Python/codecs.o Python/codecs.c Preprocessed source stored into /tmp/ccsdRI8K.out file, please attach this to your bugreport. make[1]: *** [Python/bltinmodule.o] Error 1 make[1]: *** Waiting for unfinished jobs.... make[1]: Leaving directory `/home/buildbot/buildarea/3.7.cstratak-RHEL7-x86_64.lto-pgo/build' make: *** [profile-opt] Error 2 --- That's RHEL 7.7 with gcc-4.8.5-39.el7.x86_64. The previous build was successful: https://buildbot.python.org/all/#/builders/190/builds/130 CC.version: gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39) datetime.datetime.now: 2020-03-24 16:20:36.900720 According to /var/log/yum.log, the latest package update was done at Mar 18 (05:13:07). So GCC wasn't update between build 130 and 131. ---------- components: Build, Tests messages: 365032 nosy: vstinner priority: normal severity: normal status: open title: GCC crashed on AMD64 RHEL7 LTO + PGO 3.7 (compiler bug) versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 21:25:25 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 26 Mar 2020 01:25:25 +0000 Subject: [New-bugs-announce] [issue40071] test__xxsubinterpreters leaked [1, 1, 1] references: test_ids_global() Message-ID: <1585185925.55.0.269553071046.issue40071@roundup.psfhosted.org> New submission from STINNER Victor : $ ./python -m test -R 3:3 test__xxsubinterpreters -m test_ids_global 0:00:00 load avg: 0.80 Run tests sequentially 0:00:00 load avg: 0.80 [1/1] test__xxsubinterpreters beginning 6 repetitions 123456 ...... 
test__xxsubinterpreters leaked [1, 1, 1] references, sum=3 test__xxsubinterpreters leaked [1, 1, 1] memory blocks, sum=3 test__xxsubinterpreters failed == Tests result: FAILURE == 1 test failed: test__xxsubinterpreters Total duration: 819 ms Tests result: FAILURE It started to leak since: commit 7dd549eb08939e1927fba818116f5202e76f8d73 Author: Paulo Henrique Silva Date: Tue Mar 24 23:19:58 2020 -0300 bpo-1635741: Port _functools module to multiphase initialization (PEP 489) (GH-19151) ---------- components: Interpreter Core messages: 365042 nosy: vstinner priority: normal severity: normal status: open title: test__xxsubinterpreters leaked [1, 1, 1] references: test_ids_global() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Mar 25 21:42:38 2020 From: report at bugs.python.org (honglei jiang) Date: Thu, 26 Mar 2020 01:42:38 +0000 Subject: [New-bugs-announce] [issue40072] UDP Echo Server raise OSError when recved packet Message-ID: <1585186958.06.0.811123062641.issue40072@roundup.psfhosted.org> New submission from honglei jiang : Env: Win7/Python3.8.2 x64/ Output: Starting UDP server Traceback (most recent call last): File "d:\ProgramData\Python38\lib\asyncio\proactor_events.py", line 548, in _loop_reading res = fut.result() File "d:\ProgramData\Python38\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "d:\ProgramData\Python38\lib\asyncio\windows_events.py", line 496, in finish_recv return ov.getresult() OSError: [WinError 87] ????? [WinError 87] ????? ---------- components: asyncio files: asyncio_udp_ipv6_server.py messages: 365045 nosy: asvetlov, honglei.jiang, yselivanov priority: normal severity: normal status: open title: UDP Echo Server raise OSError when recved packet type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49000/asyncio_udp_ipv6_server.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 26 06:41:22 2020 From: report at bugs.python.org (Michael Felt) Date: Thu, 26 Mar 2020 10:41:22 +0000 Subject: [New-bugs-announce] [issue40073] AIX: python3 points to "air" Message-ID: <1585219282.6.0.683445147878.issue40073@roundup.psfhosted.org> New submission from Michael Felt : This is a regression in v3.6.10 and v3.7.6 `make install` creates a symbolic link `python3` that points to the executable python3.X In versions v3.6.10 and v3.7.6 the executable is created as python3.Xm while the symbolic link still points to python3.X Note: v3.8.2 and v3.9 (master) do not appear to be affected) ---------- components: Build messages: 365058 nosy: Michael.Felt priority: normal severity: normal status: open title: AIX: python3 points to "air" type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 26 07:25:16 2020 From: report at bugs.python.org (Diego Palacios) Date: Thu, 26 Mar 2020 11:25:16 +0000 Subject: [New-bugs-announce] [issue40074] pickle module dump and load: add support for string file names Message-ID: <1585221916.12.0.582442173764.issue40074@roundup.psfhosted.org> New submission from Diego Palacios : The pickle functions dump and load are often used in the following lines: ```python import pickle fname = '/path/to/file.pickle' with open(fname, 'rb') as f: object = pickle.load(f) ``` The load function should 
also accept a file name (string) as input and automatically open and load the object. The same should happen for the dump function. This would allow a simple use of the functions: ```python object = pickle.load(fname) ``` This is what many users need when reading and storing and object from/to a file. ---------- components: Library (Lib) messages: 365061 nosy: Diego Palacios priority: normal severity: normal status: open title: pickle module dump and load: add support for string file names type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 26 08:16:09 2020 From: report at bugs.python.org (Thomas Holder) Date: Thu, 26 Mar 2020 12:16:09 +0000 Subject: [New-bugs-announce] [issue40075] _tkinter PythonCmd fails to acquire GIL Message-ID: <1585224969.87.0.649895216825.issue40075@roundup.psfhosted.org> New submission from Thomas Holder : The attached demo application runs a Tkinter GUI and a PyQt GUI in the same thread. PyQt owns the main loop and keeps updating the Tkinter instance by calling `update()`. On Windows, when binding a "" event, resizing the Tk window will lead to a crash: ``` Fatal Python error: PyEval_RestoreThread: NULL tstate Current thread 0x00001f1c (most recent call first): File "qt_tk_demo.py", line 50 in ``` This crash happens in `_tkinter.c` in `PythonCmd` inside the `ENTER_PYTHON` macro. The issue can be fixed by using `PyGILState_Ensure` and `PyGILState_Release` instead of the `ENTER_PYTHON` macro inside the `PythonCmd` function. ---------- components: Tkinter, Windows files: qt_tk_demo.py messages: 365064 nosy: Thomas Holder, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: _tkinter PythonCmd fails to acquire GIL type: crash Added file: https://bugs.python.org/file49002/qt_tk_demo.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 26 10:14:48 2020 From: report at bugs.python.org (Alexander Bolshakov) Date: Thu, 26 Mar 2020 14:14:48 +0000 Subject: [New-bugs-announce] [issue40076] isoformat function drops microseconds part if its value is 000000 Message-ID: <1585232088.22.0.568073689149.issue40076@roundup.psfhosted.org> New submission from Alexander Bolshakov : isoformat function does not conform to the ISO 8601 and drops microseconds part if its value is 000000. The issue can be reproduced using the following code snippet: for i in range(1,10000000): timestamp=datetime.datetime.utcnow().isoformat() if len(timestamp)!=26: print(timestamp) ---------- components: Library (Lib) messages: 365077 nosy: Alexander Bolshakov priority: normal severity: normal status: open title: isoformat function drops microseconds part if its value is 000000 type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 26 12:04:44 2020 From: report at bugs.python.org (Dong-hee Na) Date: Thu, 26 Mar 2020 16:04:44 +0000 Subject: [New-bugs-announce] [issue40077] Convert static types to PyType_FromSpec() Message-ID: <1585238684.65.0.246012172449.issue40077@roundup.psfhosted.org> New submission from Dong-hee Na : Some of modules is not using PyType_FromSpec. We need to convert them. This changes can bring - allow to destroy types at exit! 
- allow subinterpreters to have their own "isolated" types

---------- messages: 365087 nosy: corona10, vstinner priority: normal severity: normal status: open title: Convert static types to PyType_FromSpec()
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 26 13:06:33 2020 From: report at bugs.python.org (Justin Lebar) Date: Thu, 26 Mar 2020 17:06:33 +0000 Subject: [New-bugs-announce] [issue40078] asyncio subprocesses allow pids to be reaped, different behavior than regular subprocesses Message-ID: <1585242393.56.0.193315651182.issue40078@roundup.psfhosted.org>

New submission from Justin Lebar :

From https://bugs.python.org/issue1187312 about regular subprocesses:

> So as long as the application keeps a reference to the
> subprocess object, it can wait for it; auto-reaping only
> starts when the last reference was dropped [in Popen.__del__].

asyncio subprocesses seem to behave differently. When we notice the process has exited in BaseSubprocessTransport._process_exited, we call _try_finish(), which -- if all pipes are closed -- calls _call_connection_lost and sets self._proc to None. At this point, my understanding is that once self._proc is GC'ed, we'll run Popen.__del__ and may reap the pid. I would expect asyncio subprocesses to behave the same way as regular Popen objects wrt pid reaping.

---------- components: asyncio messages: 365095 nosy: Justin.Lebar, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio subprocesses allow pids to be reaped, different behavior than regular subprocesses
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Mar 26 17:56:53 2020 From: report at bugs.python.org (Alexander Riccio) Date: Thu, 26 Mar 2020 21:56:53 +0000 Subject: [New-bugs-announce] [issue40079] NULL pointer deref on error path in _ssl debughelpers.c Message-ID: <1585259813.8.0.527414757133.issue40079@roundup.psfhosted.org>

New submission from Alexander Riccio :

At line 138 in debughelpers.c, ssl_obj, which was set to NULL on line 122, is dereferenced. I think the original intent was to actually bubble the error up through the ssl object. Full function:

static void _PySSL_keylog_callback(const SSL *ssl, const char *line) { PyGILState_STATE threadstate; PySSLSocket *ssl_obj = NULL; /* ssl._SSLSocket, borrowed ref */ int res, e; static PyThread_type_lock *lock = NULL; threadstate = PyGILState_Ensure(); /* Allocate a static lock to synchronize writes to keylog file. * The lock is neither released on exit nor on fork(). The lock is * also shared between all SSLContexts although contexts may write to * their own files. IMHO that's good enough for a non-performance * critical debug helper.
*/ if (lock == NULL) { lock = PyThread_allocate_lock(); if (lock == NULL) { PyErr_SetString(PyExc_MemoryError, "Unable to allocate lock"); PyErr_Fetch(&ssl_obj->exc_type, &ssl_obj->exc_value, &ssl_obj->exc_tb); return; } } ssl_obj = (PySSLSocket *)SSL_get_app_data(ssl); assert(PySSLSocket_Check(ssl_obj)); if (ssl_obj->ctx->keylog_bio == NULL) { return; } PySSL_BEGIN_ALLOW_THREADS PyThread_acquire_lock(lock, 1); res = BIO_printf(ssl_obj->ctx->keylog_bio, "%s\n", line); e = errno; (void)BIO_flush(ssl_obj->ctx->keylog_bio); PyThread_release_lock(lock); PySSL_END_ALLOW_THREADS if (res == -1) { errno = e; PyErr_SetFromErrnoWithFilenameObject(PyExc_OSError, ssl_obj->ctx->keylog_filename); PyErr_Fetch(&ssl_obj->exc_type, &ssl_obj->exc_value, &ssl_obj->exc_tb); } PyGILState_Release(threadstate); } ---------- assignee: christian.heimes components: SSL messages: 365114 nosy: Alexander Riccio, christian.heimes priority: normal severity: normal status: open title: NULL pointer deref on error path in _ssl debughelpers.c versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 26 19:03:00 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Thu, 26 Mar 2020 23:03:00 +0000 Subject: [New-bugs-announce] [issue40080] Stripping annotations out as a new optimization mode Message-ID: <1585263780.04.0.43303586086.issue40080@roundup.psfhosted.org> New submission from Batuhan Taskaya : Just like docstrings, annotations do nothing at runtime for the majority of the time. We can just strip out them and gain as much as the docstring optimization in bytecode size on a fully annotated repo. For comparing these two optimizations, I calculated the bytecode weight (marshal dumped size of) of each optimization (with a similar implementation to the compiler but not exact) over a project which both rich in docstrings and annotations. Project: https://github.com/Instagram/LibCST $ python simple_tester.py LibCST Total bytes: 1820086 Total bytes after 629 docstrings (total length of 180333) removed: 1643315 Total bytes after 8859 type annotations removed: 1641594 (I've submitted the script I used to calculate these results.) ---------- components: Interpreter Core files: simple_tester.py messages: 365118 nosy: BTaskaya priority: normal severity: normal status: open title: Stripping annotations out as a new optimization mode type: enhancement versions: Python 3.9 Added file: https://bugs.python.org/file49004/simple_tester.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Mar 26 22:26:02 2020 From: report at bugs.python.org (Sivasundar Nagarajan) Date: Fri, 27 Mar 2020 02:26:02 +0000 Subject: [New-bugs-announce] [issue40081] List sorting Message-ID: <1585275962.0.0.0714871727825.issue40081@roundup.psfhosted.org> New submission from Sivasundar Nagarajan : Good day. Hope you are safe and wish the same with the present situation. Need you help please in understanding the below function of Python. Steps - 1. tried assigning the below values in the list and named it as a 2.if I print, it prints in the same sequence. 3.Tried assigning it to b by the command a.sort() 4.Tried printing b and it gave null. 5.But printed a now, and the values were sorted. Please help me understand if we have any logic with in a list to sort the values after an iteration. Please apologize if I had missed some basics and uncovered it. 
>>> a = [1,4,3,2,4,5,3,2] >>> a [1, 4, 3, 2, 4, 5, 3, 2] >>> print (a) [1, 4, 3, 2, 4, 5, 3, 2] >>> b = a.sort() >>> b >>> print (a) [1, 2, 2, 3, 3, 4, 4, 5] >>> ---------- assignee: terry.reedy components: IDLE messages: 365129 nosy: Sivasundar Nagarajan, terry.reedy priority: normal severity: normal status: open title: List sorting type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 02:07:12 2020 From: report at bugs.python.org (Alexander Riccio) Date: Fri, 27 Mar 2020 06:07:12 +0000 Subject: [New-bugs-announce] [issue40082] Assertion failure in trip_signal Message-ID: <1585289232.05.0.154553792707.issue40082@roundup.psfhosted.org> New submission from Alexander Riccio : While trying to make sense of some static analysis warnings for the Windows console IO module, I Ctrl+C'd in the middle of an intentionally absurd __repr__ output, and on proceeding in the debugger (which treated it as an exception), I immediately hit the assertion right here: /* Get the Python thread state using PyGILState API, since _PyThreadState_GET() returns NULL if the GIL is released. For example, signal.raise_signal() releases the GIL. */ PyThreadState *tstate = PyGILState_GetThisThreadState(); assert(tstate != NULL); ...With the stacktrace: ucrtbased.dll!issue_debug_notification(const wchar_t * const message) Line 28 C++ ucrtbased.dll!__acrt_report_runtime_error(const wchar_t * message) Line 154 C++ ucrtbased.dll!abort() Line 51 C++ ucrtbased.dll!common_assert_to_stderr_direct(const wchar_t * const expression, const wchar_t * const file_name, const unsigned int line_number) Line 161 C++ ucrtbased.dll!common_assert_to_stderr(const wchar_t * const expression, const wchar_t * const file_name, const unsigned int line_number) Line 175 C++ ucrtbased.dll!common_assert(const wchar_t * const expression, const wchar_t * const file_name, const unsigned int line_number, void * const return_address) Line 420 C++ ucrtbased.dll!_wassert(const wchar_t * expression, const wchar_t * file_name, unsigned int line_number) Line 443 C++ > python39_d.dll!trip_signal(int sig_num) Line 266 C python39_d.dll!signal_handler(int sig_num) Line 342 C ucrtbased.dll!ctrlevent_capture(const unsigned long ctrl_type) Line 206 C++ KernelBase.dll!_CtrlRoutine at 4 () Unknown kernel32.dll!@BaseThreadInitThunk at 12 () Unknown ntdll.dll!__RtlUserThreadStart() Unknown ntdll.dll!__RtlUserThreadStart at 8 () Unknown ...I'm not entirely sure why this happened, but I can tell a few things. 
_PyRuntime.gilstate.autoInterpreterState is NOT null, in fact the gilstate object is as displayed in my watch window: - _PyRuntime.gilstate {check_enabled=1 tstate_current={_value=0 } getframe=0x79e3a570 {python39_d.dll!threadstate_getframe(_ts *)} ...} _gilstate_runtime_state check_enabled 1 int + tstate_current {_value=0 } _Py_atomic_address getframe 0x79e3a570 {python39_d.dll!threadstate_getframe(_ts *)} _frame *(*)(_ts *) - autoInterpreterState 0x00e5eff8 {next=0x00000000 tstate_head=0x00e601c0 {prev=0x00000000 next=0x00000000 ...} ...} _is * + next 0x00000000 _is * + tstate_head 0x00e601c0 {prev=0x00000000 next=0x00000000 interp=0x00e5eff8 {next=0x00000000 ...} ...} _ts * + runtime 0x7a0e2118 {python39_d.dll!pyruntimestate _PyRuntime} {preinitializing=0 preinitialized=1 core_initialized=...} pyruntimestate * id 0 __int64 id_refcount -1 __int64 requires_idref 0 int id_mutex 0x00000000 void * finalizing 0 int + ceval {tracing_possible=0 eval_breaker={_value=0 } pending={finishing=0 lock=0x00e59390 calls_to_do={_value=...} ...} } _ceval_state + gc {trash_delete_later=0x00000000 trash_delete_nesting=0 enabled=1 ...} _gc_runtime_state + modules 0x00bf1228 {ob_refcnt=3 ob_type=0x7a0b1178 {python39_d.dll!_typeobject PyDict_Type} {ob_base={ob_base=...} ...} } _object * + modules_by_index 0x00750058 {ob_refcnt=1 ob_type=0x7a0b8210 {python39_d.dll!_typeobject PyList_Type} {ob_base={ob_base=...} ...} } _object * + sysdict 0x00bf1298 {ob_refcnt=2 ob_type=0x7a0b1178 {python39_d.dll!_typeobject PyDict_Type} {ob_base={ob_base=...} ...} } _object * + builtins 0x00bf1f48 {ob_refcnt=88 ob_type=0x7a0b1178 {python39_d.dll!_typeobject PyDict_Type} {ob_base={ob_base=...} ...} } _object * + importlib 0x00c0df60 {ob_refcnt=28 ob_type=0x7a0b92d0 {python39_d.dll!_typeobject PyModule_Type} {ob_base={ob_base=...} ...} } _object * num_threads 0 long pythread_stacksize 0 unsigned int + codec_search_path 0x00c4a260 {ob_refcnt=1 ob_type=0x7a0b8210 {python39_d.dll!_typeobject PyList_Type} {ob_base={ob_base=...} ...} } _object * + codec_search_cache 0x00c1f0d8 {ob_refcnt=1 ob_type=0x7a0b1178 {python39_d.dll!_typeobject PyDict_Type} {ob_base={ob_base=...} ...} } _object * + codec_error_registry 0x00c14f10 {ob_refcnt=1 ob_type=0x7a0b1178 {python39_d.dll!_typeobject PyDict_Type} {ob_base={ob_base=...} ...} } _object * codecs_initialized 1 int + fs_codec {encoding=0x00e5aa40 "utf-8" utf8=1 errors=0x00e89ea8 "surrogatepass" ...} + config {_config_init=2 isolated=0 use_environment=1 ...} PyConfig + dict 0x00000000 _object * + builtins_copy 0x00c00a08 {ob_refcnt=1 ob_type=0x7a0b1178 {python39_d.dll!_typeobject PyDict_Type} {ob_base={ob_base=...} ...} } _object * + import_func 0x00bfd900 {ob_refcnt=4 ob_type=0x7a0b90d0 {python39_d.dll!_typeobject PyCFunction_Type} {ob_base={ob_base=...} ...} } _object * eval_frame 0x79a52577 {python39_d.dll!__PyEval_EvalFrameDefault} _object *(*)(_ts *, _frame *, int) co_extra_user_count 0 int + co_extra_freefuncs 0x00e5f308 {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, ...} void(*)(void *)[255] pyexitfunc 0x79af7320 {python39_d.dll!atexit_callfuncs(_object *)} void(*)(_object *) + pyexitmodule 0x00f5d690 {ob_refcnt=16 ob_type=0x7a0b92d0 {python39_d.dll!_typeobject PyModule_Type} {ob_base={ob_base=...} ...} } _object * tstate_next_unique_id 1 unsigned __int64 + warnings {filters=0x0078edc8 {ob_refcnt=3 ob_type=0x7a0b8210 {python39_d.dll!_typeobject PyList_Type} {ob_base=...} } ...} _warnings_runtime_state + audit_hooks 0x00000000 _object * + 
parser {listnode={level=0 atbol=0 } } + small_ints 0x00e5f734 {0x00755868 {ob_base={ob_base={ob_refcnt=2 ob_type=0x7a0b8738 {python39_d.dll!_typeobject PyLong_Type} {...} } ...} ...}, ...} _longobject *[262] + autoTSSkey {_is_initialized=1 _key=5 } _Py_tss_t This looks to me like there's some kind of race condition in the thread local storage. Later, when TlsGetValue is called in PyThread_tss_get with the value of 5, the error code is lost, so I don't even know the exact reported error, and apparently 0 (nullptr) is a valid return anyways *shrug*. If I'm correctly decoding the address of the TLS slot in the TEB (I think *(unsigned int*)(void*)((@fs+0xe10)+ (20)) is the correct address for the 5th item? 4 bytes, key=5?), then there is actually a null tstate there. Not sure why. This is in a relatively recent (<1wk old) branch of the code, with some non-behavioral tweaks to add annotations in while I'm bug hunting, so it shouldn't matter. I'm not sure if I can reproduce this, but it happened, so there's a bug somewhere. What else? The signal number that tripped this was 2, sig interrupt, which makes sense. There are other threads active, so maybe that's why? The TLS was never initialized for that thread? Here's the dump from the visual studio threads window, 22316 is the active thread. Not Flagged 10360 0 Main Thread Main Thread python39_d.dll!_PyOS_WindowsConsoleReadline ntdll.dll!_NtDeviceIoControlFile at 40 () KernelBase.dll!ConsoleCallServerGeneric() KernelBase.dll!_ReadConsoleInternal at 24 () KernelBase.dll!_ReadConsoleW at 20 () python39_d.dll!_PyOS_WindowsConsoleReadline(void * hStdIn) Line 120 python39_d.dll!PyOS_StdioReadline(_iobuf * sys_stdin, _iobuf * sys_stdout, const char * prompt) Line 253 python39_d.dll!PyOS_Readline(_iobuf * sys_stdin, _iobuf * sys_stdout, const char * prompt) Line 358 python39_d.dll!tok_nextc(tok_state * tok) Line 856 python39_d.dll!tok_get(tok_state * tok, const char * * p_start, const char * * p_end) Line 1166 python39_d.dll!PyTokenizer_Get(tok_state * tok, const char * * p_start, const char * * p_end) Line 1813 python39_d.dll!parsetok(tok_state * tok, grammar * g, int start, perrdetail * err_ret, int * flags) Line 253 python39_d.dll!PyParser_ParseFileObject(_iobuf * fp, _object * filename, const char * enc, grammar * g, int start, const char * ps1, const char * ps2, perrdetail * err_ret, int * flags) Line 188 python39_d.dll!PyParser_ASTFromFileObject(_iobuf * fp, _object * filename, const char * enc, int start, const char * ps1, const char * ps2, PyCompilerFlags * flags, int * errcode, _arena * arena) Line 1388 python39_d.dll!PyRun_InteractiveOneObjectEx(_iobuf * fp, _object * filename, PyCompilerFlags * flags) Line 240 python39_d.dll!PyRun_InteractiveLoopFlags(_iobuf * fp, const char * filename_str, PyCompilerFlags * flags) Line 122 python39_d.dll!PyRun_AnyFileExFlags(_iobuf * fp, const char * filename, int closeit, PyCompilerFlags * flags) Line 81 python39_d.dll!pymain_run_stdin(PyConfig * config, PyCompilerFlags * cf) Line 467 python39_d.dll!pymain_run_python(int * exitcode) Line 556 python39_d.dll!Py_RunMain() Line 632 python39_d.dll!pymain_main(_PyArgv * args) Line 663 python39_d.dll!Py_Main(int argc, wchar_t * * argv) Line 674 python_d.exe!wmain(int argc, wchar_t * * argv) Line 10 python_d.exe!invoke_main() Line 90 python_d.exe!__scrt_common_main_seh() Line 288 python_d.exe!__scrt_common_main() Line 331 python_d.exe!wmainCRTStartup() Line 17 kernel32.dll!@BaseThreadInitThunk at 12 () ntdll.dll!__RtlUserThreadStart() ntdll.dll!__RtlUserThreadStart 
at 8 () Not Flagged 14944 0 Worker Thread ntdll.dll!_TppWorkerThread at 4 () ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 () ntdll.dll!_TppWorkerThread at 4 () kernel32.dll!@BaseThreadInitThunk at 12 () ntdll.dll!__RtlUserThreadStart() ntdll.dll!__RtlUserThreadStart at 8 () Not Flagged > 22316 0 Worker Thread KernelBase.dll!_CtrlRoutine at 4 () ucrtbased.dll!issue_debug_notification ucrtbased.dll!issue_debug_notification(const wchar_t * const message) Line 28 ucrtbased.dll!__acrt_report_runtime_error(const wchar_t * message) Line 154 ucrtbased.dll!abort() Line 51 ucrtbased.dll!common_assert_to_stderr_direct(const wchar_t * const expression, const wchar_t * const file_name, const unsigned int line_number) Line 161 ucrtbased.dll!common_assert_to_stderr(const wchar_t * const expression, const wchar_t * const file_name, const unsigned int line_number) Line 175 ucrtbased.dll!common_assert(const wchar_t * const expression, const wchar_t * const file_name, const unsigned int line_number, void * const return_address) Line 420 ucrtbased.dll!_wassert(const wchar_t * expression, const wchar_t * file_name, unsigned int line_number) Line 443 python39_d.dll!trip_signal(int sig_num) Line 266 python39_d.dll!signal_handler(int sig_num) Line 342 ucrtbased.dll!ctrlevent_capture(const unsigned long ctrl_type) Line 206 KernelBase.dll!_CtrlRoutine at 4 () kernel32.dll!@BaseThreadInitThunk at 12 () ntdll.dll!__RtlUserThreadStart() ntdll.dll!__RtlUserThreadStart at 8 () Not Flagged 10180 0 Worker Thread ntdll.dll!_TppWorkerThread at 4 () ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 () ntdll.dll!_TppWorkerThread at 4 () kernel32.dll!@BaseThreadInitThunk at 12 () ntdll.dll!__RtlUserThreadStart() ntdll.dll!__RtlUserThreadStart at 8 () Not Flagged 28940 0 Worker Thread ntdll.dll!_TppWorkerThread at 4 () ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 () ntdll.dll!_TppWorkerThread at 4 () kernel32.dll!@BaseThreadInitThunk at 12 () ntdll.dll!__RtlUserThreadStart() ntdll.dll!__RtlUserThreadStart at 8 () Not Flagged 9396 0 Worker Thread ntdll.dll!_TppWorkerThread at 4 () ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 () ntdll.dll!_TppWorkerThread at 4 () kernel32.dll!@BaseThreadInitThunk at 12 () ntdll.dll!__RtlUserThreadStart() ntdll.dll!__RtlUserThreadStart at 8 () Not Flagged 28960 0 Worker Thread ntdll.dll!_TppWorkerThread at 4 () ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 ntdll.dll!_NtWaitForWorkViaWorkerFactory at 20 () ntdll.dll!_TppWorkerThread at 4 () kernel32.dll!@BaseThreadInitThunk at 12 () ntdll.dll!__RtlUserThreadStart() ntdll.dll!__RtlUserThreadStart at 8 () Anyways, I hope that's a useful report for an obscure bug. 
---------- components: Interpreter Core messages: 365134 nosy: Alexander Riccio priority: normal severity: normal status: open title: Assertion failure in trip_signal type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 03:04:44 2020 From: report at bugs.python.org (Rajesh R Naik) Date: Fri, 27 Mar 2020 07:04:44 +0000 Subject: [New-bugs-announce] [issue40083] No run option available in python idle in version 3.8.2 Message-ID: <1585292684.98.0.176532437082.issue40083@roundup.psfhosted.org> New submission from Rajesh R Naik : i using pyhton 3.8.2 latest version in that there no run option available in python idle. so please help also iam using windows 10 home ---------- assignee: terry.reedy components: IDLE messages: 365137 nosy: Raj_110, terry.reedy, twouters priority: normal severity: normal status: open title: No run option available in python idle in version 3.8.2 type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 03:16:45 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Fri, 27 Mar 2020 07:16:45 +0000 Subject: [New-bugs-announce] [issue40084] HTTPStatus has incomplete dir() listing Message-ID: <1585293405.13.0.478935860244.issue40084@roundup.psfhosted.org> New submission from Raymond Hettinger : The dir() listing omits the attributes "description" and "phrase": >>> import http >>> from pprint import pp >>> r = http.HTTPStatus(404) >>> pp(vars(r)) {'_value_': 404, 'phrase': 'Not Found', 'description': 'Nothing matches the given URI', '_name_': 'NOT_FOUND', '__objclass__': } >>> r.value 404 >>> r.name 'NOT_FOUND' >>> r.description 'Nothing matches the given URI' >>> r.phrase 'Not Found' >>> dir(r) ['__class__', '__doc__', '__module__', 'as_integer_ratio', 'bit_length', 'conjugate', 'denominator', 'from_bytes', 'imag', 'name', 'numerator', 'real', 'to_bytes', 'value'] One fix would be to teach IntEnum.__dir__() to include entries in the instance dict. Another fix would be to provide a way for a IntEnum subclass to add to the known members list. ---------- components: Library (Lib) messages: 365138 nosy: ethan.furman, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: HTTPStatus has incomplete dir() listing type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 03:45:42 2020 From: report at bugs.python.org (tzickel) Date: Fri, 27 Mar 2020 07:45:42 +0000 Subject: [New-bugs-announce] [issue40085] Argument parsing option c should accept int between -128 to 255 ? Message-ID: <1585295142.15.0.400379676659.issue40085@roundup.psfhosted.org> New submission from tzickel : I converted some code from python to c-api and was surprised that a code stopped working. Basically the "c" parsing option allows for 1 char bytes or bytearray inputs and converts them to a C char. But just as indexing a bytes array returns an int, so should this option support it. i.e. b't'[0] = 116 Not sure if it should limit between 0 to 255 or -128 to 127. ---------- components: C API messages: 365139 nosy: tzickel priority: normal severity: normal status: open title: Argument parsing option c should accept int between -128 to 255 ? 
type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 07:36:55 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 27 Mar 2020 11:36:55 +0000 Subject: [New-bugs-announce] [issue40086] test_etree is skipped in test_typing due to cElementTree removal Message-ID: <1585309015.87.0.51128301072.issue40086@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Currently, test_etree has a Python 2 shim importing cElementTree and skipping the test on ImportError. Since cElementTree was deprecated and removed in Python 3 with 36543. So this test is now skipped. The fix would be to remove the shim and import Element from xml.etree.ElementTree. This is a good beginner issue. Test log as below : ./python -m test test_typing -m test_etree -vvv == CPython 3.9.0a5+ (heads/master:33f15a16d4, Mar 27 2020, 11:15:48) [GCC 7.5.0] == Linux-4.15.0-66-generic-x86_64-with-glibc2.27 little-endian == cwd: /root/cpython/build/test_python_24162 == CPU count: 1 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 0.07 Run tests sequentially 0:00:00 load avg: 0.07 [1/1] test_typing test_etree (test.test_typing.UnionTests) ... skipped 'cElementTree not found' ---------------------------------------------------------------------- Ran 1 test in 0.001s OK (skipped=1) == Tests result: SUCCESS == 1 test OK. Total duration: 150 ms Tests result: SUCCESS ---------- components: Tests, XML messages: 365143 nosy: serhiy.storchaka, xtreak priority: normal severity: normal stage: needs patch status: open title: test_etree is skipped in test_typing due to cElementTree removal type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 08:17:11 2020 From: report at bugs.python.org (Deepalee Khare) Date: Fri, 27 Mar 2020 12:17:11 +0000 Subject: [New-bugs-announce] [issue40087] How to Uninstall Python3.7.3 using cmd? Message-ID: <1585311431.71.0.879589746848.issue40087@roundup.psfhosted.org> New submission from Deepalee Khare : How to Uninstall Python3.7.3 using cmd ? i tried using cmd: Msiexec /uninstall C:\Python37\python.exe But it giver me an error: "This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package." how do i uninstall it ? ---------- components: Windows messages: 365146 nosy: deepaleedotkhare at gmail.com, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: How to Uninstall Python3.7.3 using cmd? type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 08:35:37 2020 From: report at bugs.python.org (Yury Norov) Date: Fri, 27 Mar 2020 12:35:37 +0000 Subject: [New-bugs-announce] [issue40088] list.reverse(): slow sublist reverse Message-ID: <1585312537.41.0.422024804139.issue40088@roundup.psfhosted.org> New submission from Yury Norov : Hi all, In Python, I need a tool to reverse part of a list (tail) quickly. I expected that nums[start:end].reverse() would do it inplace with the performance similar to nums.reverse(). However, it doesn't work at all. The fastest way to reverse a part of the list that I found is like this: nums[start:end] = nums[end:start-1:-1] But it is 30 times slower than pure reverse(). 
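For reference, a partial reverse can also be written in place with an explicit swap loop instead of slicing; the sketch below is only illustrative and, being pure Python, it is typically still much slower than the C-level list.reverse():

```
def reverse_slice(lst, start, stop):
    # Reverse lst[start:stop] in place without building a temporary copy.
    i, j = start, stop - 1
    while i < j:
        lst[i], lst[j] = lst[j], lst[i]
        i += 1
        j -= 1
```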
The patch below adds region support for reverse(). It works as fast as I expect. The test results and script are like this:

exec(open('test.py').read())
nums.reverse()                          0.006764888763427734
nums = nums[::-1]                       0.10066413879394531
nums.reverse(-L/2)                      0.003548145294189453
nums.reverse(L/2, L)                    0.003538370132446289
nums = nums[:L/2] + nums[L:L/2-1:-1]    0.19934582710266113
nums[L/2:L] = nums[L:L/2-1:-1]          0.11419057846069336

import time
nums = list(range(10000000))
L = len(nums)
LL = int(L/2)
t = time.time()
nums.reverse()
print('nums.reverse()\t\t\t\t', str(time.time() - t))
t = time.time()
nums = nums[::-1]
print('nums = nums[::-1]\t\t\t', str(time.time() - t))
t = time.time()
nums.reverse(-LL)
print('nums.reverse(-L/2)\t\t\t', time.time() - t)
t = time.time()
nums.reverse(LL, L)
print('nums.reverse(L/2, L)\t\t\t', time.time() - t)
t = time.time()
nums = nums[:LL] + nums[L : LL - 1 : -1]
print('nums = nums[:L/2] + nums[L:L/2-1:-1]\t', time.time() - t)
t = time.time()
nums[LL:L] = nums[L:LL-1:-1]
print('nums[L/2:L] = nums[L:L/2-1:-1]\t\t', time.time() - t)

If there is a better way to reverse lists, can someone point me in the right direction? If not, I'll be happy to fix all existing issues and upstream this approach. PR: https://github.com/python/cpython/pull/19181 Thanks, Yury

---------- components: C API, Interpreter Core messages: 365147 nosy: Yury priority: normal pull_requests: 18548 severity: normal status: open title: list.reverse(): slow sublist reverse versions: Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 27 11:51:37 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 27 Mar 2020 15:51:37 +0000 Subject: [New-bugs-announce] [issue40089] Add _at_fork_reinit() method to locks Message-ID: <1585324297.97.0.65509955463.issue40089@roundup.psfhosted.org>

New submission from STINNER Victor :

Using a lock after fork() is unsafe and can crash. Example of a crash in logging after a fork on AIX: https://bugs.python.org/issue40068#msg365028

This problem is explained at length in bpo-6721: "Locks in the standard library should be sanitized on fork".

The threading module registers an "at fork" callback: Thread._reset_internal_locks() is called to reset self._started (threading.Event) and self._tstate_lock. The current implementation creates new Python lock objects and forgets about the old ones.

I propose to add a new _at_fork_reinit() method to Python lock objects which reinitializes the native lock internally without having to create a new Python object. Currently, my implementation basically creates a new native lock object and forgets about the old one (without calling PyThread_free_lock()). Tomorrow, we can imagine a more efficient implementation using platform-specific functions to handle this case without having to forget about the old lock.
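A rough sketch of how Thread._reset_internal_locks() could use such a method instead of allocating fresh lock objects; the method name is the one proposed above, the rest is illustrative:

```
def _reset_internal_locks(self, is_alive):
    # Sketch for threading.Thread: reinitialize the existing locks in the
    # child process rather than creating new Python lock objects.
    self._started._at_fork_reinit()
    if is_alive:
        self._tstate_lock._at_fork_reinit()
    else:
        self._tstate_lock = None
```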
---------- components: Library (Lib) messages: 365157 nosy: vstinner priority: normal severity: normal status: open title: Add _at_fork_reinit() method to locks versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 12:23:40 2020 From: report at bugs.python.org (Ama Aje My Fren) Date: Fri, 27 Mar 2020 16:23:40 +0000 Subject: [New-bugs-announce] [issue40090] The referenced RFC for the json module should be corrected to rfc8259 Message-ID: <1585326220.2.0.0601154076392.issue40090@roundup.psfhosted.org> New submission from Ama Aje My Fren : Currently the Documentation of the json library module refers to :rfc:`7159` . That RFC has however been obsoleted by :rfc:`8259`. The Documentation for :mod:`json` should be changed to indicate this. ---------- assignee: docs at python components: Documentation messages: 365162 nosy: amaajemyfren, docs at python priority: normal severity: normal status: open title: The referenced RFC for the json module should be corrected to rfc8259 type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 13:02:43 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 27 Mar 2020 17:02:43 +0000 Subject: [New-bugs-announce] [issue40091] Crash in logging._after_at_fork_child_reinit_locks() Message-ID: <1585328563.49.0.978667613959.issue40091@roundup.psfhosted.org> New submission from STINNER Victor : test_threading.ThreadJoinOnShutdown.test_reinit_tls_after_fork() does crash randomly on AIX in logging._after_at_fork_child_reinit_locks(): https://bugs.python.org/issue40068#msg365028 This function ends by releasing a lock: _releaseLock() # Acquired by os.register_at_fork(before=. But it is not safe to use a lock after fork (see bpo-6721 and bpo-40089). The purpose of _after_at_fork_child_reinit_locks() is already to fix locks used by logging handles: see bpo-36533 and commit 64aa6d2000665efb1a2eccae176df9520bf5f5e6. But the global logging._lock is not reset after fork. Attached PR fix the issue. ---------- components: Library (Lib) messages: 365170 nosy: vstinner priority: normal severity: normal status: open title: Crash in logging._after_at_fork_child_reinit_locks() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Mar 27 13:16:55 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 27 Mar 2020 17:16:55 +0000 Subject: [New-bugs-announce] [issue40092] Crash in _PyThreadState_DeleteExcept() at fork in the process child Message-ID: <1585329415.65.0.534969367772.issue40092@roundup.psfhosted.org> New submission from STINNER Victor : At fork, Python calls PyOS_AfterFork_Child() in the child process which indirectly calls _PyThreadState_DeleteExcept() whichs calls release_sentinel() of the thread which releases the thread state lock (threading.Thread._tstate_lock). Problem: using a lock after fork is unsafe and can crash. That's exactly what happens randomly on AIX when stressing ThreadJoinOnShutdown.test_reinit_tls_after_fork() of test_threading: https://bugs.python.org/issue40068#msg365031 There are different options to solve this issue: * Reset _tstate_lock before using it... not sure that it's worth it, since we are going to delete the threading.Thread object with its _tstate_lock object anymore. 
After calling fork, the child process has exactly 1 thread: all other threads have been removed.

* Modify release_sentinel() to not use the lock: avoid the PyThread_release_lock() call.

---------- components: Interpreter Core messages: 365173 nosy: vstinner priority: normal severity: normal status: open title: Crash in _PyThreadState_DeleteExcept() at fork in the process child versions: Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 27 19:21:42 2020 From: report at bugs.python.org (fireattack) Date: Fri, 27 Mar 2020 23:21:42 +0000 Subject: [New-bugs-announce] [issue40093] ThreadPoolExecutor with wait=True shuts down too early Message-ID: <1585351302.94.0.0578585326748.issue40093@roundup.psfhosted.org>

New submission from fireattack :

Example

```
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def wait_on_future():
    sleep(1)
    print(f.done())  # f is not done obviously
    f2 = executor.submit(pow, 5, 2)
    print(f2.result())
    sleep(1)

executor = ThreadPoolExecutor(max_workers=100)
f = executor.submit(wait_on_future)
executor.shutdown(wait=True)
```

When debugging, it shows "cannot schedule new futures after shutdown":

Exception has occurred: RuntimeError
cannot schedule new futures after shutdown
  File "test2.py", line 7, in wait_on_future
    f2 = executor.submit(pow, 5, 2)

According to https://docs.python.org/3/library/concurrent.futures.html, `shutdown(wait=True)` "[s]ignal the executor that it should free any resources that it is using when the currently pending futures are done executing". But when f2 is being submitted, f is not done yet, so the executor shouldn't be shut down.

---------- components: Library (Lib) messages: 365194 nosy: fireattack priority: normal severity: normal status: open title: ThreadPoolExecutor with wait=True shuts down too early versions: Python 3.8
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Mar 27 19:29:47 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 27 Mar 2020 23:29:47 +0000 Subject: [New-bugs-announce] [issue40094] Add os._wait_status_to_returncode() helper function Message-ID: <1585351787.12.0.917977186474.issue40094@roundup.psfhosted.org>

New submission from STINNER Victor :

os.wait() and os.waitpid() return a "status" number which is not easy to interpret. It is made of two pieces of information: (how the process completed, value). The usual way to handle it is to use code which looks like:

if os.WIFSIGNALED(status):
    self.returncode = -os.WTERMSIG(status)
elif os.WIFEXITED(status):
    self.returncode = os.WEXITSTATUS(status)
elif os.WIFSTOPPED(status):
    self.returncode = -os.WSTOPSIG(status)
else:
    raise Exception("... put your favorite error message here ...")

It's not convenient to have to duplicate this code each time we have to handle a wait status. Moreover, WIFSTOPPED() is commonly treated as "the process was killed by a signal", whereas the process is still alive but was only stopped. WIFSTOPPED() should only happen when the process is traced (by ptrace), or if waitpid() was called with the WUNTRACED option. The common case is not to trace a process or to use WUNTRACED. Moreover, if WIFSTOPPED() is true, the process is still alive and can continue its execution. It's bad to consider it as completed. The subprocess module has such a bug: Popen._handle_exitstatus() returns -os.WSTOPSIG(sts) if os.WIFSTOPPED(sts) is true.
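For illustration, a standalone version of the conversion pattern above, which treats a stopped process as an error rather than as a completed one (the function name and error message are illustrative only, not the proposed implementation):

```
import os

def wait_status_to_returncode(status):
    # Convert a wait()/waitpid() status into a returncode.
    if os.WIFEXITED(status):
        return os.WEXITSTATUS(status)
    if os.WIFSIGNALED(status):
        return -os.WTERMSIG(status)
    # WIFSTOPPED(): the process is still alive, not completed.
    raise ValueError("invalid wait status: %r" % (status,))
```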
On the other side, the pure Python implementation os._spawnvef() calls again waitpid() if WIFSTOPPED() is true. That sounds like a better behavior. while 1: wpid, sts = waitpid(pid, 0) if WIFSTOPPED(sts): continue elif WIFSIGNALED(sts): return -WTERMSIG(sts) elif WIFEXITED(sts): return WEXITSTATUS(sts) else: raise OSError("Not stopped, signaled or exited???") But I'm not sure how WIFSTOPPED() can be true, since this function creates a child process using os.fork() and it doesn't use os.WUNTRACED flag. I propose to add a private os._wait_status_to_returncode(status) helper function: --- os._wait_status_to_returncode(status) -> int Convert a wait() or waitpid() status to a returncode. If WIFEXITED(status) is true, return WEXITSTATUS(status). If WIFSIGNALED(status) is true, return -WTERMSIG(status). Otherwise, raise a ValueError. If the process is being traced or if waitpid() was called with WUNTRACED option, the caller must first check if WIFSTOPPED(status) is true. This function must not be called if WIFSTOPPED(status) is true. --- I'm not sure if it's a good idea to add the helper as a private function. Someone may discover it and starts to use it. If we decide to make it public tomorrow, removing os._wait_status_to_returncode() would break code. Maybe it's better to directly a public function? But I'm not sure if it's useful, nor if the function name is good, nor if good to helper an function function directly in the os module. Maybe such helper should be added to shutil instead which is more the "high-level" flavor of the os module? I chose to add it to the os module for different reasons: * Existing code using os.WEXITSTATUS() and friends usually only uses the os module. * It's convenient to be able to use os._wait_status_to_returncode(status) in the subprocess module without adding a dependency (import) on the shutil module. * os.wait() and os.waitpid() live in the os module: it's convenient to have an helper functon in the same module. What do you think? * Is it worth it to add os._wait_status_to_returncode() helper function? * If you like the idea, propose a better name! * Should it remain private first? ---------- components: Library (Lib) messages: 365195 nosy: vstinner priority: normal severity: normal status: open title: Add os._wait_status_to_returncode() helper function versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 28 04:27:27 2020 From: report at bugs.python.org (Martynas Brijunas) Date: Sat, 28 Mar 2020 08:27:27 +0000 Subject: [New-bugs-announce] [issue40095] Incorrect st_ino returned for ReFS on Windows 10 Message-ID: <1585384047.06.0.773958986435.issue40095@roundup.psfhosted.org> New submission from Martynas Brijunas : On a Windows 10 volume formatted with ReFS, pathlib.Path.stat() returns an incorrect value for "st_ino". The correct value returned by the OS: C:\Users>fsutil file queryfileid u:\test\test.jpg File ID is 0x00000000000029d500000000000004ae An incorrect value obtained with pathlib.Path.stat(): Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> from pathlib import Path
>>> hex(Path('U:/test/test.jpg').stat().st_ino)
'0x4000000004ae29d5'

The problem does *not* exist on an NTFS volume:

C:\Users>fsutil file queryfileid o:\OneDrive\test\test.jpg
File ID is 0x0000000000000000000300000001be39

>>> hex(Path('O:/OneDrive/test/test.jpg').stat().st_ino)
'0x300000001be39'

---------- components: Library (Lib) messages: 365206 nosy: mbrijun at gmail.com priority: normal severity: normal status: open title: Incorrect st_ino returned for ReFS on Windows 10 type: behavior versions: Python 3.8
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 28 08:19:57 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 28 Mar 2020 12:19:57 +0000 Subject: [New-bugs-announce] [issue40096] Support _Py_NO_RETURN on XLC Message-ID: <1585397997.42.0.0757475150087.issue40096@roundup.psfhosted.org>

New submission from Batuhan Taskaya :

Like clang and gcc, __attribute__((noreturn)) can be used in xlc too for AIX.

---------- messages: 365208 nosy: BTaskaya priority: normal severity: normal status: open title: Support _Py_NO_RETURN on XLC type: enhancement versions: Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Mar 28 14:51:42 2020 From: report at bugs.python.org (Guy Kogan) Date: Sat, 28 Mar 2020 18:51:42 +0000 Subject: [New-bugs-announce] [issue40097] Queue.Empty issue - Python3.8 Message-ID: <1585421502.46.0.860098534769.issue40097@roundup.psfhosted.org>

New submission from Guy Kogan :

The Python 3.8 Queue module is unable to handle the exception; error output:

Exception in thread Thread-5:
Traceback (most recent call last):
  File "FTP_multithreading.py", line 17, in run
    new_connection = self.queue.get(timeout=2)
  File "/usr/local/lib/python3.8/queue.py", line 178, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "FTP_multithreading.py", line 18, in run
    except queue.Empty:
AttributeError: 'Queue' object has no attribute 'Empty'

b'220 ns.iren.ru FTP server (Version wu-2.6.2(1) Fri Sep 12 08:50:43 IRKST 2008) ready.\r\n'

Exception in thread Thread-4:
Traceback (most recent call last):
  File "FTP_multithreading.py", line 17, in run
    new_connection = self.queue.get(timeout=2)
  File "/usr/local/lib/python3.8/queue.py", line 178, in get
    raise Empty
_queue.Empty

When the last task is done, the queue.Empty exception should occur, but while handling that exception another exception occurs.
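The AttributeError above comes from looking up Empty on a Queue instance; the Empty exception lives on the queue module itself. A minimal sketch of the working pattern (names are illustrative, not taken from the attached script):

```
import queue

q = queue.Queue()
try:
    item = q.get(timeout=2)
except queue.Empty:  # module-level exception, not an attribute of Queue
    print("queue drained")
```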
---------- files: FTP_Code_queue.py messages: 365225 nosy: Python_dev_IL priority: normal severity: normal status: open title: Queue.Empty issue - Python3.8 type: crash versions: Python 3.8 Added file: https://bugs.python.org/file49005/FTP_Code_queue.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 28 15:47:28 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 28 Mar 2020 19:47:28 +0000 Subject: [New-bugs-announce] [issue40098] dir() does not return the list of valid attributes for the object Message-ID: <1585424848.72.0.725162902614.issue40098@roundup.psfhosted.org> New submission from Serhiy Storchaka : Due to the difference in the code of __getattr__ and __dir__ for object and type dir() does not return the list of valid attributes for the object. It can return a name which is not a valid attribute and miss a name of the valid attribute. 1. It does not support metaclasses. class M(type): x = 1 class A(metaclass=M): pass assert hasattr(A, 'x') assert 'x' not in dir(A) 2. It does not use __mro__, but uses __bases__ recursively. class M(type): def mro(cls): if cls.__name__ == 'A': return cls, B return cls, class B(metaclass=M): x = 1 class A(metaclass=M): pass assert hasattr(A, 'x') assert 'x' not in dir(A) 3. It uses the __dict__ attribute instead of the instance dict (they can be different). class A: @property def __dict__(self): return {'x': 1} assert not hasattr(A(), 'x') assert 'x' in dir(A()) 4. It uses the __class__ attribute instead of type(). class B: y = 2 class A: x = 1 @property def __class__(self): return B assert hasattr(A, 'x') assert not hasattr(A, 'y') assert hasattr(A(), 'x') assert not hasattr(A(), 'y') assert 'x' in dir(A) assert 'y' not in dir(A) assert 'x' not in dir(A()) assert 'y' in dir(A()) 4.1. As a side effect dir() creates an instance dictionary if it was not initialized yet (for memory saving). It is possible to make these implementations of __dir__() returning exactly what the corresponding __getattr__() accepts, not more and not less. The code will even be much simpler. But is it what we want? ---------- components: Interpreter Core messages: 365227 nosy: gvanrossum, serhiy.storchaka, tim.peters priority: normal severity: normal status: open title: dir() does not return the list of valid attributes for the object type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Mar 28 22:26:49 2020 From: report at bugs.python.org (gaoxinge) Date: Sun, 29 Mar 2020 02:26:49 +0000 Subject: [New-bugs-announce] [issue40099] modify code format of json library for pep7, 8 Message-ID: <1585448809.29.0.0599404433077.issue40099@roundup.psfhosted.org> Change by gaoxinge : ---------- components: Library (Lib) nosy: gaoxinge priority: normal pull_requests: 18575 severity: normal status: open title: modify code format of json library for pep7,8 type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 29 04:29:07 2020 From: report at bugs.python.org (Alex) Date: Sun, 29 Mar 2020 08:29:07 +0000 Subject: [New-bugs-announce] [issue40100] Unexpected behavior when handling emoji under codec cp936 Message-ID: <1585470547.47.0.79695308507.issue40100@roundup.psfhosted.org> New submission from Alex <2423067593 at qq.com>: Python 3.8.2 IDLE has an unexpected behavior. 
When I insert an emoji into IDLE like '?'. Then I found I can't delete it(by typing backspace). When I type the left arrow then it became '??'(U+FFFD). Then I type the left arrow again, then it is '?' again! (When I use the delete button, or type the right button there aren't any bugs.) What's wrong? Also, when I have two emojis like '??', I press delete button between them, nothing happens; when I delete on the right, both of them disappear! (This bug seems not appears on plain 0.) ---------- assignee: terry.reedy components: IDLE messages: 365247 nosy: Alex-Python-Programmer, terry.reedy priority: normal severity: normal status: open title: Unexpected behavior when handling emoji under codec cp936 type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 29 11:24:48 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sun, 29 Mar 2020 15:24:48 +0000 Subject: [New-bugs-announce] [issue40101] lib2to3 fails on default convert functionality Message-ID: <1585495488.9.0.817562542084.issue40101@roundup.psfhosted.org> New submission from Batuhan Taskaya : When using driver/parser without a custom convert function (like pyconvert.convert), it tries to assign used_names to the root node, which can be anything depending on the given convert function. If none given, it will be a tuple. >>> from lib2to3.pygram import python_grammar >>> from lib2to3.pgen2.driver import Driver >>> >>> d = Driver(grammar=python_grammar) >>> d.parse_string("test\n") Traceback (most recent call la Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.9/lib2to3/pgen2/driver.py", line 103, in parse_string return self.parse_tokens(tokens, debug) File "/usr/local/lib/python3.9/lib2to3/pgen2/driver.py", line 71, in parse_tokens if p.addtoken(type, value, (prefix, start)): File "/usr/local/lib/python3.9/lib2to3/pgen2/parse.py", line 136, in addtoken self.pop() File "/usr/local/lib/python3.9/lib2to3/pgen2/parse.py", line 204, in pop self.rootnode.used_names = self.used_names AttributeError: 'tuple' object has no attribute 'used_names' ---------- components: Library (Lib) messages: 365260 nosy: BTaskaya priority: normal severity: normal status: open title: lib2to3 fails on default convert functionality versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 29 12:31:31 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sun, 29 Mar 2020 16:31:31 +0000 Subject: [New-bugs-announce] [issue40102] Improve XLC support for function attributes Message-ID: <1585499491.65.0.60964453841.issue40102@roundup.psfhosted.org> New submission from Batuhan Taskaya : XLC supports visibility, deprecated and aligned attributes (can be seen in this language reference https://www.ibm.com/support/pages/sites/default/files/support/swg/swgdocs.nsf/0/18b50c3c2309a37585257daf004d373e/%24FILE/langref.pdf) ---------- messages: 365267 nosy: BTaskaya, pablogsal priority: normal severity: normal status: open title: Improve XLC support for function attributes type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 29 12:33:47 2020 From: report at bugs.python.org (Nathan Silberman) Date: Sun, 29 Mar 2020 16:33:47 +0000 Subject: [New-bugs-announce] [issue40103] ZipFile.extractall is not multiprocess 
safe with regard to directory creation. Message-ID: <1585499627.5.0.740021279333.issue40103@roundup.psfhosted.org> New submission from Nathan Silberman : When extracting multiple zip files, each from a separate process, if the files being extracted are in nested directories and files across zips contain the same parent directories, the extraction process fails as one zip attempts to create a directory that was already created by the extraction call in another process. ---------- components: Library (Lib) messages: 365268 nosy: nathansilberman priority: normal severity: normal status: open title: ZipFile.extractall is not multiprocess safe with regard to directory creation. type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 29 15:13:13 2020 From: report at bugs.python.org (Myron Walker) Date: Sun, 29 Mar 2020 19:13:13 +0000 Subject: [New-bugs-announce] [issue40104] ElementTree does not find elements in a default namespace with namespaces Message-ID: <1585509193.56.0.500211737175.issue40104@roundup.psfhosted.org> New submission from Myron Walker : When you have an xml document like the one with a default namespace below. When you try to lookup nodes in the document they are not found. ``` docTree.find("specVersion") None ``` If you add a namespaces map with the '' key and the default namespaces like: { '': 'urn:schemas-upnp-org:device-1-0' } then the nodes are still not found. The issue is that there is a case missing from xpath_tokenizer that does not yield a pair with the default namespace when one is specified. Here is a fix. https://github.com/myronww/cpython/commit/0fc65daca239624139f2a018a83f0b0ec04a8068 ``` from xml.etree.ElementTree import fromstring as parse_xml_fromstring from xml.etree.ElementTree import ElementTree SAMPLEXML = """ 10 urn:schemas-wifialliance-org:device:WFADevice:1 R7400 (Wireless AP) rootNode = parse_xml_fromstring(SAMPLEXML) # Create a namespaces map like { '': 'urn:schemas-upnp-org:device-1-0' } defaultns = {"": docNode.tag.split("}")[0][1:]} specVerNode = docNode.find("specVersion", namespaces=defaultns) ``` Results should look like this ``` docNode.find("specVersion", namespaces=defaultns) ``` ---------- components: Library (Lib) messages: 365273 nosy: Myron Walker priority: normal severity: normal status: open title: ElementTree does not find elements in a default namespace with namespaces type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 29 16:48:23 2020 From: report at bugs.python.org (Yudi) Date: Sun, 29 Mar 2020 20:48:23 +0000 Subject: [New-bugs-announce] [issue40105] Updating zip comment doesn't truncate the zip file Message-ID: <1585514903.71.0.609611507951.issue40105@roundup.psfhosted.org> New submission from Yudi : Updating the zip file comment to a shorter comment should truncate the zip file at the end of the new comment. Instead, the new comment is written to the zip file but the file length stays the same. For example, for a zip file that has the following zip comment: b'This is my old amazing comment, I bet you really like it!' 
# 57 character long Executing the following code: zipFile = ZipFile(filePath, 'a') zipFile.comment = b'My new shorter comment' # 22 character long zipFile.close() Will actually update the comment length in the zip header to the correct new length (22), but the bytecode will still have the following data: b'My new shorter comment comment, I bet you really like it!' Python reads the comment correctly since it relies on the comment length from the metadata (as far as I can tell), but the file is corrupt. This is similar to the following old issue - https://bugs.python.org/issue9239 But I wasn't sure whether to try and re-open that old one or create a new one. Tested on version 3.8.2 (Windows 10). Thanks! ---------- components: Library (Lib) messages: 365278 nosy: yudilevi priority: normal severity: normal status: open title: Updating zip comment doesn't truncate the zip file type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Mar 29 17:49:38 2020 From: report at bugs.python.org (Mouse) Date: Sun, 29 Mar 2020 21:49:38 +0000 Subject: [New-bugs-announce] [issue40106] multiprocessor spawn Message-ID: <1585518578.2.0.0489480686513.issue40106@roundup.psfhosted.org> New submission from Mouse : MacOS Catalina 10.15.3 and 10.15.4. Python-3.8.2 (also tested with 3.7.7, which confirmed the problem being in the fix described in https://bugs.python.org/issue33725. Trying to use "multiprocessor" with Python-3.8 and with the new default of `set_start_method('spawn')` is nothing but a disaster. Not doing join() leads to consistent crashes, like described here https://bugs.python.org/issue33725#msg365249 Adding p.join() immediately after p.start() seems to work, but increases the total run-time by factor between two and four, user time by factor of five, and system time by factor of ten. Occasionally even with p.join() I'm getting some processes crashing like shown in https://bugs.python.org/issue33725#msg365249. I found two workarounds: 1. Switch back to 'fork' by explicitly adding `set_start_method('fork') to the __main__. 2. Drop the messy "multiprocessing" package and use "multiprocess" instead, which turns out to be a good and reliable fork of "multiprocessing". If anybody cares to dig deeper into this problem, I'd be happy to provide whatever information that could be helpful. Here's the sample code (again): ``` #!/usr/bin/env python3 # # Test "multiprocessing" package included with Python-3.6+ # # Usage: # ./mylti1.py [nElements [nProcesses [tSleep]]] # # nElements - total number of integers to put in the queue # default: 100 # nProcesses - total number of parallel processes/threads # default: number of physical cores available # tSleep - number of milliseconds for a thread to sleep # after it retrieved an element from the queue # default: 17 # # Algorithm: # 1. Creates a queue and adds nElements integers to it, # 2. Creates nProcesses threads # 3. 
#   3. Each thread extracts an element from the queue and sleeps for tSleep milliseconds
#

import sys, queue, time
import multiprocessing as mp

def getElements(q, tSleep, idx):
    l = []  # list of pulled numbers
    while True:
        try:
            l.append(q.get(True, .001))
            time.sleep(tSleep)
        except queue.Empty:
            if q.empty():
                print(f'worker {idx} done, got {len(l)} numbers')
                return

if __name__ == '__main__':
    nElements = int(sys.argv[1]) if len(sys.argv) > 1 else 100
    nProcesses = int(sys.argv[2]) if len(sys.argv) > 2 else mp.cpu_count()
    tSleep = float(sys.argv[3]) if len(sys.argv) > 3 else 17

    # To make this sample code work reliably and fast, uncomment following line
    #mp.set_start_method('fork')

    # Fill the queue with numbers from 0 to nElements
    q = mp.Queue()
    for k in range(nElements):
        q.put(k)

    # Keep track of worker processes
    workers = []

    # Start worker processes
    for m in range(nProcesses):
        p = mp.Process(target=getElements, args=(q, tSleep / 1000, m))
        workers.append(p)
        p.start()

    # Now do the joining
    for p in workers:
        p.join()
```

Here's the timing:

```
$ time python3 multi1.py
worker 9 done, got 5 numbers
worker 16 done, got 5 numbers
worker 6 done, got 5 numbers
worker 8 done, got 5 numbers
worker 17 done, got 5 numbers
worker 3 done, got 5 numbers
worker 14 done, got 5 numbers
worker 0 done, got 5 numbers
worker 15 done, got 4 numbers
worker 7 done, got 5 numbers
worker 5 done, got 5 numbers
worker 12 done, got 5 numbers
worker 4 done, got 5 numbers
worker 19 done, got 5 numbers
worker 18 done, got 5 numbers
worker 1 done, got 5 numbers
worker 10 done, got 5 numbers
worker 2 done, got 5 numbers
worker 11 done, got 6 numbers
worker 13 done, got 5 numbers

real 0m0.325s
user 0m1.375s
sys  0m0.692s
```

If I comment out the join() and uncomment set_start_method('fork'), the timing is:

```
$ time python3 multi1.py
worker 0 done, got 5 numbers
worker 3 done, got 5 numbers
worker 2 done, got 5 numbers
worker 1 done, got 5 numbers
worker 5 done, got 5 numbers
worker 10 done, got 5 numbers
worker 6 done, got 5 numbers
worker 4 done, got 5 numbers
worker 7 done, got 5 numbers
worker 9 done, got 5 numbers
worker 8 done, got 5 numbers
worker 14 done, got 5 numbers
worker 11 done, got 5 numbers
worker 12 done, got 5 numbers
worker 13 done, got 5 numbers
worker 16 done, got 5 numbers
worker 15 done, got 5 numbers
worker 17 done, got 5 numbers
worker 18 done, got 5 numbers
worker 19 done, got 5 numbers

real 0m0.175s
user 0m0.073s
sys  0m0.070s
```

You can observe the difference.
Here's the timing if I don't bother with either join() or set_start_method(), but import "multiprocess" instead:

```
$ time python3 multi2.py
worker 0 done, got 5 numbers
worker 1 done, got 5 numbers
worker 2 done, got 5 numbers
worker 4 done, got 5 numbers
worker 3 done, got 5 numbers
worker 5 done, got 5 numbers
worker 6 done, got 5 numbers
worker 8 done, got 5 numbers
worker 9 done, got 5 numbers
worker 7 done, got 5 numbers
worker 14 done, got 5 numbers
worker 11 done, got 5 numbers
worker 13 done, got 5 numbers
worker 16 done, got 5 numbers
worker 12 done, got 5 numbers
worker 10 done, got 5 numbers
worker 15 done, got 5 numbers
worker 17 done, got 5 numbers
worker 18 done, got 5 numbers
worker 19 done, got 5 numbers

real 0m0.192s
user 0m0.089s
sys  0m0.076s
```

Also, on a weaker machine with only 4 cores (rather than the 20 that ran the above example), the instability of the "multiprocessing"-based code shows:

```
$ time python3.8 multi1.py
worker 3 done, got 33 numbers
worker 2 done, got 33 numbers
worker 1 done, got 34 numbers
worker 0 done, got 0 numbers

real 0m5.448s
user 0m0.339s
sys  0m0.196s
```

Observe how one process out of four got nothing from the queue. With "multiprocess" the code runs like clockwork - each process gets exactly 1/N of the queue:

```
$ time python3.8 multi2.py
worker 0 done, got 25 numbers
worker 1 done, got 25 numbers
worker 2 done, got 25 numbers
worker 3 done, got 25 numbers

real 0m0.551s
user 0m0.082s
sys  0m0.044s
```

I think that the best course for "multiprocessing" would be reverting the default to 'fork'. It also looks like the best course for users would be switching to "multiprocess".

---------- components: macOS messages: 365279 nosy: mouse07410, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: multiprocessor spawn type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 29 18:37:39 2020 From: report at bugs.python.org (Barney Gale) Date: Sun, 29 Mar 2020 22:37:39 +0000 Subject: [New-bugs-announce] [issue40107] pathlib: make `_Accessor.open()` return a file object and not a file descriptor Message-ID: <1585521459.52.0.94031195769.issue40107@roundup.psfhosted.org>

New submission from Barney Gale :

This is one of a series of bug reports / PRs that lay the groundwork for making pathlib extensible. See here for detail: https://discuss.python.org/t/make-pathlib-extensible/3428

Currently `_Accessor.open()` is expected to function like `os.open()` and return a file descriptor. I'd suggest this interface is too low-level if our eventual aim is to allow users to implement their own accessor. It would be better if `_Accessor.open()` were expected to function like `io.open()` and return a file object. That way, accessors don't need to deal with file descriptors at all, which is important if they're working with remote filesystems.

I'm planning to wait for bpo-39895 / gh-18838 to land before starting work on this.
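For illustration, a minimal sketch of the shape being proposed (the class name and exact signature here are hypothetical, not the actual private pathlib API):

```
import io

class FileObjectAccessor:
    # Illustrative accessor whose open() behaves like io.open().
    def open(self, path, mode='r', buffering=-1, encoding=None,
             errors=None, newline=None):
        # Returns a file object rather than an integer file descriptor,
        # so an accessor backed by a remote filesystem could return any
        # file-like object here instead.
        return io.open(path, mode, buffering, encoding, errors, newline)
```

A remote-filesystem accessor would then only need to return something file-like from open(), with no OS-level descriptor involved.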
---------- components: Library (Lib) messages: 365283 nosy: barneygale priority: normal severity: normal status: open title: pathlib: make `_Accessor.open()` return a file object and not a file descriptor type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 29 20:02:03 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Mon, 30 Mar 2020 00:02:03 +0000 Subject: [New-bugs-announce] [issue40108] Improve error message for -m option when .py is present Message-ID: <1585526523.79.0.240338395586.issue40108@roundup.psfhosted.org>

New submission from Raymond Hettinger :

I think we can do better than the following:

$ python3.8 -m unicode_math_symbols.py
/Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8: Error while finding module specification for 'unicode_math_symbols.py' (ModuleNotFoundError: __path__ attribute not found on 'unicode_math_symbols' while trying to find 'unicode_math_symbols.py')

It is a reasonably common mistake to add .py to the name of a module loaded by the -m command-line launcher. The error message is somewhat opaque and not suggestive of what the actual problem is or how to fix it.

---------- messages: 365286 nosy: rhettinger priority: normal severity: normal status: open title: Improve error message for -m option when .py is present versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Mar 29 21:22:35 2020 From: report at bugs.python.org (Nathan Brooks) Date: Mon, 30 Mar 2020 01:22:35 +0000 Subject: [New-bugs-announce] [issue40109] List index doesn't work with multiple assignment Message-ID: <1585531355.78.0.895999484391.issue40109@roundup.psfhosted.org>

New submission from Nathan Brooks :

Faulty example:

x = [1,2,3,4,5,6,7]
# this should replace items 3 and 6 with each other
x[2], x[x.index(6)] = 6, x[2]
print(x)
[1,2,3,4,5,6,7]

Workaround:

x = [1,2,3,4,5,6,7]
i = x.index(6)
# this replaces items 3 and 6 in the list.
x[2], x[i] = 6, x[2]
print(x)
[1,2,6,4,5,3,7]

---------- messages: 365289 nosy: Nathan Brooks priority: normal severity: normal status: open title: List index doesn't work with multiple assignment _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 00:06:27 2020 From: report at bugs.python.org (Nick Guenther) Date: Mon, 30 Mar 2020 04:06:27 +0000 Subject: [New-bugs-announce] [issue40110] multiprocessing.Pool.imap() should be lazy Message-ID: <1585541187.46.0.606629490552.issue40110@roundup.psfhosted.org>

New submission from Nick Guenther :

multiprocessing.Pool.imap() is supposed to be a lazy version of map. But it's not: it submits work to its workers eagerly. As a consequence, in a pipeline, all the work from earlier steps is queued, performed, and finished first, before later steps start. If you use python3's built-in map() -- aka the old itertools.imap() -- the operations are on-demand, so it surprised me that Pool.imap() isn't. It's basically no better than using Pool.map(). Maybe it saves memory by not materializing large iterables in every worker process? But it still spends the CPU time consuming the iterables even if the results are never needed.
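A minimal way to observe the eagerness described above (an illustrative sketch, not the attached multiprocessing-eager-imap.py):

```
import multiprocessing as mp
import time

def noisy_source():
    # Prints as each item is pulled from the input iterable (in the parent).
    for i in range(5):
        print('producing', i)
        yield i

def work(x):
    return x * x

if __name__ == '__main__':
    with mp.Pool(2) as pool:
        results = pool.imap(work, noisy_source())
        time.sleep(1)
        # Typically all "producing" lines have already appeared by now,
        # i.e. the source was drained before any result was requested.
        print(next(results))
```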
This can be partially worked around by giving each step of the pipeline its own Pool -- then, at least, the earlier steps of the pipeline don't block the later steps -- but the jobs are still done eagerly instead of waiting for their results to actually be requested.

---------- files: multiprocessing-eager-imap.py messages: 365295 nosy: kousu priority: normal severity: normal status: open title: multiprocessing.Pool.imap() should be lazy versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49010/multiprocessing-eager-imap.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 01:51:59 2020 From: report at bugs.python.org (Dima Tisnek) Date: Mon, 30 Mar 2020 05:51:59 +0000 Subject: [New-bugs-announce] [issue40111] Introspect ssl context: read ALPN and NPN protocols Message-ID: <1585547519.26.0.563091445054.issue40111@roundup.psfhosted.org>

New submission from Dima Tisnek :

It's quite easy to create a new ssl context or modify an existing one:

ssl_context = ssl.create_default_context()
ssl_context.set_alpn_protocols(["h2"])

I'm writing a library where the context may be passed in by the caller (useful if the caller wants to set a custom CA path, use client cert auth, share TLS session tickets, etc.). I'd love to be able to check that the context I get has the correct ALPN and/or NPN protocols specified. I'd love to be able to do something like this:

assert "h2" in ssl_context.alpn_protocols

or

assert "h2" in ssl_context.get_alpn_protocols()

There's sort of a precedent for this; I use the following code to set and check TLS version flags:

ssl_context.options |= ssl.OP_NO_TLSv1
assert ssl.OP_NO_TLSv1 in ssl_context.options

---------- components: Extension Modules messages: 365300 nosy: Dima.Tisnek priority: normal severity: normal status: open title: Introspect ssl context: read ALPN and NPN protocols versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 07:18:15 2020 From: report at bugs.python.org (Michael Felt) Date: Mon, 30 Mar 2020 11:18:15 +0000 Subject: [New-bugs-announce] [issue40112] AIX: xlc - default path changed and no longer recognized Message-ID: <1585567095.85.0.705098258477.issue40112@roundup.psfhosted.org>

New submission from Michael Felt :

There is a check whether the compiler is xlc, which skips a test if it is. XLC no longer installs in /usr/vac, and test_search_cpp fails (again).

---------- components: Distutils messages: 365302 nosy: Michael.Felt, dstufft, eric.araujo priority: normal severity: normal status: open title: AIX: xlc - default path changed and no longer recognized type: behavior _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 07:32:25 2020 From: report at bugs.python.org (Davide Golinelli) Date: Mon, 30 Mar 2020 11:32:25 +0000 Subject: [New-bugs-announce] [issue40113] Turtle demo Message-ID: <1585567945.94.0.599224677.issue40113@roundup.psfhosted.org>

New submission from Davide Golinelli :

Running the attached simple program, the turtle goes backwards even though it was not asked to.
I added a sleep command in order to view the bug more easily.

---------- components: Tests files: spike.py messages: 365303 nosy: Davide Golinelli priority: normal severity: normal status: open title: Turtle demo type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file49011/spike.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 07:37:25 2020 From: report at bugs.python.org (brendon zhang) Date: Mon, 30 Mar 2020 11:37:25 +0000 Subject: [New-bugs-announce] [issue40114] support maxsize argument for lru_cache's user_function option Message-ID: <1585568245.56.0.943686007567.issue40114@roundup.psfhosted.org>

Change by brendon zhang :

---------- components: Library (Lib) nosy: brendon-zhang at hotmail.com, rhettinger priority: normal severity: normal status: open title: support maxsize argument for lru_cache's user_function option type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 10:38:30 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 30 Mar 2020 14:38:30 +0000 Subject: [New-bugs-announce] [issue40115] test_asyncio leaked [3, 3, 3] references, sum=9 Message-ID: <1585579110.73.0.577311644063.issue40115@roundup.psfhosted.org>

New submission from STINNER Victor :

x86 Gentoo Refleaks 3.x: https://buildbot.python.org/all/#/builders/16/builds/128

test_asyncio leaked [3, 3, 3] references, sum=9
test_asyncio leaked [3, 5, 3] memory blocks, sum=11
3:26:23 load avg: 3.78 Re-running test_asyncio in verbose mode
test_asyncio leaked [3, 3, 24] references, sum=30
test_asyncio leaked [3, 3, 26] memory blocks, sum=32

---------- components: Tests, asyncio messages: 365318 nosy: asvetlov, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio leaked [3, 3, 3] references, sum=9 versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 11:13:56 2020 From: report at bugs.python.org (Mark Shannon) Date: Mon, 30 Mar 2020 15:13:56 +0000 Subject: [New-bugs-announce] [issue40116] Regression in memory use of shared key dictionaries for "compact dicts" Message-ID: <1585581236.94.0.571794171265.issue40116@roundup.psfhosted.org>

New submission from Mark Shannon :

The current implementation of dicts prevents keys from being shared when the order of attribute assignment differs from that of the first instance created. This can potentially use a considerably larger amount of memory than expected. Consider the class:

class C:
    opt = DEFAULT
    def __init__(self, attr, optional=None):
        if optional:
            self.opt = optional
        self.attr = attr

This is a reasonable way to write a class, but it has unpredictable memory use. In the attached example, per-instance dict size goes from 104 bytes to 232 bytes when sharing is prevented.

The language specification says that dicts maintain insertion order, but the wording implies that this applies only to explicit dictionaries, not to instance attribute or other namespace dicts. Either we should allow key sharing in these cases, or clarify the documentation.
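One way to observe the effect (the attached compact_dict_prevents_key_sharing.py is not reproduced here; DEFAULT is given a value only to make the snippet self-contained, and the exact sizes vary by version and platform):

```
import sys

DEFAULT = "default"

class C:
    opt = DEFAULT
    def __init__(self, attr, optional=None):
        if optional:
            self.opt = optional
        self.attr = attr

a = C(1)              # attributes inserted in the usual order: keys stay shared
b = C(1, optional=2)  # 'opt' is inserted before 'attr': sharing is lost

# The second dict is typically much larger once it stops sharing keys.
print(sys.getsizeof(a.__dict__), sys.getsizeof(b.__dict__))
```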
---------- components: Interpreter Core files: compact_dict_prevents_key_sharing.py messages: 365319 nosy: Mark.Shannon, inada.naoki priority: normal severity: normal stage: test needed status: open title: Regression in memory use of shared key dictionaries for "compact dicts" type: behavior Added file: https://bugs.python.org/file49013/compact_dict_prevents_key_sharing.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 11:31:16 2020 From: report at bugs.python.org (Hameer Abbasi) Date: Mon, 30 Mar 2020 15:31:16 +0000 Subject: [New-bugs-announce] [issue40117] __round__ doesn't behave well with return NotImplemented Message-ID: <1585582276.55.0.632143445993.issue40117@roundup.psfhosted.org>

New submission from Hameer Abbasi :

Minimal reproducer:

>>> class A:
...     def __round__(self):
...         return NotImplemented
...
>>> round(A())
NotImplemented

This should give a TypeError. Raising one would be useful when deciding, for example, whether a given a.dtype implements round, based on the dtype.

---------- components: Interpreter Core messages: 365323 nosy: Hameer Abbasi priority: normal severity: normal status: open title: __round__ doesn't behave well with return NotImplemented versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 11:35:28 2020 From: report at bugs.python.org (omer sela) Date: Mon, 30 Mar 2020 15:35:28 +0000 Subject: [New-bugs-announce] [issue40118] os.stat in linux shows the wrong inode Message-ID: <1585582528.02.0.848256815415.issue40118@roundup.psfhosted.org>

New submission from omer sela :

When calling os.stat(fd).st_ino with a file descriptor of a symbolic link, it returns the inode of the original file and not that of the link (picture attached).

---------- components: Library (Lib) files: python_bug.png messages: 365324 nosy: omer sela priority: normal severity: normal status: open title: os.stat in linux shows the wrong inode type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file49015/python_bug.png _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 15:09:41 2020 From: report at bugs.python.org (Anthon van der Neut) Date: Mon, 30 Mar 2020 19:09:41 +0000 Subject: [New-bugs-announce] [issue40119] ensurepip should use different pattern for pip/setuptools wheel files Message-ID: <1585595381.19.0.00980598804085.issue40119@roundup.psfhosted.org>

New submission from Anthon van der Neut :

Setuptools, starting with version 45.1.0, is no longer shipped as a -py2.py3-none-any.whl file, but as a -py3-none-any.whl file. In ensurepip's __init__.py the former is hard-coded, so the setuptools shipping with Python (for 3.9.0a5 this is 41.2.0) cannot be upgraded beyond 45.0.0.

I can provide a PR that fixes this (either by using glob.glob() to find the exact .whl available, or by extending the tuple in _PROJECTS).

---------- messages: 365344 nosy: anthon priority: normal severity: normal status: open title: ensurepip should use different pattern for pip/setuptools wheel files type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Mon Mar 30 17:04:23 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Mon, 30 Mar 2020 21:04:23 +0000 Subject: [New-bugs-announce] [issue40120] Undefined C behavior going beyond end of struct via a char[1]. Message-ID: <1585602263.71.0.279519604291.issue40120@roundup.psfhosted.org>

New submission from Gregory P. Smith :

The correct C99 way to do this is using a char[]. PyBytesObject and unicode's struct encoding_map both do this. It is unclear to me whether we should backport this to earlier versions or not (because PyBytesObject may be exposed?). Probably, but I also doubt it is a big deal, as compilers are generally not _yet_ making use of this detail AFAIK.

---------- assignee: gregory.p.smith components: Interpreter Core messages: 365349 nosy: gregory.p.smith priority: normal severity: normal stage: patch review status: open title: Undefined C behavior going beyond end of struct via a char[1]. type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Mar 30 17:31:39 2020 From: report at bugs.python.org (Steve Dower) Date: Mon, 30 Mar 2020 21:31:39 +0000 Subject: [New-bugs-announce] [issue40121] socket module missing audit events Message-ID: <1585603899.47.0.326918129099.issue40121@roundup.psfhosted.org>

New submission from Steve Dower :

Some of the events it was supposed to raise are not being raised. This is likely my fault for not adding thorough testing to the test suite at the time (I'm uncovering them now with a different test suite...). I'll do a thorough review of this module and post a PR with the fixes. This shouldn't require any new events, as far as I can tell.

---------- assignee: steve.dower messages: 365356 nosy: steve.dower priority: normal severity: normal stage: needs patch status: open title: socket module missing audit events type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 02:33:39 2020 From: report at bugs.python.org (laike9m) Date: Tue, 31 Mar 2020 06:33:39 +0000 Subject: [New-bugs-announce] [issue40122] The implementation and documentation of "dis.findlabels" don't match Message-ID: <1585636419.17.0.745722185034.issue40122@roundup.psfhosted.org>

New submission from laike9m :

The documentation of dis.findlabels says:

> dis.findlabels(code)
> Detect all offsets in the code object code which are jump targets, and return a list of these offsets.

But the implementation actually expects raw compiled bytecode:

>>> def f(): pass
>>> dis.findlabels(f.__code__)
Traceback (most recent call last):
  File "", line 1, in
  File "/Users/laike9m/.pyenv/versions/3.7.4/lib/python3.7/dis.py", line 424, in findlabels
    for offset, op, arg in _unpack_opargs(code):
  File "/Users/laike9m/.pyenv/versions/3.7.4/lib/python3.7/dis.py", line 408, in _unpack_opargs
    for i in range(0, len(code), 2):
TypeError: object of type 'code' has no len()
>>> dis.findlabels(f.__code__.co_code)
[]

Since the other functions in the dis module all accept a code object rather than compiled bytecode, I would suggest changing the code instead of the documentation. Plus, this function does not seem to be tested anywhere. I can make a PR if this issue is confirmed.
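A rough sketch of the direction being suggested (the helper name here is made up for illustration; this is not the actual patch):

```
import dis

def findlabels_compat(code):
    # Accept either a code object or raw bytecode, the way other dis
    # helpers do, and fall back to the raw bytes otherwise.
    raw = code.co_code if hasattr(code, "co_code") else code
    return dis.findlabels(raw)

def f(x):
    if x:
        return 1
    return 2

# Works with both forms; the exact offsets depend on the Python version.
print(findlabels_compat(f.__code__))
print(findlabels_compat(f.__code__.co_code))
```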
---------- components: Library (Lib) messages: 365367 nosy: laike9m, serhiy.storchaka priority: normal severity: normal status: open title: The implementation and documentation of "dis.findlabels" don't match versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 05:24:14 2020 From: report at bugs.python.org (merli) Date: Tue, 31 Mar 2020 09:24:14 +0000 Subject: [New-bugs-announce] [issue40123] append() works not correct Message-ID: <1585646654.92.0.927427440827.issue40123@roundup.psfhosted.org>

New submission from merli :

Please try this out and look at the bug in this example:

Liste = []
StringL = ["Nr.:", "NR", "Bielefeld", "Paderborn", "Lemgo"]

for i in range (10):
    StringL[1] = str(i)
    Liste.append(StringL)
    print (StringL)
    #print (Liste)

print ()
print()

for i in range (10):
    print (Liste[i])

---------- messages: 365371 nosy: merli priority: normal severity: normal status: open title: append() works not correct type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 07:34:58 2020 From: report at bugs.python.org (Phil) Date: Tue, 31 Mar 2020 11:34:58 +0000 Subject: [New-bugs-announce] [issue40124] Clearer assertion error Message-ID: <1585654498.8.0.880221845215.issue40124@roundup.psfhosted.org>

New submission from Phil :

https://discuss.python.org/t/assertionerror-asyncio-streams-in-drain-helper/3743/4

I recently came across this error, which I now know how to fix. I think the error can be clearer, and I have a PR which I think makes it so.

---------- components: asyncio messages: 365378 nosy: asvetlov, pgjones, yselivanov priority: normal severity: normal status: open title: Clearer assertion error versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 11:05:15 2020 From: report at bugs.python.org (Benjamin Peterson) Date: Tue, 31 Mar 2020 15:05:15 +0000 Subject: [New-bugs-announce] [issue40125] update OpenSSL 1.1.1 in multissltests.py to 1.1.1f Message-ID: <1585667115.02.0.367576519808.issue40125@roundup.psfhosted.org>

Change by Benjamin Peterson :

---------- assignee: christian.heimes components: SSL nosy: benjamin.peterson, christian.heimes priority: normal severity: normal status: open title: update OpenSSL 1.1.1 in multissltests.py to 1.1.1f versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 12:07:24 2020 From: report at bugs.python.org (Barry McLarnon) Date: Tue, 31 Mar 2020 16:07:24 +0000 Subject: [New-bugs-announce] [issue40126] Incorrect error handling in unittest.mock Message-ID: <1585670844.45.0.631660598637.issue40126@roundup.psfhosted.org>

New submission from Barry McLarnon :

The error handling in mock.decorate_callable (3.5-3.7) and mock.decoration_helper (3.8-3.9) is incorrectly implemented. If the error handler is triggered in the loop, the `patching` variable is out of scope and an unhandled `UnboundLocalError` is raised instead. This happened as a result of a 3rd-party library that attempts to clear the `patchings` list of a decorated function.
The below code shows a recreation of the incorrect error handling:

import functools
from unittest import mock

def is_valid():
    return True

def mock_is_valid():
    return False

def decorate(f):
    @functools.wraps(f)
    def decorate_wrapper(*args, **kwargs):
        # This happens in a 3rd-party library
        f.patchings = []
        return f(*args, **kwargs)
    return decorate_wrapper

@decorate
@mock.patch('test.is_valid', new=mock_is_valid)
def test_patch():
    raise Exception()

---------- components: Tests messages: 365395 nosy: bmclarnon priority: normal severity: normal status: open title: Incorrect error handling in unittest.mock type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 12:40:25 2020 From: report at bugs.python.org (Christophe Nanteuil) Date: Tue, 31 Mar 2020 16:40:25 +0000 Subject: [New-bugs-announce] [issue40127] Documentation of SSL library Message-ID: <1585672825.27.0.231879770193.issue40127@roundup.psfhosted.org>

New submission from Christophe Nanteuil :

For the ssl.create_default_context() function, the documentation states that "if cafile, capath and cadata are None, the function *can* choose to trust the system's default CA certificates instead". This statement is not clear: if the values are None, there is no choice and the only elements available are the system's default CAs. AFAIK, if the values are not None, it will not fall back to the system's default CAs even if the given CA does not match. I propose to change the end of the sentence to "the function trusts the system's default CA certificates instead".

---------- assignee: docs at python components: Documentation messages: 365398 nosy: Christophe Nanteuil, docs at python priority: normal severity: normal status: open title: Documentation of SSL library versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 13:16:48 2020 From: report at bugs.python.org (Arthur) Date: Tue, 31 Mar 2020 17:16:48 +0000 Subject: [New-bugs-announce] [issue40128] IDLE Show completions/ autocomplete pop-up not working Message-ID: <1585675008.66.0.405710178889.issue40128@roundup.psfhosted.org>

New submission from Arthur :

Hi, I am new to Python and am learning it now. In the course the instructor uses autocomplete, and when he writes "math." he gets an autocomplete menu. On my device, for some reason, it is not working. I also tried the key combination to force the pop-up, but nothing happens. I am running macOS Catalina 10.15.2 and IDLE 3.8.2.

P.S. I reinstalled IDLE, nothing changed.
---------- assignee: terry.reedy components: IDLE messages: 365401 nosy: darthur90, terry.reedy priority: normal severity: normal status: open title: IDLE Show completions/ autocomplete pop-up not working type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 16:37:44 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 31 Mar 2020 20:37:44 +0000 Subject: [New-bugs-announce] [issue40129] Add test classes for custom __index__, __int__, __float__ and __complex__ Message-ID: <1585687064.58.0.0478382370232.issue40129@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

There are many tests for int-like objects (which implement custom __index__ or __int__ methods) in different files. There are fewer tests for float-like objects (with the __float__ method). There are even tests for complex-like objects (with the __complex__ method) in different files.

To simplify maintaining old tests and adding new tests, I propose to add general test classes with custom methods __index__, __int__, __float__ or __complex__ which return the specified value or raise an exception. There is already one similar general class: FakePath.

It could be done using unittest.mock, but Mock objects are much more complex and have too much magic, so they can behave differently from simpler classes with a single special method.

---------- components: Tests messages: 365422 nosy: mark.dickinson, serhiy.storchaka priority: normal severity: normal status: open title: Add test classes for custom __index__, __int__, __float__ and __complex__ type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 17:15:43 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 31 Mar 2020 21:15:43 +0000 Subject: [New-bugs-announce] [issue40130] Remove _PyUnicode_AsKind from private C API Message-ID: <1585689343.48.0.60884405546.issue40130@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

_PyUnicode_AsKind is exposed as private C API. It is only used in unicodeobject.c, where it is defined. Its name starts with an underscore, it is not documented, and it is not included in PC/python3.def (therefore it is not exported on Windows). It seems it is not used in third-party code (I have not found any occurrences on GitHub except CPython clones). Initially it was also used in Python/formatter_unicode.c, and I think that was the only reason for exposing it in the header. I think that now it can be removed.

The proposed PR removes _PyUnicode_AsKind from headers, makes it static, renames it, and changes its signature (since the kind, data pointer and length are all available at the caller site). It also includes minor cleanup and microoptimizations.
---------- components: C API messages: 365427 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Remove _PyUnicode_AsKind from private C API type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Mar 31 19:28:17 2020 From: report at bugs.python.org (Leron Gray) Date: Tue, 31 Mar 2020 23:28:17 +0000 Subject: [New-bugs-announce] [issue40131] Zipapp example has parameters in the wrong order Message-ID: <1585697297.66.0.208168236164.issue40131@roundup.psfhosted.org>

New submission from Leron Gray :

The second example listed here for zipapp has the parameters in the wrong order: https://docs.python.org/3/library/zipapp.html?highlight=zipapp#examples

It should be create_archive("myapp", "myapp.pyz") since the parameters are (source, target). The documentation for the function itself is correct though.

---------- assignee: docs at python components: Documentation messages: 365436 nosy: Leron Gray, docs at python priority: normal severity: normal status: open title: Zipapp example has parameters in the wrong order type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________